path: root/Source/SPCSVParser.m
* • CSV Import — Bibiko, 2010-03-22 (1 file changed, -2/+4 lines)
  - If a parsed row in the CSV file doesn't have the same number of columns as the first row, fill the missing columns with SPNotLoaded so that during import the missing data can be replaced by the table column's DEFAULT value (see the sketch after this entry).
  - Fixed a tiny issue so the field mapper sheet displays the correct tooltip for default values.
  • SPTableData
  - ATTENTION: changed the object returned for the key 'default': if its value is NULL, it now returns a [NSNull null] object.
  - Changed instances to handle this [NSNull null] object (must be checked).
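A minimal sketch of the row-padding idea, assuming the SPNotLoaded placeholder is reached through a +notLoaded singleton accessor (naming assumed here; check SPNotLoaded.h) — not the actual importer code:

    #import <Foundation/Foundation.h>
    #import "SPNotLoaded.h"

    // Pad a short parsed CSV row out to the column count of the first row.
    // The SPNotLoaded placeholder marks cells whose values should later be
    // replaced by the table column's DEFAULT value during import.
    static void SPPadRowToColumnCount(NSMutableArray *row, NSUInteger columnCount)
    {
        while ([row count] < columnCount) {
            [row addObject:[SPNotLoaded notLoaded]];
        }
    }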
* Make a number of changes to attempt to improve disconnection/quit crashes — rowanbeentje, 2010-03-16 (1 file changed, -3/+2 lines)
  - Prevent multiple disconnects, add more checks, cancel current queries, and add a tiny delay to allow MySQL cleanup.
  - Alter MCPStreamingResult to no longer return a retained instance, setting up correct result disposal on autorelease but changing callers to retain as soon as they receive (see the sketch after this entry).
  - Review and change a number of local variables shadowing/shielding other local or global variables.
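A short manual-reference-counting sketch of the new caller-side contract; the connection object and the streamingQueryString: method name are assumed here for illustration:

    // The streaming result is now returned autoreleased, so a caller that
    // needs it beyond the current autorelease pool must retain it as soon
    // as it is received, and balance that retain itself when finished.
    MCPStreamingResult *result = [[connection streamingQueryString:query] retain];
    // ... stream and process rows from the result ...
    [result release];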
* Upgrade Sequel Pro to be compiled as a 3-way PPC/i386/x86_64 binary for release builds — rowanbeentje, 2010-01-09 (1 file changed, -6/+6 lines)
  - Includes a large number of 64-bit compatibility upgrades and tweaks.
  - Upgrade RegexKitLite to 3.3.
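For reference, the release build setting such a change implies looks roughly like this (an illustrative Xcode xcconfig fragment, not the project's actual configuration files):

    // Build all three architectures into one universal binary for release.
    ARCHS = ppc i386 x86_64
    ONLY_ACTIVE_ARCH = NO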
* Rewrite CSV import — rowanbeentje, 2009-09-28 (1 file changed, -0/+642 lines)
  - Replace the CSV parsing function (arrayForCSV:) with a new SPCSVParser class.
  - Make speed improvements to SPCSVParser, achieving 1.9x faster parsing than the old arrayForCSV: function.
  - Rewrite CSV imports to be performed as a streaming import, keeping memory usage much, much lower.
  - The CSV field mapping preview is now shown very early in the import process, as soon as the first hundred rows are available.
  - Progress bars are more consistent and accurate.
  - CSV rows are grouped into batches of up to 50 (depending on line length) for import, falling back to one-query-per-row if errors occur; the current error reporting level is therefore maintained, but imports of non-erroring data are much, much faster (see the sketch after this entry).
  - Improve processing speed slightly.
  - Fix some odd edge cases in CSV parsing.
  This addresses issue #389.
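The batch-with-fallback strategy can be pictured as below; SPBuildInsertQueryForRows() and the connection's queryString:/queryErrored methods are hypothetical stand-ins, not the importer's real helpers:

    #import <Foundation/Foundation.h>
    #import "MCPConnection.h"  // assumed MySQL connection wrapper

    // Hypothetical helper: builds one multi-row "INSERT ... VALUES (...), (...)" query.
    extern NSString *SPBuildInsertQueryForRows(NSArray *rows);

    static void SPImportParsedRows(NSArray *parsedRows, MCPConnection *connection)
    {
        static const NSUInteger kBatchSize = 50;
        NSUInteger rowCount = [parsedRows count];

        for (NSUInteger i = 0; i < rowCount; i += kBatchSize) {
            NSRange batchRange = NSMakeRange(i, MIN(kBatchSize, rowCount - i));
            NSArray *batch = [parsedRows subarrayWithRange:batchRange];

            // Try the whole batch as a single multi-row INSERT first.
            [connection queryString:SPBuildInsertQueryForRows(batch)];
            if (![connection queryErrored]) continue;

            // The batch failed: retry one query per row, so any error can
            // still be reported against the individual offending row.
            for (NSArray *row in batch) {
                [connection queryString:SPBuildInsertQueryForRows([NSArray arrayWithObject:row])];
                if ([connection queryErrored]) {
                    NSLog(@"CSV import error on row: %@", row);
                }
            }
        }
    }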