- Prepare for the implementation of PDF and HTML export by hiding the export options box for export types that don't need it.
- Fix some spacing issues on the export dialog.
- Create a new SPCategoryAdditions header that is included in the app's precompiled header, making all category additions available to all classes (see the sketch below).
- Update strings files.
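
A minimal sketch of the umbrella-header approach described above, assuming the individual category headers are simply gathered into SPCategoryAdditions.h and imported from the precompiled header; the category header names and the .pch name are illustrative assumptions, not necessarily the project's actual files.

    // SPCategoryAdditions.h -- umbrella header collecting the individual
    // category headers in one place (the headers named here are examples).
    #import "SPStringAdditions.h"
    #import "SPArrayAdditions.h"
    #import "SPDataAdditions.h"

    // In the app's precompiled header (e.g. "Sequel Pro.pch"), importing the
    // umbrella header makes every category method visible to every class
    // without per-file imports:
    #ifdef __OBJC__
        #import <Cocoa/Cocoa.h>
        #import "SPCategoryAdditions.h"
    #endif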
|
- If a parsed row in the CSV file doesn't have the same number of columns as the first row, fill the missing columns with SPNotLoaded so that, during import, the missing data can be replaced by the table column's DEFAULT value
- Fixed a small issue in the field mapper sheet so it displays the correct tooltip for default values
• SPTableData
- ATTENTION: changed the object returned for the key 'default': if its value is NULL, a [NSNull null] object is now returned
- Changed call sites to handle this [NSNull null] object (must be checked; see the sketch below)
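
Since the 'default' key can now come back as [NSNull null] rather than a string, call sites have to check for it before treating it as text. A minimal sketch, assuming the per-column data from SPTableData is an NSDictionary; the helper name below is purely illustrative:

    #import <Foundation/Foundation.h>

    // columnDefinition is assumed to be one of the per-column dictionaries
    // returned by SPTableData; only the 'default' handling is shown.
    static NSString *displayDefaultValue(NSDictionary *columnDefinition)
    {
        id defaultValue = [columnDefinition objectForKey:@"default"];

        // A NULL default is now represented by [NSNull null], not a string.
        if (defaultValue == nil || [defaultValue isKindOfClass:[NSNull class]]) {
            return @"NULL";
        }

        return (NSString *)defaultValue;
    }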
|
crashes: prevent multiple disconnects, add more checks, cancel current queries, and add a tiny delay to allow MySQL cleanup.
- Alter MCPStreamingResult to no longer return a retained instance, setting up correct result disposal via autorelease, and change callers to retain the result as soon as they receive it (see the sketch below).
- Review and change a number of local variables that shadowed other local or global variables.
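
Under manual reference counting, the ownership change above means a streaming result now arrives autoreleased, so a caller that uses it beyond the current autorelease pool must retain it immediately and release it when done. A rough sketch; the connection variable and the streamingQueryString:/fetchNextRowAsArray method names are assumptions about the MCPKit wrappers:

    // mySQLConnection is assumed to be the document's MCPConnection instance.
    MCPStreamingResult *streamingResult =
        [[mySQLConnection streamingQueryString:@"SELECT * FROM `some_table`"] retain];

    NSArray *row;
    while ((row = [streamingResult fetchNextRowAsArray])) {
        // ... process the row ...
    }

    [streamingResult release];   // balance the explicit retain taken on receipt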
|
release builds, including a large number of 64-bit compatibility upgrades and tweaks (an illustrative sketch of one such tweak follows below)
- Upgrade RegexKitLite to 3.3
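
The 64-bit work itself is not reproduced here; as a generic illustration only, one common kind of compatibility tweak is switching int/unsigned int counters to NSInteger/NSUInteger and matching the format specifier with a cast:

    #import <Foundation/Foundation.h>

    // Generic example of a 64-bit-safe count: NSUInteger scales with the
    // architecture, and casting to unsigned long matches the %lu specifier
    // on both 32-bit and 64-bit builds.
    static NSString *rowCountDescription(NSArray *rows)
    {
        NSUInteger rowCount = [rows count];
        return [NSString stringWithFormat:@"%lu rows", (unsigned long)rowCount];
    }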
|
- Replace the CSV parsing function (arrayForCSV:) with a new SPCSVParser class
- Make speed improvements to SPCSVParser, achieving parsing around 1.9x faster than the old arrayForCSV: function
- Rewrite CSV imports to be performed as a streaming import, keeping memory usage much lower
- The CSV field mapping preview is now shown very early in the import process, as soon as the first hundred rows are available
- Progress bars are more consistent and accurate
- CSV rows are grouped into batches of up to 50 (depending on line length) for import, falling back to one-query-per-row if errors occur (see the sketch below). The current error reporting level is therefore maintained, but imports of non-erroring data are much faster.
- Improve processing speed slightly
- Fix some odd edge cases in CSV parsing
This addresses issue #389.
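
A rough sketch of the batching strategy described above: up to 50 parsed rows are combined into one multi-row INSERT, and if that batch errors the rows are re-run individually so failures can still be reported per row. The MCPConnection methods used (queryString:, queryErrored) and all variable names are assumptions for illustration, not the actual import code:

    // rowValueStrings is assumed to hold already-escaped value groups such as
    // ('a','b',NULL), one string per parsed CSV row.
    static void importBatch(MCPConnection *connection, NSString *tableName,
                            NSString *columnList, NSArray *rowValueStrings)
    {
        NSString *batchQuery =
            [NSString stringWithFormat:@"INSERT INTO %@ (%@) VALUES %@",
                tableName, columnList,
                [rowValueStrings componentsJoinedByString:@", "]];
        [connection queryString:batchQuery];

        if (![connection queryErrored]) return;

        // The batch failed; retry row by row so each error can be reported
        // against the specific CSV row that caused it.
        for (NSString *rowValues in rowValueStrings) {
            [connection queryString:
                [NSString stringWithFormat:@"INSERT INTO %@ (%@) VALUES %@",
                    tableName, columnList, rowValues]];
            if ([connection queryErrored]) {
                // ... record the error for this row ...
            }
        }
    }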