path: root/Source/TableDump.h
Commit message | Author | Age | Files | Lines
* Rewrite CSV import: | rowanbeentje | 2009-09-28 | 1 file | -6/+5
  - Replace the CSV parsing function (arrayForCSV:) with a new SPCSVParser class
  - Make speed improvements to SPCSVParser to achieve 1.9x faster parsing than the old arrayForCSV: function
  - Rewrite CSV imports to be performed as a streaming import, keeping memory usage much lower
  - CSV field mapping preview is now shown very early on in the import process, as soon as the first hundred rows are available for a preview
  - Progress bars are more consistent and accurate
  - CSV rows are grouped into batches of up to 50 (depending on line length) for import, falling back to one-query-per-row if errors occur. The current error reporting level is therefore maintained, but imports of non-erroring data are much faster.
  - Improve processing speed slightly
  - Fix some odd edge cases in CSV parsing
  This addresses issue #389.
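The batching strategy in the entry above can be illustrated with a short sketch. This is not the project's TableDump code: the table name, the pre-escaped row tuples, and the runQuery block (standing in for the real connection call) are all assumptions for illustration.

    #import <Foundation/Foundation.h>

    // Illustrative sketch only: group pre-escaped value tuples into multi-row
    // INSERTs of up to batchSize rows, and retry a failing batch row by row so
    // that per-row errors can still be reported.
    static void importRowTuples(NSArray *rowTuples, NSString *table, NSUInteger batchSize,
                                BOOL (^runQuery)(NSString *query))
    {
        NSUInteger i = 0;
        while (i < [rowTuples count]) {
            NSUInteger count = MIN(batchSize, [rowTuples count] - i);
            NSArray *batch = [rowTuples subarrayWithRange:NSMakeRange(i, count)];
            NSString *multiInsert = [NSString stringWithFormat:@"INSERT INTO %@ VALUES %@",
                                     table, [batch componentsJoinedByString:@", "]];
            if (!runQuery(multiInsert)) {
                // The grouped query failed: fall back to one query per row so the
                // offending row(s) can be identified and reported individually.
                for (NSString *tuple in batch) {
                    runQuery([NSString stringWithFormat:@"INSERT INTO %@ VALUES %@", table, tuple]);
                }
            }
            i += count;
        }
    }

Each element of rowTuples is assumed to already be a fully escaped "(...)" value list; varying the batch size with line length, as the commit describes, is omitted here.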
* • If the user chooses to open a SQL file larger than 1MB, SP asks for confirmation; furthermore, if a connection is available the user can choose 'Import' instead of loading the file into the Query Editor, for cases where a user invoked 'Open…' accidentally instead of 'Import…' | Bibiko | 2009-09-26 | 1 file | -2/+3
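A minimal sketch of the size check described above, assuming a simple helper name; the 1MB threshold comes from the commit message, everything else is illustrative.

    #import <Foundation/Foundation.h>

    // Assumed helper: returns YES if the chosen file is larger than 1MB, in which
    // case the caller would ask whether to open it in the editor or import it.
    static BOOL fileIsLargerThanOneMegabyte(NSString *path)
    {
        NSDictionary *attributes = [[NSFileManager defaultManager] attributesOfItemAtPath:path error:NULL];
        return [[attributes objectForKey:NSFileSize] unsignedLongLongValue] > 1024ULL * 1024ULL;
    }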
* Significantly improve export: | rowanbeentje | 2009-09-14 | 1 file | -4/+7
  - Rework CSV export to stream data, significantly reducing memory consumption and so increasing speed and stability when exporting large tables. By default safe/fast streaming is used, but a checkbox is available to select "low memory mode" full streaming, allowing export of any size table in theory. This addresses Issue #224.
  - Rework XML export to stream data in the same way, also significantly reducing memory usage and providing the option of using low memory mode.
  - Make SQL, CSV and XML export progress bars update more smoothly
  - When exporting the current browse view or custom query result, show an indeterminate progress bar when copying large resultsets to avoid the app appearing to hang
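The streaming export approach described above boils down to writing each row as soon as it is fetched instead of accumulating the whole result in memory. A rough sketch under assumed names (the nextRow block stands in for a streaming result object), not the actual export code:

    #import <Foundation/Foundation.h>

    // Illustrative only: rows are fetched one at a time and appended to the output
    // file immediately, so memory use stays roughly constant for any table size.
    static void exportRowsAsCSV(NSFileHandle *output, NSArray * (^nextRow)(void))
    {
        NSArray *row;
        while ((row = nextRow())) {
            // Real CSV export also applies field enclosure and escaping; omitted here.
            NSString *line = [[row componentsJoinedByString:@","] stringByAppendingString:@"\n"];
            [output writeData:[line dataUsingEncoding:NSUTF8StringEncoding]];
        }
    }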
* Refactor CSV/SQL import structure slightly, and rewrite SQL import: | rowanbeentje | 2009-08-31 | 1 file | -0/+2
  - SQL import now reads and processes files in full streaming mode, running queries as they are encountered
  - Memory usage during import is significantly reduced, and should stay within a few megabytes; the significant memory use remaining is for query logging
  - The progress bar more accurately represents progress and is shown at once (this addresses Issue #320)
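A simplified sketch of the full-streaming idea from the entry above: read the file in fixed-size chunks, peel complete statements off a small buffer, and run them as they appear. The semicolon split below is a stand-in; the project's own parsing (see the SPSQLParser entry further down) handles quoting and comments, which this sketch does not.

    #import <Foundation/Foundation.h>

    // Illustrative only: just the unprocessed remainder of the file is held in
    // memory at any time. Multi-byte characters split across chunk boundaries and
    // semicolons inside strings or comments are NOT handled here.
    static void importSQLFile(NSString *path, void (^runQuery)(NSString *query))
    {
        NSInputStream *stream = [NSInputStream inputStreamWithFileAtPath:path];
        [stream open];
        NSMutableString *buffer = [NSMutableString string];
        uint8_t chunk[32768];
        NSInteger bytesRead;
        while ((bytesRead = [stream read:chunk maxLength:sizeof(chunk)]) > 0) {
            NSString *piece = [[NSString alloc] initWithBytes:chunk length:bytesRead encoding:NSUTF8StringEncoding];
            if (piece) [buffer appendString:piece];
            NSRange split;
            while ((split = [buffer rangeOfString:@";"]).location != NSNotFound) {
                NSString *statement = [buffer substringToIndex:split.location];
                [buffer deleteCharactersInRange:NSMakeRange(0, split.location + 1)];
                if ([[statement stringByTrimmingCharactersInSet:
                        [NSCharacterSet whitespaceAndNewlineCharacterSet]] length]) {
                    runQuery(statement);   // execute each statement as it is encountered
                }
            }
        }
        [stream close];
    }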
* Change MCPStreamingResult to use a safer streaming mode by default - download all results as fast as possible from the server, to avoid blocking, but do so in a background thread to allow results processing to start as soon as data is available. Many thanks to Hans-Jörg Bibiko for assistance with this. | rowanbeentje | 2009-08-20 | 1 file | -0/+1
  - Add an option to the SQL export dialog to allow selection of the full-streaming method, with a warning that it may block table UPDATES/INSERTS.
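The "safer streaming mode" described above amounts to a producer/consumer arrangement: a background thread drains rows from the server as fast as it can, so the connection is never left blocked mid-result, while the caller processes rows as soon as they arrive. A generic sketch with assumed fetchRow/processRow blocks, not MCPStreamingResult's actual implementation:

    #import <Foundation/Foundation.h>
    #import <dispatch/dispatch.h>

    // Generic producer/consumer sketch: the producer downloads rows as fast as the
    // server sends them into a buffer; the consumer processes rows as soon as any
    // are available. fetchRow returns nil once the result set is exhausted.
    static void streamAndProcessRows(NSArray * (^fetchRow)(void), void (^processRow)(NSArray *row))
    {
        NSMutableArray *pending = [NSMutableArray array];
        NSCondition *condition = [[NSCondition alloc] init];
        __block BOOL downloadFinished = NO;

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSArray *row;
            while ((row = fetchRow())) {          // drain the server without waiting on processing
                [condition lock];
                [pending addObject:row];
                [condition signal];
                [condition unlock];
            }
            [condition lock];
            downloadFinished = YES;
            [condition signal];
            [condition unlock];
        });

        for (;;) {                                // process rows as soon as they arrive
            [condition lock];
            while ([pending count] == 0 && !downloadFinished) [condition wait];
            NSArray *row = [pending count] ? [pending objectAtIndex:0] : nil;
            if (row) [pending removeObjectAtIndex:0];
            BOOL done = downloadFinished && [pending count] == 0;
            [condition unlock];
            if (row) processRow(row);
            if (done) break;
        }
    }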
* Source tidy up and missing SVN properties. | stuconnolly | 2009-08-07 | 1 file | -8/+4
* Improve TablesList significantly: | rowanbeentje | 2009-07-28 | 1 file | -1/+0
  - If there are twenty or more tables, show a table quicksearch/filter at the top of the list, and update the rest of the code to match. This addresses issue #178.
  - Select tables and views alphabetically by the user's current locale (instead of the default MySQL "A B C a b c" ordering)
  - When adding or duplicating tables, insert them at the correct point
  - Fix a number of minor display bugs caused by incorrect interaction with the tables list caches
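The filtering and locale-aware sorting described above can be sketched as a small helper; the names here are illustrative, not the TablesList code.

    #import <Foundation/Foundation.h>

    // Illustrative helper: apply a case-insensitive quicksearch filter, then sort
    // by the user's current locale rather than raw byte order, so "apples" sorts
    // next to "Apples" instead of after "Zebras".
    static NSArray *visibleTableNames(NSArray *allTableNames, NSString *filterString)
    {
        NSMutableArray *visible = [NSMutableArray array];
        for (NSString *name in allTableNames) {
            if (![filterString length]
                || [name rangeOfString:filterString options:NSCaseInsensitiveSearch].location != NSNotFound) {
                [visible addObject:name];
            }
        }
        [visible sortUsingSelector:@selector(localizedCompare:)];
        return visible;
    }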
* Merge framework integration branch back to trunk. Summary of changes: | stuconnolly | 2009-07-21 | 1 file | -10/+7
  - Includes all custom code from the subclasses CMMCPConnection and CMMCPResult, meaning they have subsequently been removed from the project.
  - All previous Sequel Pro specific code in the above subclasses has been removed in favour of the delegate (currently set to TableDocument) informing the framework of such information.
  - All references to CMMCPConnection and CMMCPResult have subsequently been changed to MCPConnection and MCPResult.
  - Framework includes MySQL 5.1.36 client libraries and source headers.
  - Framework is now built as a 4-way (32/64 bit, i386/PPC arch) binary.
  - All import references to <MCPKit_bundled/MCPKit_bundled.h> have been changed to <MCPKit/MCPKit.h>.
  - New script 'build-mysql-client.sh' can be used to build the MySQL client libraries from the MySQL source. See the script's header for a list of available options, or run it with no arguments to display its usage.
  Note that there are still a few changes to be made to the framework with regard to removing Sequel Pro specific calls to the delegate. These can however be made later on, as they have no effect on functionality and are merely design changes. Also, note that any future development done on the framework should be made as 'generic' as possible, with no Sequel Pro specific references. This should allow the framework to be integrated into another project without the need for SP specific code.
* Make the DBView window the document window. This allows the document to be closed when the window is closed, freeing the document's memory. | rowanbeentje | 2009-07-15 | 1 file | -2/+0
  - Update a number of dealloc methods to include more retained memory, and to avoid releasing non-retained memory
  - Remove notification observers and delegates where appropriate to avoid issues after document closing
  - Fix a couple of memory leaks
  - Support window cascading for all windows past the first, using the first window as the autosave window
* Update the import/export progress sheet title to reflect the current activity | rowanbeentje | 2009-07-06 | 1 file | -0/+1
  - Fix multiple-table CSV and XML export when a view is selected - data for the view is now correctly exported
* Correctly SQL export views with interdependencies on other views or tables, resolving Issue #313 | rowanbeentje | 2009-06-27 | 1 file | -0/+1
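One generic way to export views whose definitions reference other views or tables is to order them by dependency first. The depth-first ordering sketch below illustrates that idea; it is not necessarily how the commit above resolves the ordering, and the dependency map is an assumed input.

    #import <Foundation/Foundation.h>

    // Generic depth-first ordering sketch: dependencies maps each view name to the
    // views/tables its definition references. Emitting referenced objects before
    // the views that use them yields a creation order that imports cleanly.
    // (View definitions cannot be mutually circular, so cycles are not handled.)
    static void appendInOrder(NSString *name, NSDictionary *dependencies,
                              NSMutableSet *visited, NSMutableArray *order)
    {
        if ([visited containsObject:name]) return;
        [visited addObject:name];
        for (NSString *dependency in [dependencies objectForKey:name]) {
            appendInOrder(dependency, dependencies, visited, order);
        }
        [order addObject:name];
    }

    static NSArray *creationOrder(NSDictionary *dependencies)
    {
        NSMutableSet *visited = [NSMutableSet set];
        NSMutableArray *order = [NSMutableArray array];
        for (NSString *name in dependencies) appendInOrder(name, dependencies, visited, order);
        return order;
    }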
* Added schema export to a basic Graphviz dot file | mtvee | 2009-06-05 | 1 file | -0/+1
* More header updates for source files, including the Subversion Id property. | stuconnolly | 2009-05-19 | 1 file | -0/+2
* Part 4 of merge from 'avenjamin' branch into trunk: committing Source | avenjamin | 2009-04-10 | 1 file | -1/+12
* Fixed bug where exporting the current table would use the field terminator, enclosure, escape and line ending characters from the "export multiple tables" dialog instead | avenjamin | 2009-03-27 | 1 file | -1/+2
* Alter the open panel to recognise .csv and .sql extensions on selected files and automatically change the format dropdown to match | rowanbeentje | 2009-03-04 | 1 file | -0/+3
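The behaviour above can be sketched as a tiny mapping from file extension to a format popup index; the index values here are assumptions for illustration, not the actual values used in the interface.

    #import <Foundation/Foundation.h>

    // Assumed mapping: pick a format popup index from the chosen file's extension,
    // or return -1 to leave the current selection untouched.
    static NSInteger importFormatIndexForPath(NSString *path)
    {
        NSString *extension = [[path pathExtension] lowercaseString];
        if ([extension isEqualToString:@"sql"]) return 0;   // assumed: index 0 = SQL
        if ([extension isEqualToString:@"csv"]) return 1;   // assumed: index 1 = CSV
        return -1;
    }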
* Visible improvements in this build: | rowanbeentje | 2009-02-18 | 1 file | -2/+2
  - Significantly reduce the queries that have to be performed, improving lag - especially over slow connections (Issue #118; see new controller info under headline code changes).
  - Fix Issue #117 properly (export numeric quoting - we now have access to column types and so can quote appropriately).
  - Fix Issue #145 (loss of unsigned/null/default attributes when reordering columns).
  - Fix Issue #90 (support for filtering DECIMAL column types).
  - Improve table scrolling speed when the table contains long items. (Added an NSFormatter to automatically truncate strings > 150 chars for display purposes only.)
  - Improved SQL compatibility - for example /* C style comments */ are now correctly ignored in imports and custom queries.
  - Add text and symbols emphasising that the table info pane / status view row count is an approximation (partially addresses Issue #141).
  - Fix a major memory leak whenever opening or scrolling tables containing text/blob data.
  - SQL import is now faster (the SQL parsing part is 3x faster).
  - Speed up SQL export (1.5x faster for numeric data; 1.1x faster for string data) and slightly speed up CSV export (~1.1x faster).
  - Display sizes on the status view using the byte size formatter, as per the table info pane.
  Headline code changes:
  - Add a new NSMutableString subclass, SPSQLParser. See the header file for documentation and an overview, but in short it's a centralised place for SQL parsing. It centralises and improves parsing, and improves comment and quoting support. Despite the improved featureset this is also faster than the previous distributed implementations - for example, when used to replace the old splitQueries:, a > 3x speedup.
  - Implement a new controller which handles a structure and status cache for the current table, and provides structure parsing for specified tables. This cache is now used throughout the code, reducing the queries that have to be performed and providing additional information about the table structure for use; I think it also improves column type format slightly.
  - The table info pane and the status view now draw all their data from the cache.
  Tweaks:
  - Table encoding is now detected directly instead of being derived from the collation - increased accuracy, and copes with the DEFAULT encoding.
  - Comments and formatting cleaned up in bits I was working on, obviously.
  - A couple of methods - particularly [tablesListInstance table] and [tableDocument encoding] - have been renamed to avoid conflicts and fix code warnings.
  Future improvements now possible:
  - As we now have access to column types and other information, we can provide per-type behaviour where desired.
  - The table parsing doesn't currently pull out comments or table indices, together with one or two other attributes. Some of this would be useful for display; some, such as indices, could be used to draw the table structure view as long as we're happy discarding a couple of columns (ie cardinality!)
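One of the items above mentions an NSFormatter that truncates long strings purely for display. A minimal formatter in that spirit is sketched below; the class name and the ellipsis suffix are illustrative, while the 150-character limit comes from the commit message.

    #import <Foundation/Foundation.h>

    // Display-only truncation: the underlying cell value is never modified, only
    // the string handed to the view for drawing is shortened.
    @interface TruncatingDisplayFormatter : NSFormatter
    @end

    @implementation TruncatingDisplayFormatter

    - (NSString *)stringForObjectValue:(id)anObject
    {
        NSString *displayString = [anObject description];
        if ([displayString length] <= 150) return displayString;
        return [[displayString substringToIndex:150] stringByAppendingString:@"…"];
    }

    - (BOOL)getObjectValue:(id *)anObject forString:(NSString *)string errorDescription:(NSString **)error
    {
        if (anObject) *anObject = string;   // pass edited text straight through
        return YES;
    }

    @end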
* #144: the value of the "import format" popup button is now remembered in the import panel | abhibeckert | 2009-01-14 | 1 file | -1/+1
* Added cancel button to the import/export progress sheet. Also added importing on a separate thread. | mltownsend | 2009-01-04 | 1 file | -13/+2
* MERGED r262:266 from branches/stuart02 to trunk to include new project structure. | stuconnolly | 2008-12-10 | 1 file | -0/+156