- fixed premature releasing of mapper settings
- fixed boolean binding for displaying "1 of first 100 records"
- added further GUI elements (not yet activated)
- sheet dimensions are now auto-saved
- bound the ⇢ and ⇠ keystrokes to the row stepper (see the sketch below)
- renamed some stuff
- added clarification notes
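
A minimal sketch of how such a key binding can be done in Cocoa; the surrounding view subclass and the rowStepper outlet are invented, not Sequel Pro's actual code:

    // In a hypothetical view subclass; rowStepper is an assumed
    // IBOutlet NSStepper * ivar.
    - (void)keyDown:(NSEvent *)theEvent
    {
        NSString *chars = [theEvent charactersIgnoringModifiers];
        unichar key = [chars length] ? [chars characterAtIndex:0] : 0;
        if (key == NSRightArrowFunctionKey || key == NSLeftArrowFunctionKey) {
            [rowStepper setIntValue:[rowStepper intValue]
                + (key == NSRightArrowFunctionKey ? 1 : -1)];
            // deliver the stepper's action so the record display updates
            [NSApp sendAction:[rowStepper action]
                           to:[rowStepper target]
                         from:rowStepper];
        } else {
            [super keyDown:theEvent];
        }
    }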

- changed the way to choose whether a source field should be imported or not, by introducing a new table column, 'operators'
- clicking on the 'operators' header toggles all operators between 'Import' and 'Do not import'
- added tooltips for each table cell; if the file's first line contains the headers, they are shown in the tooltips as well
- added a checkbox "First line contains field names", since it will be clear in this pane whether a file has a header line or not (will be synced with the prefs)
- added the possibility to choose the import method: INSERT INTO or REPLACE INTO (see the sketch after the notes)
• deleted all old field mapper stuff from TableDump and DBView.xib
Notes:
- tests are needed to make sure that this change does not cause mismatches while importing
- the symbols for 'Do (not) import' are tentative - maybe use images
- a further import method, UPDATE, plus an operator '=' will be added soon
- the chance to add a new global source variable will come soon
- displaying of source field types will come soon
- semi-automatic matching of source field names and header names will come soon
- the GUI needs some improvements afterwards
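
A hedged illustration of the difference between the two methods (the table and values are invented): REPLACE INTO behaves like INSERT INTO unless the new row collides with an existing PRIMARY KEY or UNIQUE value, in which case the old row is deleted before the new one is inserted.

    #import <Foundation/Foundation.h>

    int main(void)
    {
        @autoreleasepool {
            // Only the leading keyword changes with the chosen method.
            NSString *importMethod = @"REPLACE";   // or @"INSERT"
            NSString *query = [NSString stringWithFormat:
                @"%@ INTO `contacts` (`id`, `name`) VALUES (1, 'Ann')",
                importMethod];
            // With REPLACE, an existing row with id = 1 is deleted first;
            // with INSERT, that row would raise a duplicate-key error.
            NSLog(@"%@", query);
        }
        return 0;
    }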

inspection of leaks and Clang static analysis.

functions (off by default). Remove the forked "...Sql..." functions, as they're now duplicates, and switch CustomQuery to using the original methods.
- TableDump imports can now process DELIMITERs correctly as a result (see the sketch after this list).
- Alter the TableDump display of tables etc. to use TablesList as the source of information, and use cached lists where appropriate for a small speedup. This also means we gain consistent sorting.
- Display procedures and functions in the toggleable list when exporting as SQL
- Tweak the procedure and function export to only export selected items, and also to respect the "export drop syntax" and "export create syntax" checkboxes
- Fix a crash when removing items from the TablesList resulted in an erroneous selection, by deselecting all rows before deleting (and preemptively applying the same fix to TableContent)
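
A simplified sketch (not the real TableDump parser) of why DELIMITER handling matters when splitting a dump into queries: the separator itself can be redefined mid-file, so naive splitting on ';' breaks on trigger and procedure bodies. For brevity this version assumes a query's terminating delimiter always ends its line.

    #import <Foundation/Foundation.h>

    // Split dump text into queries, honouring DELIMITER redefinitions.
    static NSArray *SPSplitQueries(NSString *dump)
    {
        NSMutableArray *queries = [NSMutableArray array];
        NSMutableString *buffer = [NSMutableString string];
        NSString *delimiter = @";";

        for (NSString *line in [dump componentsSeparatedByString:@"\n"]) {
            if ([[line uppercaseString] hasPrefix:@"DELIMITER "]) {
                // the DELIMITER command changes the separator and is
                // itself never sent to the server
                delimiter = [[line substringFromIndex:10]
                    stringByTrimmingCharactersInSet:
                        [NSCharacterSet whitespaceCharacterSet]];
                continue;
            }
            [buffer appendString:line];
            if ([buffer hasSuffix:delimiter]) {
                [queries addObject:[buffer substringToIndex:
                    [buffer length] - [delimiter length]]];
                [buffer setString:@""];
            } else {
                [buffer appendString:@"\n"];
            }
        }
        return queries;
    }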

• if an error occurs while retrieving column or index data in the Structure pane, reset the Structure pane to a stable state, display the error message, and reload the Tables List table, since it is very likely that SP is trying to retrieve data from a table which no longer exists
• fixed spelling of "occurred"
Note: NOT YET DONE: if the actual underlying table in the Structure view was deleted or renamed by another mysql event and the user tries to add or change a field, suppress this attempt safely

linebreaks if they are within a quoted cell, improving compatibility with other applications (notably Excel)

- Fix incorrect uses of [NSString stringWithFormat:] with preconstructed strings and no arguments in SPUserManager
- To fix display issues, replace NSBeginAlertSheet (which includes automatic sprintf expansion of the message) with a safely-escaped SPBeginAlertSheet in many files
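
A hedged illustration of the bug class being fixed (the strings here are invented): a preconstructed string passed as the format argument has every '%' interpreted as the start of a conversion specifier, which reads garbage varargs and garbles the display. NSBeginAlertSheet's message parameter is expanded the same way, hence the escaped SPBeginAlertSheet wrapper.

    // Server-supplied text may legally contain '%'.
    NSString *errorText = @"Access denied for pattern 'foo%bar'";

    // Wrong: errorText is itself used as the format string, so "%ba"
    // is parsed as a conversion specifier.
    NSString *bad  = [NSString stringWithFormat:errorText];

    // Safe: route the text through an explicit %@ placeholder.
    NSString *good = [NSString stringWithFormat:@"%@", errorText];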

- added support for dumping functions. If no errors are found, Issue 517 can be closed.

- included export for procedures. Need to add support for functions and to test on different databases.

changes of the accessory view (i.e. avoid selected items disappearing)

I've moved the export triggers part after the [streamResult release], so we
can export triggers for empty tables too.

First draft of trigger export.
Known issues:
- It currently exports triggers only for tables that have some content in them. If I move the trigger-writing part outside of the if (rowCount) block, the SHOW TRIGGERS query never finishes (see the sketch below for a likely cause).
- Sequel Pro complains when it tries to import the same file again. The error message is related to the DELIMITER command. Still, the triggers seem to be present. This might be a different issue with Sequel Pro.
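
A plausible explanation for the first issue, sketched with the raw MySQL C API that MCPKit wraps (an assumption about the cause, not a confirmed diagnosis): an unbuffered, streaming result set keeps the connection busy until every row has been fetched and the result freed, so a second query such as SHOW TRIGGERS issued on the same connection before that point blocks or fails.

    #include <mysql.h>

    static void dump_table_then_triggers(MYSQL *conn)
    {
        mysql_query(conn, "SELECT * FROM `big_table`");
        MYSQL_RES *res = mysql_use_result(conn); /* unbuffered/streaming */

        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)) != NULL) {
            /* ... write the row to the dump file ... */
        }
        mysql_free_result(res);  /* the connection is only now free again */

        /* Safe here; issued before mysql_free_result() this query would
           block, matching "SHOW TRIGGERS never finishes" above. */
        mysql_query(conn, "SHOW TRIGGERS");
    }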

release builds, including a large number of 64-bit compatibility upgrades and tweaks
- Upgrade RegexKitLite to 3.3

to avoid binary-mode result issues with certain versions of MySQL (including 4.1.14). This should address Issue #509.
- TableDocument now requests the server version string from MCPConnection, aiding caching

memory leaks and fixing a couple of over-releases

This completes the conversion of all constants in SPConstants to externs.
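
A minimal sketch of the pattern (the constant here is invented, not one of the real SPConstants entries): the header declares the constant extern, the implementation defines it once, and call sites get compile-time checking that a literal @"..." key would not provide.

    // SPConstants.h
    extern NSString *const SPExampleRowHeightKey;   // hypothetical key

    // SPConstants.m
    NSString *const SPExampleRowHeightKey = @"ExampleRowHeight";

    // Call site: a typo in the identifier is a compiler error, whereas
    // a typo inside a string-literal key would just fail silently.
    // [prefs integerForKey:SPExampleRowHeightKey];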

data rows or slow connections
- Remove a debug NSLog on "Copy as SQL insert"

to "Do not import"

- Exports are now threaded, allowing use of other windows during the export, and can be cancelled (which deletes the partially written export file), including use of the new query cancellation support (a threading sketch follows this list).
- Correct the text of XML exports and improve progress feedback for XML exports.
- Fix .dot export of tables including foreign keys to use the new foreign key formats.
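
A hedged sketch of the general pattern (all names are invented, not the actual export code): the export loop runs on a detached thread, polls a cancellation flag set from the main thread, and deletes the partially written file if it stops early.

    #import <Foundation/Foundation.h>

    @interface SPExportSketch : NSObject {
        volatile BOOL exportCancelled;
        NSString *exportPath;
    }
    @end

    @implementation SPExportSketch

    - (void)startExport
    {
        exportCancelled = NO;
        [NSThread detachNewThreadSelector:@selector(exportThread)
                                 toTarget:self
                               withObject:nil];
    }

    - (void)cancelExport
    {
        exportCancelled = YES;   // polled by the worker thread
    }

    - (BOOL)writeNextChunk
    {
        return NO;   // stub: write one batch, return YES while data remains
    }

    - (void)exportThread
    {
        @autoreleasepool {
            while (!exportCancelled && [self writeNextChunk]) ;
            if (exportCancelled) {
                // remove the partially written export file
                [[NSFileManager defaultManager] removeItemAtPath:exportPath
                                                           error:NULL];
            }
        }
    }

    @end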

consistent across all table views. Also includes live updating when the preference is changed, as well as its implementation in the query console, process list and variables table views.

tied to table list selection. Much faster selection of the table to import into if, e.g., the content view is selected. Fixes weird crashes.
- No longer display views as import targets
- When one import has been cancelled, still allow new imports
- Improve error reporting
- No longer re-sort table/view/etc names returned by TablesList, as the preferred order is already used for display and the default compare: would revert it

queries, check that the current connection is active and, if not, bail out of the method.

dump file
- this should simplify the loading of such a file in other text editors
- to avoid writing that BOM, one can add the boolean key "NoBOMforSQLdumpFile" to Sequel Pro's preference plist and set it to YES (see the sketch below)
Note: If it turns out that this BOM causes problems, a checkbox for it could be added to the NSSavePanel later on.
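
A short sketch of what writing the BOM amounts to (the file handle is invented; the preference key is the one named above): the UTF-8 byte order mark is the three bytes EF BB BF written before any other output.

    #import <Foundation/Foundation.h>

    static void SPWriteBOMIfWanted(NSFileHandle *dumpFileHandle)
    {
        const unsigned char bom[3] = {0xEF, 0xBB, 0xBF};
        if (![[NSUserDefaults standardUserDefaults]
                boolForKey:@"NoBOMforSQLdumpFile"]) {
            [dumpFileHandle writeData:[NSData dataWithBytes:bom length:3]];
        }
    }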

issues such as the one fixed in revision 1419. All future preference usage should be done using these constants.

mainThread to avoid crashes if SP is displaying the data in the Content Browser into which the data will be imported.
[It sometimes occurred that the current table was to be updated but no data were available yet, due to threading.]
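
A minimal sketch of the usual Cocoa idiom for this (the receiver and selector are invented): work triggered from the import thread is marshalled onto the main thread, and waitUntilDone:YES makes the importer pause until the table has actually been updated.

    // Called from the import thread; updates the content view safely.
    [tableContentInstance performSelectorOnMainThread:@selector(reloadTable:)
                                           withObject:nil
                                        waitUntilDone:YES];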

values after re-invoking it
- store the accessory settings in SP's preferences

- now the parser is initialized with the import dialog's settings (this had been unintentionally deleted)

functionality:
- Growls are now only shown by default if they are not fired from the frontmost window
- Long-running tasks (>3 secs) will still Growl
- Clicking on a Growl will now bring the associated window to the front (see the sketch below)
This addresses the original concerns of Issue #98.
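
A hedged sketch using the public Growl API of the era (the context object and the window-lookup helper are invented): a clickContext passed along with the notification is handed back to the Growl delegate when the user clicks it, which is the hook for bringing the right window forward.

    #import <Growl/Growl.h>

    // Posting: attach context that lets us find the window again.
    - (void)notifyImportFinished:(id)windowContext
    {
        [GrowlApplicationBridge notifyWithTitle:@"Import Finished"
                                    description:@"Import complete"
                               notificationName:@"Import Finished"
                                       iconData:nil
                                       priority:0
                                       isSticky:NO
                                   clickContext:windowContext];
    }

    // Growl delegate callback: bring the associated window forward.
    - (void)growlNotificationWasClicked:(id)clickContext
    {
        NSWindow *win = [self windowForContext:clickContext]; // invented helper
        [win makeKeyAndOrderFront:self];
    }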

- Fix the Growl notification prefs message as well as making the dialog a sheet.
- Re-run genstrings to update localizable.strings and also remove use of multiple comments for a single string.

- Replace the CSV parsing function (arrayForCSV:) with a new SPCSVParser class
- Make speed improvements to SPCSVParser to achieve 1.9x faster parsing than the old arrayForCSV: function
- Rewrite CSV imports to be performed as a streaming import, keeping memory usage much much lower
- CSV field mapping preview is now shown very early on in the import process, as soon as the first hundred rows are available
- Progress bars are more consistent and accurate
- CSV rows are grouped into batches of up to 50 (depending on line length) for import, falling back to one query per row if errors occur (see the sketch after this list). The current error reporting level is therefore maintained, but imports of non-erroring data are much much faster.
- Improve processing speed slightly
- Fix some odd edge cases in CSV parsing
This addresses issue #389.
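
A schematic sketch of the batching strategy (the connection protocol and all names are invented): rows are first sent as one multi-row INSERT; if the batch errors, each row is retried individually so failures can be reported per row.

    #import <Foundation/Foundation.h>

    // Stand-in for the real connection object.
    @protocol SPQueryRunner
    - (BOOL)runQuery:(NSString *)query;
    @end

    static void SPImportBatch(id <SPQueryRunner> connection,
                              NSString *table, NSString *columnList,
                              NSArray *batchValues,   // e.g. @"(1,'Ann')"
                              NSMutableString *errorLog)
    {
        NSString *batch = [NSString stringWithFormat:
            @"INSERT INTO `%@` (%@) VALUES %@",
            table, columnList, [batchValues componentsJoinedByString:@","]];
        if ([connection runQuery:batch]) return;

        // Fall back to one query per row to pinpoint the failing rows.
        for (NSString *values in batchValues) {
            NSString *single = [NSString stringWithFormat:
                @"INSERT INTO `%@` (%@) VALUES %@", table, columnList, values];
            if (![connection runQuery:single])
                [errorLog appendFormat:@"Error in row %@\n", values];
        }
    }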

for confirmation; furthermore, if a connection is available the user can choose 'Import' instead of loading the file into the Query Editor, for cases where a user invoked 'Open…' accidentally instead of 'Import…'

DEFAULT NULL, but don't allow definition of tables as that - fix the export of temporary placeholder tables for those views. Thanks to Andreas Falk for the report and details.

- Rework CSV export to stream data, significantly reducing memory consumption and so increasing speed and stability when exporting large tables. By default safe/fast streaming is used, but a checkbox is available to select "low memory mode" full streaming, allowing export of any size table in theory. This addresses Issue #224.
- Rework XML export to stream data in the same way, also significantly reducing memory usage and providing the option of using low memory mode.
- Make SQL, CSV and XML export progress bars update more smoothly
- When exporting the current browse view or custom query result, show an indeterminate progress bar when copying large resultsets to avoid the app appearing to hang

written to file [this fixes Issue 404]
• FieldEditorSheet: if the string cell content is "NULL", select the entire string for convenience

- Add defaults for fine-grained logging preferences
- Add a method to TableDocument to allow setting the query mode, and use the query mode to control logging
- Make import/export and custom query set the appropriate query modes

- SQL import now reads and processes files in full streaming mode, running queries as they are encountered
- Memory usage during import is significantly reduced, and should stay within a few megabytes; the significant memory use remaining is for query logging
- The progress bar more accurately represents progress and is shown at once (this addresses Issue #320)

comments for a single string.

download all results as fast as possible from the server, to avoid blocking, but do so in a background thread to allow results processing to start as soon as data is available. Many thanks to Hans-Jörg Bibiko for assistance with this.
- Add an option to the SQL export dialog to allow selection of the full-streaming method, with a warning that it may block table UPDATES/INSERTS.

I had to cheat and name the file %@.csv, so the suggested file type is csv, but you can change it to something else.

- Added an MCPStreamingResult class to MCPKit, to allow streaming results from the server, including fast array access of each row (see the usage sketch below)
- Tweak SQL export to use the streaming result class and to keep memory usage lower
End result is generally faster exports, more accurate progress bars, and much much lower (and consistent) memory usage.
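
A hedged usage sketch; the method names here reflect how such a streaming class is typically consumed and are assumptions about MCPKit's interface, not confirmed API:

    // Each call returns the next row as an NSArray, or nil when the
    // result set is exhausted; only one row is held in memory at a time.
    MCPStreamingResult *result = [connection streamingQueryString:
        @"SELECT * FROM `big_table`"];

    NSArray *row;
    while ((row = [result fetchNextRowAsArray])) {
        // write the row out immediately
    }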

- Data loading now only occurs in one place in the code. This improves consistency and fixes a number of actions which used to trigger a full table reload followed instantly by a filter when the action was performed
- If the "Reload data after..." prefs are unticked, no longer load the data (i.e. the preference now works)
- Make table count text more consistent and useful
- Fix a number of small position-saving type problems with filters and limits active. This fixes Issue #200.
- Clean up and standardise the code dealing with data storage - only one data storage array is now used.

- When selecting CSVs, SP will attempt to auto-detect the line endings to use (see the sketch after this list)
- Throw an error if the CSV to be imported appears to have more than 512 columns, usually due to wrong line ending/quote/etc selection
- Rewrite the CSV parser completely. The new version correctly deals with CSV line terminators which are escaped or in enclosed content, correctly deals with quote strings which are the same as escape strings, and fixes a number of small edge cases. Performance on very long complex strings is slightly slower (~1.5x slower) but on large but simple tables it is faster (~2.2x faster); memory usage is ~1.3x as high, but all autoreleased. This addresses Issue #252.
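
A simple sketch of one way to auto-detect line endings (the heuristic and the function are invented, not necessarily what SP does): check a sample of the file for each terminator, preferring "\r\n" over "\r" over "\n".

    #import <Foundation/Foundation.h>

    // Return the detected line ending for a sample of the CSV text.
    static NSString *SPDetectLineEnding(NSString *sample)
    {
        if ([sample rangeOfString:@"\r\n"].location != NSNotFound)
            return @"\r\n";   // Windows
        if ([sample rangeOfString:@"\r"].location != NSNotFound)
            return @"\r";     // classic Mac OS
        return @"\n";         // default: Unix
    }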