Synchronizing Data with gsqlcmd
gsqlcmd enables data synchronization across databases on different server platforms.
In this document, we'll explore several techniques for achieving this.
You can find the source code in the Examples\Sync Databases folder of the downloaded package. Additionally, you can test any batch file with a demo sample hosted in Microsoft Azure SQL Database.
Contents
- Basics
- Two-way synchronization using GUID identifiers
- Using SQL Server rowversion/timestamp columns to synchronize new and changed rows
- Using last update columns to synchronize new and changed rows
- Complete synchronization with deletes
- Complete synchronization across multiple editable databases with integer key columns
- Improving and unifying solutions
- Final Notes
Basics
Let's say we have two databases, db1 and db2, each containing a customers table: table11 in db1 and table12 in db2.
CREATE TABLE [s61].[table11] (
    [id] int IDENTITY(1,1) NOT NULL,
    [name] nvarchar(50) NOT NULL,
    CONSTRAINT [PK_table11] PRIMARY KEY ([id])
);

CREATE TABLE [s61].[table12] (
    [id] int IDENTITY(1,1) NOT NULL,
    [name] nvarchar(50) NOT NULL,
    CONSTRAINT [PK_table12] PRIMARY KEY ([id])
);
The table11 table contains the following records:
id | name |
---|---|
1 | Customer C1 |
2 | Customer C2 |
3 | Customer C3 |
4 | Customer C4 |
5 | Customer C5 |
To merge data from table11 to table12, use the following batch file:
gsqlcmd exec db1 s61.table11 table11.csv
gsqlcmd make db2 table11.csv table12.sql /table=s61.table12 /merge /insertIdentity
gsqlcmd exec db2 table12.sql
The first exec command exports data from table11 to a CSV file:
id;"name" 1;"Customer C1" 2;"Customer C2" 3;"Customer C3" 4;"Customer C4" 5;"Customer C5"
The second make command generates merge commands for table12:
UPDATE [s61].[table12] SET [name] = N'Customer C1' WHERE [id] = 1;
UPDATE [s61].[table12] SET [name] = N'Customer C2' WHERE [id] = 2;
UPDATE [s61].[table12] SET [name] = N'Customer C3' WHERE [id] = 3;
UPDATE [s61].[table12] SET [name] = N'Customer C4' WHERE [id] = 4;
UPDATE [s61].[table12] SET [name] = N'Customer C5' WHERE [id] = 5;
GO

SET IDENTITY_INSERT [s61].[table12] ON;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 1 AS [id], N'Customer C1' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 2 AS [id], N'Customer C2' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 3 AS [id], N'Customer C3' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 4 AS [id], N'Customer C4' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

INSERT INTO [s61].[table12] ([id], [name])
SELECT s.[id], s.[name]
FROM (SELECT 5 AS [id], N'Customer C5' AS [name]) s
    LEFT OUTER JOIN [s61].[table12] t ON t.[id] = s.[id]
WHERE t.[id] IS NULL;

SET IDENTITY_INSERT [s61].[table12] OFF;
GO

PRINT 'Processed 5 total records';
GO
These SQL commands update names in the target table and insert new rows. The make command uses the /merge option to generate these commands and the /insertIdentity option to preserve identity values. The third exec command executes the commands against db2.
Solving Issues with SET IDENTITY_INSERT
To run a script containing SET IDENTITY_INSERT on Microsoft SQL Server, the database connection user must have ALTER permission on the target table. Otherwise, you'll encounter an error: "Cannot find the object 's61.table12' because it does not exist or you do not have permissions."
For more details, refer to the documentation: SET IDENTITY_INSERT.
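For reference, the permission can be granted explicitly, for example to the login used by the synchronization connection. A sketch; sync_user is a hypothetical user name, not part of the sample:

GRANT ALTER ON OBJECT::[s61].[table12] TO sync_user;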
To resolve this issue, you can either use a different user for synchronization or create the target table without the IDENTITY option:
CREATE TABLE [s61].[table13] (
    [id] int NOT NULL,
    [name] nvarchar(50) NOT NULL,
    CONSTRAINT [PK_table13] PRIMARY KEY ([id])
);
This approach uses the same technique, but the generated merge SQL commands will not include SET IDENTITY_INSERT.
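For example, the batch from the Basics section could target s61.table13 instead. A sketch, assuming gsqlcmd uses the id primary key for the merge; the /insertIdentity option is no longer needed because table13 has no identity column:

gsqlcmd exec db1 s61.table11 table11.csv
gsqlcmd make db2 table11.csv table13.sql /table=s61.table13 /merge
gsqlcmd exec db2 table13.sql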
If changing the identity column is not an option, consider the following solution.
Using Different ID Values in Source and Target Tables
If you don't need to keep ID values consistent between the two tables, you can exclude the identity columns during synchronization. Here's an example:
gsqlcmd exec db1 "SELECT name FROM s61.table11" table11.csv
gsqlcmd make db2 table11.csv table14.sql /table=s61.table14 /keys=name /merge
gsqlcmd exec db2 table14.sql
The first exec command exports data without the ID column:
"name" "Customer C1" "Customer C2" "Customer C3" "Customer C4" "Customer C5"
Note that this command uses an inline select: SELECT name FROM s61.table11. This feature allows you to export specific columns and rename them as needed for the target table. You can also compute values for required columns in the target table.
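For example, an inline select can rename columns and compute additional values. A sketch with illustrative column aliases; the target table would need matching customer_name and name_upper columns, which are not part of the sample schema:

SELECT
    name AS customer_name,
    UPPER(name) AS name_upper
FROM s61.table11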
The second command produces SQL commands to insert missing names into table14:
INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C1' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C2' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C3' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C4' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;

INSERT INTO [s61].[table14] ([name])
SELECT s.[name]
FROM (SELECT N'Customer C5' AS [name]) s
    LEFT OUTER JOIN [s61].[table14] t ON t.[name] = s.[name]
WHERE t.[name] IS NULL;
GO

PRINT 'Processed 5 total records';
GO
The make command uses the /keys=name option to specify the columns for the JOIN conditions.
Using Import Mode
The one-way synchronization discussed above consists of three parts:
- Exporting data from the source table
- Creating SQL commands to update the target table
- Executing the SQL commands
You can create batch files that use CSV files for the first two steps. This feature is available in all paid editions.
The gsqlcmd Enterprise edition supports an import mode that combines these three commands into a single command.
For example, the following command merges data from table11 to table13:
gsqlcmd import db2 db1::s61.table11 /table=s61.table13 /merge
This command executes against the db2 connection while loading the source data from s61.table11 in the db1 database directly into memory, rather than into a CSV file as the following command does:
gsqlcmd exec db1 s61.table11 table11.csv
The import command with an inline select looks like this:
gsqlcmd import db2 "db1::SELECT name FROM s61.table11" /table=s61.table14 /keys=name /merge
You can easily switch from make to import mode in your commands. The upcoming examples will utilize import mode.
Two-way Synchronization using GUID Identifiers
In the previous examples, both tables used integer primary key columns, making one-way synchronization straightforward and reliable.
For two-way synchronization, using GUIDs as primary keys simplifies the process. Here’s how to declare the tables in Microsoft SQL Server:
CREATE TABLE [s61].[table21] (
    [id] uniqueidentifier NOT NULL DEFAULT(newid()),
    [name] nvarchar(50) NOT NULL,
    CONSTRAINT [PK_table21] PRIMARY KEY ([id])
);

CREATE TABLE [s61].[table22] (
    [id] uniqueidentifier NOT NULL DEFAULT(newid()),
    [name] nvarchar(50) NOT NULL,
    CONSTRAINT [PK_table22] PRIMARY KEY ([id])
);
The id column uses the uniqueidentifier data type with a default value of newid(), allowing SQL Server to generate unique IDs for new rows.
To merge data from table21 to table22 and vice versa, use the following commands:
gsqlcmd import db2 db1::s61.table21 /table=s61.table22 /merge
gsqlcmd import db1 db2::s61.table22 /table=s61.table21 /merge
This technique is simple and does not require additional tables or fields, but it is best suited for small to medium-sized tables since all data is loaded and updated each time.
Using SQL Server Rowversion/Timestamp Columns to Synchronize New and Changed Rows
Microsoft SQL Server provides the rowversion data type, also known as timestamp, which is typically used for version-stamping table rows. The server increments this value with every INSERT and UPDATE.
For more information, see the documentation: Rowversion.
Other database platforms can implement row versioning using triggers. The basic idea is:
- Import all rows once and save the maximum rowversion value.
- Synchronize data after the last saved rowversion value and update the maximum rowversion value again.
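For platforms without a native rowversion type, a sequence (or a counter table) maintained by triggers can serve the same purpose. Here is a rough sketch of the idea, written in T-SQL syntax for consistency with the other samples; the table and sequence names are illustrative only:

CREATE SEQUENCE s61.row_version_seq AS bigint START WITH 1;

CREATE TABLE s61.table_rv (
    id uniqueidentifier NOT NULL DEFAULT(newid()),
    name nvarchar(50) NOT NULL,
    -- New rows get a version value from the sequence
    rv bigint NOT NULL DEFAULT(NEXT VALUE FOR s61.row_version_seq),
    CONSTRAINT PK_table_rv PRIMARY KEY (id)
);
GO

CREATE TRIGGER s61.trigger_table_rv_after_update
ON s61.table_rv
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON

-- Stamp every updated row with the next sequence value
UPDATE s61.table_rv
SET rv = NEXT VALUE FOR s61.row_version_seq
FROM s61.table_rv t
    INNER JOIN inserted ON inserted.id = t.id
END;

The synchronization queries would then filter on the rv column exactly as the rowversion-based queries below filter on ts.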
Here’s a table declaration with a rowversion (timestamp) field:
CREATE TABLE [s61].[table31] (
    [id] uniqueidentifier NOT NULL DEFAULT(newid()),
    [name] nvarchar(50) NOT NULL,
    [ts] timestamp NOT NULL,
    CONSTRAINT [PK_table31] PRIMARY KEY ([id])
);

CREATE TABLE [s61].[table32] (
    [id] uniqueidentifier NOT NULL DEFAULT(newid()),
    [name] nvarchar(50) NOT NULL,
    [ts] timestamp NOT NULL,
    CONSTRAINT [PK_table32] PRIMARY KEY ([id])
);
You can also create a table to store synchronization timestamps:
CREATE TABLE [s61].[sync_timestamps] (
    [source] nvarchar(128) NOT NULL,
    [target] nvarchar(128) NOT NULL,
    [last_ts] binary(8) NOT NULL,
    [prev_ts] binary(8) NULL,
    CONSTRAINT [PK_sync_timestamps] PRIMARY KEY ([source], [target])
);
This table allows you to manage timestamps for all database tables and synchronization pairs.
Here’s a batch file to merge only new and updated rows from table31 to table32:
gsqlcmd exec db1 update-sync-table31-to-table32.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table31-to-table32.sql /table=s61.table32 /merge
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table31-to-table32.sql
update-sync-table31-to-table32.sql contains:
MERGE s61.sync_timestamps AS t
USING (
    SELECT 'table31' AS [source], 'table32' AS [target], COALESCE(MAX(ts), 0x) AS ts
    FROM s61.table31
) AS s ([source], [target], ts)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND last_ts = prev_ts THEN
    UPDATE SET last_ts = s.ts
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_ts) VALUES (s.[source], s.[target], s.ts);
The merge command inserts a row for the specified synchronization pair or updates the last_ts field. The sync_timestamps table might look like this:
source | target | last_ts | prev_ts |
---|---|---|---|
table31 | table32 | 0x00000000000084E9 | 0x00000000000084E8 |
The code updates the last_ts field only if last_ts and prev_ts are equal, preventing changes if the previous synchronization failed.
To commit the changes, the batch uses:
gsqlcmd exec db1 commit-sync-table31-to-table32.sql
commit-sync-table31-to-table32.sql contains:
UPDATE s61.sync_timestamps
SET prev_ts = last_ts
WHERE [source] = 'table31' AND [target] = 'table32'
The select-table31-to-table32.sql script, which selects only new and changed rows, looks like this:
SELECT id, name
FROM s61.table31
WHERE ts > COALESCE((
    SELECT prev_ts
    FROM s61.sync_timestamps
    WHERE [source] = 'table31' AND [target] = 'table32'
    ), 0x)
This query selects rows with timestamp values greater than the prev_ts field committed in the last synchronization.
This section demonstrated how to synchronize only new and changed rows using row version columns. The main advantage is minimal and safe changes to the underlying tables. In SQL Server, you only need to add a single rowversion column to the tables.
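For example, adding the column to an existing table is a single statement (a sketch; my_table is a placeholder for your own table, and a table can have only one rowversion column):

ALTER TABLE s61.my_table ADD ts rowversion;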
Using Last Update Columns to Synchronize New and Changed Rows
Another approach to selecting new and updated rows is to use columns that track creation or last update times. The challenge is to create these fields and triggers for all tables to be synchronized.
Here’s an example of a table and its triggers for Microsoft SQL Server:
CREATE TABLE [s61].[table41] (
    [id] uniqueidentifier NOT NULL DEFAULT(newid()),
    [name] nvarchar(50) NOT NULL,
    [last_update] datetime NOT NULL DEFAULT(getutcdate()),
    CONSTRAINT [PK_table41] PRIMARY KEY ([id])
);
GO

CREATE TRIGGER [s61].[trigger_table41_after_insert]
ON [s61].[table41]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON

UPDATE s61.table41
SET last_update = GETUTCDATE()
FROM s61.table41 t
    INNER JOIN inserted ON inserted.id = t.id
END;
GO

CREATE TRIGGER [s61].[trigger_table41_after_update]
ON [s61].[table41]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON

UPDATE s61.table41
SET last_update = GETUTCDATE()
FROM s61.table41 t
    INNER JOIN deleted ON deleted.id = t.id
END;
To track the last synchronization time for each pair, you can use a table like this:
CREATE TABLE [s61].[sync_last_updates] (
    [source] nvarchar(128) NOT NULL,
    [target] nvarchar(128) NOT NULL,
    [last_update] datetime NOT NULL,
    [prev_update] datetime NULL,
    CONSTRAINT [PK_sync_last_updates] PRIMARY KEY ([source], [target])
);
Here’s a batch file to synchronize data from table41 in db1 to table42 in db2:
gsqlcmd exec db1 update-sync-table41-to-table42.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table41-to-table42.sql /table=s61.table42 /merge
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table41-to-table42.sql
The first command saves the last_update value to be committed in the last batch command if there are no errors.
update-sync-table41-to-table42.sql contains:
MERGE s61.sync_last_updates AS t
USING (
    SELECT 'table41' AS [source], 'table42' AS [target], COALESCE(MAX(last_update), 0) AS last_update
    FROM s61.table41
) AS s ([source], [target], last_update)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND t.last_update = t.prev_update THEN
    UPDATE SET last_update = s.last_update
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_update) VALUES (s.[source], s.[target], s.last_update);
The commit-sync-table41-to-table42.sql used in the last batch command is straightforward:
UPDATE s61.sync_last_updates
SET prev_update = last_update
WHERE [source] = 'table41' AND [target] = 'table42'
The select-table41-to-table42.sql script selects rows added or updated after the last synchronization time saved in the prev_update field:
SELECT id, name
FROM s61.table41
WHERE last_update > COALESCE((
    SELECT prev_update
    FROM s61.sync_last_updates
    WHERE [source] = 'table41' AND [target] = 'table42'
    ), 0)
Complete Synchronization with Deletes
The previous sections discussed adding and updating data from the source to target tables only. This approach works if you can hide rows instead of deleting them.
To replicate row deletions, you need to track deleted identifiers. One simple method is to create a single table to store deleted identifiers and add triggers that save these identifiers during delete operations.
Here’s a sample table for deleted identifiers:
CREATE TABLE [s61].[deleted_guids] (
    [id] uniqueidentifier NOT NULL,
    [source] nvarchar(128) NOT NULL,
    [last_update] datetime NOT NULL,
    CONSTRAINT [PK_deleted_guids] PRIMARY KEY ([id])
);
Since GUID identifiers are globally unique, this table uses them as a primary key. The source column indicates which table the deleted row came from, while the last_update column records the deletion time for synchronization.
Here’s a sample trigger:
CREATE TRIGGER [s61].[trigger_table51_after_delete]
ON [s61].[table51]
AFTER DELETE
AS
BEGIN
SET NOCOUNT ON

INSERT INTO deleted_guids (id, [source], last_update)
SELECT deleted.id, 'table51' AS [source], GETUTCDATE() AS last_update
FROM deleted
END;
Here’s a modified batch file to replicate deletions in the target table:
gsqlcmd exec db1 update-sync-table51-to-table52.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table51-to-table52.sql /table=s61.table52 /merge
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table51-to-table52-deleted.sql /table=s61.table52 /delete
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table51-to-table52.sql
This batch is similar to the previous ones but includes additional actions for deletions.
The update-sync-table51-to-table52.sql must save the last_update time for both tables, including deleted_guids:
MERGE s61.sync_last_updates AS t
USING (
    SELECT 'table51' AS [source], 'table52' AS [target], COALESCE(MAX(last_update), 0) AS last_update
    FROM s61.table51
) AS s ([source], [target], last_update)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND t.last_update = t.prev_update THEN
    UPDATE SET last_update = s.last_update
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_update) VALUES (s.[source], s.[target], s.last_update);

MERGE s61.sync_last_updates AS t
USING (
    SELECT 'deleted_guids' AS [source], 'table52' AS [target], COALESCE(MAX(last_update), 0) AS last_update
    FROM s61.deleted_guids
) AS s ([source], [target], last_update)
ON (t.[source] = s.[source] AND t.[target] = s.[target])
WHEN MATCHED AND t.last_update = t.prev_update THEN
    UPDATE SET last_update = s.last_update
WHEN NOT MATCHED THEN
    INSERT ([source], [target], last_update) VALUES (s.[source], s.[target], s.last_update);
The final command must also commit the last time for both tables:
UPDATE s61.sync_last_updates
SET prev_update = last_update
WHERE [source] = 'table51' AND [target] = 'table52';

UPDATE s61.sync_last_updates
SET prev_update = last_update
WHERE [source] = 'deleted_guids' AND [target] = 'table52';
The select-table51-to-table52-deleted.sql script selects the identifiers of rows deleted from table51 since the last synchronization:
SELECT id
FROM s61.deleted_guids
WHERE last_update > COALESCE((
    SELECT prev_update
    FROM s61.sync_last_updates
    WHERE [source] = 'deleted_guids' AND [target] = 'table52'
    ), 0)
    AND [source] = 'table51'
This command uses the /delete option instead of /merge:
gsqlcmd import db2 db1::select-table51-to-table52-deleted.sql /table=s61.table52 /delete
Complete Synchronization Across Multiple Editable Databases with Integer Key Columns
The most complex scenario involves synchronizing data across multiple editable databases with tables that use integer key columns. For instance, if you have a disconnected desktop application for salespersons that needs to sync with a central database, you cannot guarantee unique identifiers across databases.
A potential solution is to add columns to the source table to track its identifiers. For example:
CREATE TABLE [s61].[table71] (
    [id] int IDENTITY(1,1) NOT NULL,
    [name] nvarchar(50) NOT NULL,
    [source_table] nvarchar(128) NULL,
    [source_row_id] int NULL,
    [last_update] datetime NULL,
    CONSTRAINT [PK_table71] PRIMARY KEY ([id])
);
The source_table column holds the name of the table where the row was originally added, while source_row_id contains the row ID. The source_table column can also include additional data, such as the database name or the salesperson's application computer name.
The insert trigger must automatically update these fields. Here’s a sample:
CREATE TRIGGER [s61].[trigger_table71_after_insert]
ON [s61].[table71]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON

IF USER_NAME() IN ('sample61_user2') RETURN

UPDATE s61.table71
SET source_table = 'table71', source_row_id = t.id, last_update = GETUTCDATE()
FROM s61.table71 t
    INNER JOIN inserted ON inserted.id = t.id
END;
This trigger updates the source_row_id column with the initial ID generated by the database. To prevent changes to the source_table, source_row_id, and last_update values during synchronization, the trigger checks the username:
IF USER_NAME() IN ('sample61_user2') RETURN
The sample61_user2 username is used during synchronization operations, allowing data to be imported as-is. For all other users, the trigger overwrites these fields regardless of any values supplied in the INSERT statement.
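As an illustration, the synchronization user might be provisioned like this; the password and the granted permissions are placeholders, so adapt them to your own security policy:

-- Run at the server level
CREATE LOGIN sample61_user2 WITH PASSWORD = 'Use-a-strong-password-1';
GO

-- Run in the database used for synchronization
CREATE USER sample61_user2 FOR LOGIN sample61_user2;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::s61 TO sample61_user2;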
The update trigger must also be modified to retain the initial sync values of the row:
CREATE TRIGGER [s61].[trigger_table71_after_update]
ON s61.table71
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON

IF @@NESTLEVEL > 1 RETURN;

IF USER_NAME() IN ('sample61_user2') RETURN

UPDATE s61.table71
SET source_table = deleted.source_table, source_row_id = deleted.source_row_id, last_update = GETUTCDATE()
FROM s61.table71 t
    INNER JOIN deleted ON deleted.id = t.id
END;
This trigger also skips its logic during synchronization operations executed under the sample61_user2 username, so the import can write the last_update values of imported rows directly.
Deleting rows presents a challenge as you need to delete records in other databases using source_table and source_row_id values. Here’s a sample table to track identifiers of deleted rows:
CREATE TABLE [s61].[deleted_ints] (
    [id] int NOT NULL,
    [source] nvarchar(128) NOT NULL,
    [source_table] nvarchar(128) NOT NULL,
    [source_row_id] int NOT NULL,
    [last_update] datetime NOT NULL,
    CONSTRAINT [PK_deleted_ints] PRIMARY KEY ([id], [source])
);
This table uses id and source as a composite primary key to store values for any table in a database. It also contains source_table and source_row_id columns for synchronization.
The delete trigger for this table looks like this:
CREATE TRIGGER [s61].[trigger_table71_after_delete]
ON s61.table71
AFTER DELETE
AS
BEGIN
SET NOCOUNT ON

INSERT INTO deleted_ints (id, [source], source_table, source_row_id, last_update)
SELECT deleted.id, 'table71' AS [source], source_table, source_row_id, GETUTCDATE() AS last_update
FROM deleted
END;
The synchronization batch file looks like this:
gsqlcmd exec db1 update-sync-table71-to-table72.sql
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table71-to-table72.sql /table=s61.table72 /merge /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd import db2 db1::select-table71-to-table72-deleted.sql /table=s61.table72 /delete /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd exec db1 commit-sync-table71-to-table72.sql
The commands are similar to those discussed earlier, but the import commands use the /keys=source_table,source_row_id option for the JOIN and WHERE conditions. Ensure you create indexes for these columns to enhance performance.
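For example, an index covering the key columns on the target table, plus one on the deleted-rows table for the delta query, might look like this; the index names are illustrative:

CREATE INDEX IX_table72_source_table_source_row_id
    ON s61.table72 (source_table, source_row_id);

CREATE INDEX IX_deleted_ints_last_update
    ON s61.deleted_ints (last_update);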
The update-sync-table71-to-table72.sql and commit-sync-table71-to-table72.sql files are identical to those in the previous section, except for the table names. The select-table71-to-table72.sql is similar, but the SELECT clause is modified:
SELECT name, source_table, source_row_id, last_update
This query excludes the local ID column but includes the global source_table and source_row_id. The select-table71-to-table72-deleted.sql contains updated SELECT and FROM clauses:
SELECT source_table, source_row_id
FROM s61.deleted_ints
WHERE last_update > COALESCE((
    SELECT prev_update
    FROM s61.sync_last_updates
    WHERE [source] = 'deleted_ints' AND [target] = 'table72'
    ), 0)
    AND [source] = 'table71'
Improving and Unifying Solutions
The solutions presented above use batch files and SQL script files with hard-coded table names. This approach was intentional to illustrate the complexity step by step and highlight the differences in techniques.
You can consolidate this code into stored procedures and unify the synchronization batch file. For example, the batch file could look like this:
@echo off

set schema=s61
set source=%1
set target=%2

gsqlcmd exec db1 "exec %schema%.usp_update_last_updates @source='%source%', @target='%target%'"
if errorlevel 1 goto :EOF

gsqlcmd import db2 "db1::exec %schema%.usp_select_upsert_rows @source='%source%', @target='%target%'" /table=%schema%.%source% /merge /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd import db2 "db1::exec %schema%.usp_select_delete_rows @source='%source%', @target='%target%'" /table=%schema%.%source% /delete /keys=source_table,source_row_id
if errorlevel 1 goto :EOF

gsqlcmd exec db1 "exec %schema%.usp_commit_last_updates @source='%source%', @target='%target%'"
All procedures accept the source and target tables as parameters, allowing you to call a single unified batch file with parameters to synchronize tables. The procedures generate SQL code to select the actual column names of the source tables.
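As an illustration of this layout, here is a minimal sketch of what usp_commit_last_updates might contain, based on the commit script shown earlier; the procedures shipped in the Examples\Sync Databases folder are the reference implementation:

CREATE PROCEDURE s61.usp_commit_last_updates
    @source nvarchar(128),
    @target nvarchar(128)
AS
BEGIN
SET NOCOUNT ON;

-- Commit the last synchronization times for the source table and for deleted_ints
UPDATE s61.sync_last_updates
SET prev_update = last_update
WHERE [source] IN (@source, 'deleted_ints') AND [target] = @target;
END;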
You can find the source code in the Examples\Sync Databases folder of the gsqlcmd download package.
Final Notes
Any synchronization process must include at least three steps:
- Retrieving data from the source
- Generating SQL commands to modify the target
- Executing the SQL commands
gsqlcmd allows you to retrieve data from databases, files, and web resources. It also enables the creation and execution of commands for various database servers, including SQL Server, Oracle, MySQL, PostgreSQL, and SQLite.
Thus, you can synchronize data across any supported databases. You can find the source code in the Examples\Sync Databases folder of the downloaded package. Additionally, you can test any batch file with a demo sample hosted online.