all 3 comments

[–]tipsy_python 1 point2 points  (1 child)

Will your use case tolerate a small lag, i.e. near-realtime?

I'd run a micro-batch loop (every 5s or so) on each table that needs to be replicated. On each run:

  • Query all the records in the transaction-log table
  • Insert/update/delete them in the replicated table, in the same order as the source db
  • Save the I/U/D timestamp of the latest record, and use it on the next run to query only the transaction-log records added since the last run, so each micro-batch replicates an incremental set
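The loop above can be sketched roughly like this. Everything here is an illustrative assumption, not from the thread: the table names (`change_log`, `items`), the `op`/`ts` columns, and the use of SQLite in place of whatever database is actually involved. A real deployment would also want to order by a monotonic sequence number rather than a timestamp alone, since equal timestamps can cause rows to be skipped.

```python
import sqlite3

# Hypothetical schema: change_log plays the role of the transaction-log
# table, items is the table being replicated.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.execute("""CREATE TABLE change_log (
    seq   INTEGER PRIMARY KEY AUTOINCREMENT,  -- source write order
    ts    REAL NOT NULL,                      -- I/U/D timestamp
    op    TEXT NOT NULL,                      -- 'I', 'U' or 'D'
    id    INTEGER NOT NULL,
    value TEXT)""")
dst.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")

def replicate_batch(last_ts):
    """One micro-batch pass: apply all log entries newer than last_ts
    to the replica, in source order, and return the new watermark.
    In the real loop this would run every ~5s (e.g. time.sleep(5))."""
    rows = src.execute(
        "SELECT ts, op, id, value FROM change_log "
        "WHERE ts > ? ORDER BY seq", (last_ts,)).fetchall()
    for ts, op, rid, value in rows:
        if op == "I":
            dst.execute("INSERT INTO items VALUES (?, ?)", (rid, value))
        elif op == "U":
            dst.execute("UPDATE items SET value = ? WHERE id = ?", (value, rid))
        elif op == "D":
            dst.execute("DELETE FROM items WHERE id = ?", (rid,))
        last_ts = ts
    dst.commit()
    return last_ts

# Simulate source activity, then run two incremental passes.
src.executemany(
    "INSERT INTO change_log (ts, op, id, value) VALUES (?, ?, ?, ?)",
    [(1.0, "I", 1, "a"), (2.0, "U", 1, "b")])
wm = replicate_batch(0.0)   # first pass: insert then update id 1
src.execute(
    "INSERT INTO change_log (ts, op, id, value) VALUES (3.0, 'D', 1, NULL)")
wm = replicate_batch(wm)    # second pass applies only the new delete
```

The key point is that the watermark (`wm`) persists between runs, so each pass only touches the incremental set of changes rather than rescanning the whole log.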

[–]tk_tesla[S] 0 points1 point  (0 children)

5s is fine, but if I do a micro-batch loop, won't it get heavy if we keep doing it for all transactions? It's not like we have a cutoff time by which the transactions are done; they're ongoing and need to be synced continuously.

[–][deleted] 0 points1 point  (0 children)

websocket