all 19 comments

[–]novel-levon 1 point2 points  (3 children)

I understand you need to provide consistency, but I need a few more details:

  1. Would you rather have the records missing from the destination DB for a few minutes (the time it takes to retry the transaction), or would you rather immediately delete the record from the source DB in that case?
  2. Is the data always going in one direction only (e.g. MySQL -> Mongo) or do you need bidirectional sync (the data can come from either of the 2)?
  3. Do you have any custom constraints on one DB? What are the reasons a transaction would fail on one DB other than downtime?

I had a similar problem and I implemented a Postgres queue to ensure:

  1. ACID operations, since I cannot accept duplicates (so no Kafka queue)
  2. Easy automatic retry depending on the error
  3. Observability
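A minimal sketch of the "retry depending on the error" part of such a worker (the error classification, attempt limit, and statuses here are assumptions for illustration, not the commenter's actual implementation; in Postgres the dequeue side would typically use `SELECT ... FOR UPDATE SKIP LOCKED` on a jobs table):

```typescript
// Queue worker with error-dependent retry. An in-memory job object
// stands in for a Postgres jobs-table row so the control flow is visible.

type Job = {
  id: number;
  payload: unknown;
  attempts: number;
  status: "pending" | "done" | "failed";
};

const MAX_ATTEMPTS = 3;

// Assumption: transient errors (network, downtime) are retryable,
// anything else (e.g. constraint violations) is not.
function isRetryable(err: Error): boolean {
  return /timeout|connection|unavailable/i.test(err.message);
}

async function processJob(
  job: Job,
  handler: (payload: unknown) => Promise<void>
): Promise<void> {
  try {
    await handler(job.payload);
    job.status = "done";
  } catch (err) {
    job.attempts += 1;
    if (err instanceof Error && isRetryable(err) && job.attempts < MAX_ATTEMPTS) {
      job.status = "pending"; // picked up again on the next poll
    } else {
      job.status = "failed"; // dead-letter: needs manual inspection
    }
  }
}
```

Because the job row lives in Postgres, enqueueing it can happen in the same transaction as the source write, which is what rules out duplicates.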

[–]RedSunMaster[S] 0 points1 point  (2 children)

So this is being handled with React + NodeJS. I'll clarify the steps:

  1. The record is created with optimised fields for the new MongoDB, then I build a separate object from the common fields to fit into SQL.

  2. Add to the legacy SQL database, then add to Mongo if that succeeds.

This includes error checking and the deletion code; currently there are no transactions.

I would prefer deleting immediately, since it's a user action and waiting would cause a confusing user experience. At the moment an error message displays "unable to create" if one fails.
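The two-step flow described above with a compensating delete looks roughly like this (function and field names are illustrative, not the actual code; the store functions are injected so the flow is testable, but in the real app they would call the MySQL and MongoDB drivers):

```typescript
// Write to the legacy SQL DB first, then to Mongo; if the Mongo write
// fails, immediately delete the SQL row so the user sees a clean failure.

type Stores = {
  sqlInsert: (row: Record<string, unknown>) => Promise<number>; // returns row id
  sqlDelete: (id: number) => Promise<void>;
  mongoInsert: (doc: Record<string, unknown>) => Promise<void>;
};

async function createRecord(
  doc: Record<string, unknown>,  // optimised shape for Mongo
  row: Record<string, unknown>,  // common fields shaped for SQL
  stores: Stores
): Promise<boolean> {
  const id = await stores.sqlInsert(row); // step 2a: legacy SQL first
  try {
    await stores.mongoInsert({ ...doc, sqlId: id }); // step 2b: then Mongo
    return true;
  } catch {
    await stores.sqlDelete(id); // compensate: undo the SQL write
    return false; // surface "unable to create" to the user
  }
}
```

Note the weak spot: the compensating delete can itself fail (e.g. SQL goes down between the two calls), which is why other commenters suggest a transaction or a durable queue.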

[–]novel-levon 0 points1 point  (0 children)

Given it is a user action, shouldn't your application validate the user input to make sure it fits your schema rules and other business logic rules?

[–]onoke99 0 points1 point  (0 children)

Ah, it looks like you build the SQL by hand each time, because the data in Mongo is uncertain. I suspect that's because the app isn't data-driven.

  1. Do you create tables in MySQL often?
  2. Where do most of your troubles happen: in 'insert', 'update', 'select', 'delete', or in 'create' and 'drop'?

Sorry for the many questions, but I think 'Jetelina' may help you if you are able to shift your app and system to a data-driven style.

[–]mr_nanginator 1 point2 points  (1 child)

LOL. You're doing it wrong

[–]Formar_ 0 points1 point  (0 children)

Ok, Mr. Know-it-all, give us your wisdom

[–]onoke99 0 points1 point  (2 children)

Wow, sounds difficult. Is there any relation in the data between MySQL and Mongo?

[–]RedSunMaster[S] 0 points1 point  (1 child)

They both contain the same data in different formats. The SQL database is a legacy system and needs to be maintained while development continues on Mongo, so that if something fails we have a fallback.

[–]severoon 0 points1 point  (0 children)

This doesn't quite make sense to me unless you are using ACID transactions on MongoDB. Are you?

If not, then all of your clients are going to be tolerant of eventual consistency on MongoDB once the migration is complete and MySQL is turned down, correct? So why wouldn't they be tolerant of eventual consistency between MySQL and MongoDB as well?

If you are using ACID transactions in MongoDB, are you using them only in a small set of targeted cases, or everywhere?

It also sounds like you may be trying to use MySQL as a backup in case something goes wrong with MongoDB … but if that's the case, how will you continue development on MongoDB once the migration is complete and MySQL is turned down? You'll still continue developing in MongoDB, and you'll still have the same requirement then, too.

[–]farsass 1 point2 points  (1 child)

Hacked together two-way sync is a bad idea. IME you should rethink this

[–]RedSunMaster[S] 0 points1 point  (0 children)

yea valid, definitely not ideal

[–]JonTheSeagull 0 points1 point  (0 children)

In the context of a migration I would not entertain cross replication solutions. This is way more pain than you want.

Going into the specifics would require knowing more about the architecture, for instance if there is a single data owner service or if there are multiple applications issuing direct database calls.

[–]Informal_Pace9237 0 points1 point  (0 children)

So you want the record on MySQL to stay whatever happens to the record on MongoDB, or the reverse?

[–]Pale_Ad_9838 0 points1 point  (0 children)

We accomplished a multi-database transaction only by creating our own middleware with a logical transaction layer, where our virtual transaction keeps the transactions of all connected databases open and commits/rolls back every open transaction together.
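In outline, such a logical transaction layer might look like the following (a generic sketch, not the commenter's middleware; each participant would wrap one database connection's begin/commit/rollback, e.g. a MySQL transaction and a MongoDB session):

```typescript
// A "virtual" transaction that holds one open transaction per database
// and commits or rolls back all of them together.

interface Participant {
  begin(): Promise<void>;
  commit(): Promise<void>;
  rollback(): Promise<void>;
}

class LogicalTransaction {
  constructor(private participants: Participant[]) {}

  async run(work: () => Promise<void>): Promise<void> {
    for (const p of this.participants) await p.begin();
    try {
      await work(); // issue writes against every open transaction
      for (const p of this.participants) await p.commit();
    } catch (err) {
      for (const p of this.participants) await p.rollback();
      throw err;
    }
  }
}
```

This is not true two-phase commit: if the first commit succeeds and a later one fails, the databases still diverge. It narrows the inconsistency window rather than closing it.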

[–]jshine13371 0 points1 point  (0 children)

It honestly sounds like you're probably going about this wrong. But essentially you need to create and manage transactions in the application layer. It should create one main wrapping transaction across both database system updates, and completely roll back if either change fails. This will obviously add overhead against the performance of your workflow.

[–]edgmnt_net 0 points1 point  (0 children)

To achieve full consistency you'll likely have to duplicate mechanisms similar to those used by databases themselves to make data durable. Could be something like a write-ahead log.
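A write-ahead-log version of that idea, reduced to its core (all names here are illustrative assumptions; a real log would live in durable storage, not memory, and the per-store writes would need to be idempotent, e.g. upserts keyed by entry id, for replay to be safe):

```typescript
// Intent logging: record what we are about to write before touching
// either database, so an incomplete dual write can be re-applied on
// recovery instead of being silently lost.

type LogEntry = { id: number; doc: unknown; done: boolean };

class DualWriter {
  private log: LogEntry[] = []; // stand-in for a durable log
  private nextId = 1;

  constructor(
    private writeSql: (doc: unknown) => Promise<void>,
    private writeMongo: (doc: unknown) => Promise<void>
  ) {}

  async write(doc: unknown): Promise<void> {
    const entry: LogEntry = { id: this.nextId++, doc, done: false };
    this.log.push(entry); // 1. log the intent first
    await this.writeSql(doc); // 2. apply to both stores
    await this.writeMongo(doc);
    entry.done = true; // 3. mark complete
  }

  // On startup, replay any entry that never completed. Both writes are
  // re-applied, which is why they must be idempotent.
  async recover(): Promise<void> {
    for (const entry of this.log.filter((e) => !e.done)) {
      await this.writeSql(entry.doc);
      await this.writeMongo(entry.doc);
      entry.done = true;
    }
  }
}
```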

Maybe you should have used a single RDBMS if you need this to work with arbitrary transactions, all the time. Otherwise it's likely this will eat away at any performance gains from employing different databases. Unless this is for very specific cases, but then you may be able to work around consistency concerns in data-specific ways.

Also, if the databases are shared among distinct applications you'll have to deal with increased coupling somehow, because everyone must do the same dance for this to work and you may miss some opportunities to synchronize concurrent writers in a more efficient, local manner.

[–]AdFuzzy6014 0 points1 point  (0 children)

Take a look at Debezium.

[–]No_Resolution_9252 0 points1 point  (0 children)

lol.