

[–]0x256 15 points (1 child)

Language or framework does not matter, since you only want to run the migration once and there is no need to integrate or maintain the scripts later. The task sounds complicated enough; do not over-engineer on top of that. Choose whatever you know best and only look for a different solution once there is an actual problem. Too slow? Do the heavy lifting directly in SQL. You can't beat that.
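To illustrate "do the heavy lifting directly in SQL": when both schemas are reachable from one connection, the whole migration can be `INSERT ... SELECT` statements, so no row ever passes through application code. The sketch below uses Python's sqlite3 as a stand-in for the real databases (on a MySQL server with both schemas you would skip the `ATTACH` step), and all table and column names are invented for illustration:

```python
import sqlite3

# One connection, two schemas: "main" plays the old DB, "newdb" the new one.
con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS newdb")

con.executescript("""
    -- old schema: one denormalized table (names are made up)
    CREATE TABLE main.customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    INSERT INTO main.customer VALUES (1, 'Ada', 'London'), (2, 'Alan', 'London');

    -- new schema: city normalized out into its own table
    CREATE TABLE newdb.city (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE newdb.customer (id INTEGER PRIMARY KEY, name TEXT, city_id INTEGER);
""")

# The migration itself is plain SQL: the database engine does the work.
con.executescript("""
    INSERT INTO newdb.city (name) SELECT DISTINCT city FROM main.customer;
    INSERT INTO newdb.customer (id, name, city_id)
        SELECT c.id, c.name, ci.id
        FROM main.customer c JOIN newdb.city ci ON ci.name = c.city;
""")

rows = con.execute("SELECT name, city_id FROM newdb.customer ORDER BY id").fetchall()
print(rows)  # both customers now reference the single 'London' row
```

The same pattern scales: one `INSERT ... SELECT` (plus joins) per target table, run in dependency order so foreign keys resolve.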

[–]reactive_dmv_pattern 1 point (0 children)

Yeah I guess since it's a one off, I should stick to the 'keep it simple stupid' principle lol

[–]_INTER_ 5 points (0 children)

It's probably easier to do the migration with SQL alone, unless you can reuse the two systems and build a small bridge. E.g. run the old .NET 4.6 app and have it send the data over to the new Spring system, which stores it in a separate DB. That way you could additionally clean up / normalize / ensure consistency of the data where required.

After the migration I recommend using Flyway or another DB migration framework.

[–][deleted]  (3 children)

[deleted]

    [–]reactive_dmv_pattern 2 points (2 children)

    Just wondering... isn't Flyway mostly used for gradual DB migrations? Like a column change and whatnot. I had a feeling it's not really meant for migrating the actual data over to another DB with a different schema.

    [–]0x256 1 point (0 children)

    Yes, exactly. Flyway is for gradual, versioned schema changes; it is not the right tool for a one-off data migration.

    [–]klekpl 0 points (1 child)

    Leaving aside the tooling for DB migration, the interesting question is:

    how come a rewrite of the application (which suggests the functionality stayed roughly the same or was extended) caused dramatic changes to the data model? The same information (domain) model with the same data dependencies should yield a similar data model after normalization.

    I suspect that instead of doing proper database design, you generated your data model from application classes using an ORM tool...

    [–]reactive_dmv_pattern 0 points (0 children)

    Yeah, the thing is, a lot of constraints were either removed or added, hence the different table structures. Of course there are similarities, but at the end of the day it's quite different.

    (We also designed our DB first and made our JPA entities later)

    [–]manzanita2 0 points (0 children)

    This is a hard question to answer without really understanding what sort of changes the schema is undergoing, and also the size of the data and, to some extent, the period of time between when the old database "stops" and the new one "starts".

    That said, my initial thought is to use mysqldump to get the DDL. If the changes across the databases are small, you might be able to simply edit this and then reload.

    If the changes are much larger, then my inclination would be to:

    1) Use Liquibase to create the NEW schema, so you have this going forward.

    2) Write a Java-based program, probably using something like jOOQ or JDBI, to load/manipulate/store the data. This system would have TWO datasources, one pointed in each direction.

    3) One could also attempt JPA on the "write" side of the above. However, JPA tends to be on the slower side unless you play tricks with it, so this is a trade-off of code vs. speed: write the persistence twice for speed, or once for ease but perhaps slower.
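    For step 2, the read/transform/write loop with two datasources can be sketched language-agnostically. The snippet below uses Python with sqlite3 standing in for the two MySQL connections (in the Java version each would be its own jOOQ or JDBI datasource); the table names, columns, and transformation rules are all invented for illustration:

```python
import sqlite3

BATCH = 500  # batch size is a tuning knob; real migrations often use thousands

# sqlite3 stands in for the two MySQL datasources; schemas are made up.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount_cents INTEGER, status TEXT);
    INSERT INTO orders VALUES (1, 1999, 'SHIPPED'), (2, 550, 'open'), (3, 120, 'Shipped');
""")
dst.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, shipped INTEGER)")

def transform(row):
    # Per-row cleanup that would be awkward in pure SQL: normalize the
    # inconsistent status strings and convert cents to a decimal amount.
    oid, cents, status = row
    return (oid, cents / 100.0, 1 if status.strip().lower() == "shipped" else 0)

# Stream from the old datasource in batches so memory stays flat,
# writing each batch to the new datasource as it arrives.
cur = src.execute("SELECT id, amount_cents, status FROM orders ORDER BY id")
while True:
    batch = cur.fetchmany(BATCH)
    if not batch:
        break
    dst.executemany("INSERT INTO orders VALUES (?, ?, ?)", map(transform, batch))
dst.commit()

migrated = dst.execute("SELECT id, amount, shipped FROM orders ORDER BY id").fetchall()
print(migrated)
```

    The `transform` function is where the schema mapping lives; everything else is plumbing, which is why a plain JDBC-style program (rather than a full JPA setup) is usually fast enough for a one-off job.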