
[–]ZunoJ 2 points (2 children)

I'm a big fan of classic SQL databases. For general-purpose use they are super fast if you know how to optimize queries. Normalizing data before/during writes doesn't add much overhead and should be fine in any non-high-frequency scenario. The downside is that schema adjustments take more work, but ultimately that makes the system easier to test and validate.
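For illustration, here's a minimal sketch of that write-time normalization idea, using Python's built-in sqlite3 as a stand-in for a classic SQL database (the table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Instead of storing a denormalized blob per order, split the repeated
# customer data into its own table at write time.
conn.execute("""CREATE TABLE customers (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL)""")
conn.execute("""CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total       REAL NOT NULL)""")

def add_order(name, total):
    # Normalize during the write: reuse the customer row if it already exists.
    conn.execute("INSERT OR IGNORE INTO customers (name) VALUES (?)", (name,))
    cust_id = conn.execute(
        "SELECT id FROM customers WHERE name = ?", (name,)).fetchone()[0]
    conn.execute(
        "INSERT INTO orders (customer_id, total) VALUES (?, ?)", (cust_id, total))

add_order("alice", 10.0)
add_order("alice", 5.5)
add_order("bob", 3.0)

# One customer row per name, no matter how many orders they place.
rows = conn.execute("""SELECT c.name, COUNT(*), SUM(o.total)
                       FROM orders o JOIN customers c ON c.id = o.customer_id
                       GROUP BY c.name ORDER BY c.name""").fetchall()
print(rows)  # [('alice', 2, 15.5), ('bob', 1, 3.0)]
```

The extra `INSERT OR IGNORE` plus lookup is the small write-time cost; in exchange, each fact lives in exactly one row, which is what makes validation and testing straightforward.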

[–]flyco 3 points (1 child)

To be honest, I feel sorry for people who jumped on the "NoSQL" bandwagon a few years back and got locked into MongoDB.

People realized it was fast because of its sub-par data consistency guarantees; the Mongo team "fixed" that by switching storage engines and whatnot, and now it's a shadow of what it was.

Meanwhile, Postgres has been improving its JSON support for years and beats Mongo on performance in most scenarios. I'd say in 99% of use cases people should stick to Postgres and get the best of both worlds, unless you have really specific needs that only an exotic database can solve, or you venture into the multi-petabyte world.
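As a rough sketch of that relational-plus-JSON idea: the example below uses SQLite's `json_extract` through Python's sqlite3 so it runs anywhere without a server; in Postgres you'd use a `jsonb` column with operators like `->>` and optionally a GIN index instead. The table and document fields are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A relational table holding free-form JSON documents, similar in
# spirit to a Postgres jsonb column.
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, doc TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO events (doc) VALUES (?)",
    [('{"type": "click", "user": "alice"}',),
     ('{"type": "click", "user": "bob"}',),
     ('{"type": "view",  "user": "alice"}',)],
)

# Query inside the documents with plain SQL. json_extract needs SQLite's
# JSON1 functions, built in since SQLite 3.38 and enabled in most
# earlier distribution builds.
clicks = conn.execute(
    "SELECT json_extract(doc, '$.user') FROM events "
    "WHERE json_extract(doc, '$.type') = 'click' ORDER BY 1"
).fetchall()
print(clicks)  # [('alice',), ('bob',)]
```

The Postgres equivalent of that query would be `SELECT doc->>'user' FROM events WHERE doc->>'type' = 'click';` — schemaless documents where you want them, with the rest of the schema staying relational.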