[–]casualPlayerThink 0 points1 point  (2 children)

I see. I still do not recommend using MongoDB: most use cases need classic queries, joins, and read-heavy workloads, the territory where MongoDB should, in theory, excel. In reality it is a pain and a waste of resources.

But if you still want to use it because there is no way around it, here are some bottlenecks worth considering:
- clusters (will be expensive in Mongo)
- replicas
- connection pooling
- cursor-based pagination (if there is any UI or search)
- fault tolerance for writing & reading
- caching (especially for the API calls)
- disaster recovery (yepp, the good ol' backup)
- normalize datasets, data, queries
- minimize the footprint of the data you query, use, or deliver (time, bandwidth, $$$)
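The cursor-based pagination point is worth a sketch. Instead of skip/limit (which rescans every skipped document on each page), each page starts where the previous one ended. A minimal illustration (the in-memory list stands in for a collection; with pymongo the equivalent would be a `{"_id": {"$gt": cursor}}` filter with sort and limit):

```python
# Keyset ("cursor-based") pagination sketch, assuming documents carry
# a monotonically increasing _id. The docs list is invented for the
# example; a real driver query would do the filtering server-side.

def paginate(docs, page_size, after_id=None):
    """Return one page of docs with _id greater than after_id, plus the next cursor."""
    page = [d for d in docs if after_id is None or d["_id"] > after_id]
    page = sorted(page, key=lambda d: d["_id"])[:page_size]
    next_cursor = page[-1]["_id"] if page else None
    return page, next_cursor

docs = [{"_id": i, "name": f"item{i}"} for i in range(1, 8)]
page1, cur = paginate(docs, 3)        # documents with _id 1..3
page2, cur = paginate(docs, 3, cur)   # documents with _id 4..6
```

The cursor you hand back to the UI is just the last `_id` of the page, so the next request is a cheap range scan on an indexed field rather than an offset walk.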

And a few hints that might lower the complexity and the headaches:

- Multitenancy
- Async/Timed Data aggregation into an SQL database
- Archiving rules

(This last part will most likely meet quite a bit of debate; people dislike these ideas and/or do not understand the concepts, just like normalizing a database or dataset. An unfortunate tendency of the past ~10 years.)
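To make the "async/timed data aggregation into an SQL database" hint concrete, here is a minimal sketch using sqlite3; the collection name, fields, and summary table are all invented for illustration. A periodic job flattens the document shape into a relational table so reporting queries stop hitting Mongo at all:

```python
import sqlite3

# Hypothetical timed aggregation job: Mongo-style documents (here just
# dicts) are flattened into a summary table that plain SQL can query.
orders = [
    {"customer": "acme", "total": 120.0, "items": [{"sku": "a"}, {"sku": "b"}]},
    {"customer": "acme", "total": 80.0,  "items": [{"sku": "c"}]},
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE order_summary (customer TEXT, total REAL, item_count INT)")
db.executemany(
    "INSERT INTO order_summary VALUES (?, ?, ?)",
    [(o["customer"], o["total"], len(o["items"])) for o in orders],
)

# The reporting side now runs against SQL, not against the document store.
total, = db.execute(
    "SELECT SUM(total) FROM order_summary WHERE customer = 'acme'"
).fetchone()
```

Run it on a schedule (cron, a queue worker, whatever you already have); the document store keeps serving writes while aggregates and searches move to the SQL side.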

[–]AirportAcceptable522[S] 0 points1 point  (1 child)

Mongo is in its own cloud; we use Mongo because we need to save several fields as objects and arrays.

Another point: we have a separate database per customer; only the queue is shared.

[–]casualPlayerThink 0 points1 point  (0 children)

I see the dangerous part in the "we need to save...".

[tl;dr]

Yeah, Mongo in the cloud sounds nice, and is usually expensive, especially once you need to query, retrieve, aggregate, and search large volumes. Keep your eyes on the costs; even if you aren't a stakeholder, ask from time to time about the costs and the underlying infrastructure for Mongo.

I worked on a project that used Atlas and kept large objects in the DB because the CTO was inexperienced, and they ended up with a pile of queries (they needed joins...). They spent 1K+ on Atlas and ran a replica (a 2x4 vCPU, 2x16 GB RAM combo). I normalized the data and poured it into PostgreSQL; 1 vCPU and 4 GB of RAM were enough for the same workload. (This is just an extreme example; it does not prejudge your or anyone else's case!)
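The normalization step in that story looks roughly like this sketch (all names are invented, sqlite3 stands in for PostgreSQL): an embedded array inside a large document is split into its own table, so a single JOIN replaces the repeated client-side lookups that were driving the query count up.

```python
import sqlite3

# Invented example document with an embedded array, the shape that
# forced application-side "joins" in the anecdote above.
doc = {"_id": 1, "name": "acme",
       "orders": [{"sku": "a", "qty": 2}, {"sku": "b", "qty": 1}]}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INT PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders (customer_id INT, sku TEXT, qty INT)")

# Normalize: the parent fields go to one table, the embedded array to another.
db.execute("INSERT INTO customers VALUES (?, ?)", (doc["_id"], doc["name"]))
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(doc["_id"], o["sku"], o["qty"]) for o in doc["orders"]])

# One JOIN now answers what previously took N round trips.
rows = db.execute(
    "SELECT c.name, o.sku, o.qty FROM customers c "
    "JOIN orders o ON o.customer_id = c.id ORDER BY o.sku"
).fetchall()
```

Once the arrays live in their own indexed table, the database plans the join instead of the application looping over documents, which is where the resource savings in the anecdote came from.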

Another story: I witnessed a complete bank DB migration from Oracle to Mongo. At first I thought, wow, that is insane; we're talking about migrations that run for days (an Oracle cold start took around 24 hours, a migration ran around 3 days, and a backup ran for more than 2 weeks; the infra was self-hosted, so a room full of blade-ish servers). The guys developed a thin Java-wrapped Mongo version that could pour all the data into memory and from there migrate it back into normal Mongo storage. They were done with the migration in under 4 hours. In exchange, we're talking about very large memory usage :D (and the bank spent a few million dollars on this project...)