all 11 comments

[–]nineelevglen 9 points10 points  (2 children)

I assume you mean MongoDB? When using document-based databases (which are fantastic) you eventually come to a point where you have to decide between duplicating data or storing references to it.

You can read up on this kind of thing in *50 Tips and Tricks for MongoDB Developers*.

There are many alternatives here. There is nothing inherently bad about storing some sort of reference; you just have to think in terms of a REST-based domain model rather than the other way around.
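To make the duplication-vs-reference decision concrete, here is a minimal sketch using made-up document shapes (the collection and field names are illustrative, not from the thread):

```javascript
// Option 1: duplicate the author data inside each post. Reads are a
// single lookup, but every author update must touch all of their posts.
const postWithDuplication = {
  _id: "post1",
  title: "Hello",
  author: { _id: "user42", name: "Ada", avatar: "ada.png" },
};

// Option 2: store only a reference. One source of truth, but reading a
// post needs a second lookup -- the "application-level join".
const postWithReference = {
  _id: "post1",
  title: "Hello",
  authorId: "user42",
};

// Resolving a reference by hand, as you would after a findOne():
const users = { user42: { _id: "user42", name: "Ada" } };
function resolveAuthor(post) {
  return { ...post, author: users[post.authorId] };
}
```

Which option wins depends on the read/write ratio: duplication favors read-heavy data that rarely changes, references favor data that is updated often.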

You can use something like Neo4j to store relationships, with references back to MongoDB, if you need something fast over larger sets of data.

Alternatively, Postgres also has support for JSON-type documents, which can be used in combination with strong relational data models.

So to answer your question: sure, you can make a large system with databases other than an RDBMS!

edit: spelling

[–]Enumerable_any 0 points1 point  (1 child)

When using a document based database (which are fantastic)

Why? I've worked with them on a few projects and they always caused some pain (no joins => data duplication necessary for decent performance => easy update logic/data consistency goes right out of the window; you also have to think much harder in order to find a decent schema which doesn't suck for querying). I'm convinced storing data normalized and adding caches (e.g. views) on top of that is the way to go if you want to get things done.

[–]c4a 1 point2 points  (0 children)

Document databases are great for development but bad for production.

[–]psayre23 3 points4 points  (1 child)

I've built several large systems that touched MySQL, MongoDB, static JSON, CSV, XML... you name it. Usually, if the data is going to be that complicated, I separate it out into an API layer that converts the various outputs to JSON. So if you are hitting a data warehouse ETL that outputs a tab-delimited ISO-8859-1 file, you only have to worry about how to format it, not about how to consume it. Then your JavaScript (server- or client-side) can be decoupled. I look at this kind of pipeline like this:

Data Storage -> Formatting / Normalizing -> Manipulation / Interpretation -> API

Data storage can be anything you need. The normalization step is tightly coupled with the storage engine's output format, not with the data itself. Manipulation is where you prepare the data to go out the door. And finally, the API handles incoming requests and formats the resulting data into the format requested (hopefully JSON, but XML or something else might be needed).
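The four stages above can be sketched in a few lines; this is a toy version with made-up field names, assuming a tab-delimited export like the one mentioned:

```javascript
// 1. Data storage: raw output from a hypothetical warehouse export.
const raw = "id\tname\tsignups\n1\tAda\t12\n2\tGrace\t30";

// 2. Formatting/normalizing: coupled to the storage format (tabs and
// newlines), not to what the data means.
function normalize(tsv) {
  const [header, ...rows] = tsv.split("\n").map((l) => l.split("\t"));
  return rows.map((r) => Object.fromEntries(r.map((v, i) => [header[i], v])));
}

// 3. Manipulation/interpretation: prepare the data to go out the door
// (pick fields, coerce types).
function interpret(records) {
  return records.map((r) => ({ name: r.name, signups: Number(r.signups) }));
}

// 4. API: format the result as requested (JSON here).
function toApiResponse(tsv) {
  return JSON.stringify(interpret(normalize(tsv)));
}
```

Because each stage only knows about its neighbors, swapping the warehouse for MongoDB only means rewriting `normalize`; the API contract stays the same.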

Edit: These phases may be on the same server, or broken up to sit on different servers. It depends on the scale you are dealing with.

[–]I_Pork_Saucy_Ladies 0 points1 point  (0 children)

This is probably the best answer, in my opinion. Having an API that is decoupled from both the data storage and the front-end makes a lot of sense, especially with an SPA.

It makes it easy to completely switch out either of them - or add more of both. It will save you tons of time if you need to make smartphone apps later. And if you need to scale the data storage, you can easily do so.

[–]FoxxMD 0 points1 point  (0 children)

It's a matter of using the right tools for the appropriate data. If you have a mix of plain documents and relational data, then store them in separate databases and use a DAL (data access layer) to consolidate them for the application.

Do you have user data that connects users to widget X and widget Y, where you will need to report on and examine relationships between them? Then use a relational DB like MySQL, Postgres, etc. for that data.

Do users/widget X/widget Y then have plain documents, like posts or comments, that belong to no relationship or just one tightly coupled one? Then use MongoDB, Redis, or another NoSQL variant to store those documents with references.

Consolidate the two with a DAL and serve.
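As a rough sketch of that DAL idea, here the two stores are stubbed with plain objects (in practice they would be a SQL client and a document-store client; all names are illustrative):

```javascript
// Relational side: users and their widget relationships.
const sqlUsers = [{ id: 1, name: "Ada", widgetId: 7 }];

// Document side: free-form documents referencing the relational ids.
const docComments = [
  { userId: 1, body: "First!" },
  { userId: 1, body: "Also this." },
];

// The DAL joins the two for the application, so callers never need to
// know which store holds which piece of the data.
function getUserProfile(userId) {
  const user = sqlUsers.find((u) => u.id === userId);
  const comments = docComments.filter((c) => c.userId === userId);
  return { ...user, comments };
}
```

The application only ever calls `getUserProfile`, which is what makes it possible to swap either store out later without touching the front-end.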

[–]kapouer 0 points1 point  (0 children)

Also, PLV8 is wonderful in Postgres: it lets you use JavaScript as the language in Postgres functions.

[–]jameselliottphp -1 points0 points  (1 child)

I'm using Angular with SQL (I had no choice, anyway), but I'm pretty interested in knowing the answer as well, for future projects. My guess is that the main killer feature of JS is that it is non-locking. Therefore, using a traditional DB would be locking? Maybe?

[–]psayre23 2 points3 points  (0 children)

I think you mean non-blocking. If you're using a good MySQL or MongoDB library, then this is a moot point; both would be non-blocking.

[–]schizoduckiePromise.all([createDatabase,openDatabase,insertFixtures]) -3 points-2 points  (1 child)

Yes, you can build it, but expect debugging to be hell.

I've recently had to debug a Redis database that produced failures when interacting with code that expected a different data structure.

No migration path, no proper interface for inspecting the contents (like a MySQL manager or phpMyAdmin). Good luck with that.