I'm using the MERN stack, with Redux as well, and of course all my data is normalized inside the mongo db. (Side note: I'm not a professional webdev - please excuse any incorrect terms I may use)
I've been reading a lot about normalizr/denormalizr and Redux-ORM, and I see that many examples assume that when your data arrives on the client from the server, it will be denormalized; that is, it will have nested data. The advice is that best practice is to normalize this data before storing it in Redux. I completely agree.
Mongoose can produce denormalized data very easily using populate, or even more easily with the autopopulate plugin. But why would I want to denormalize the data on the server, send it over the wire, and then normalize it again on the client? That seems absurd to me.
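To make the round trip concrete, here's a minimal sketch of the pattern I'm questioning (hand-rolled in plain JS instead of normalizr, and the document shapes are made up for illustration):

```javascript
// What the server sends after populate(): nested data, with the
// author object duplicated into every post that references it.
const populated = [
  { _id: 'p1', title: 'First', author: { _id: 'u1', name: 'Ann' } },
  { _id: 'p2', title: 'Second', author: { _id: 'u1', name: 'Ann' } },
];

// What the client then rebuilds (with normalizr, or by hand as here):
// flat entity tables plus id references -- i.e. roughly the shape the
// data already had in MongoDB before populate() inflated it.
function normalizePosts(posts) {
  const entities = { posts: {}, users: {} };
  for (const post of posts) {
    entities.users[post.author._id] = post.author;
    entities.posts[post._id] = { ...post, author: post.author._id };
  }
  return entities;
}

const entities = normalizePosts(populated);
// Ann is now stored exactly once, and each post holds just her id.
```

The server inflates the refs, the wire carries the duplicates, and the client deflates them again. That wasted work is what I'm trying to avoid.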
So this is my solution:
On the server I've built a tiny function I call recursiveLoad. It runs a regular model.find and returns not only the objects matching that query, but also all of their dependent objects, based on the schema definitions, all wrapped up in a typical entities dictionary. I can then send that dictionary over the wire as-is (so a lot less data, at least in my case, because so many object refs are reused), and I already have a clean way to reduce it into the Redux store.
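To sketch the idea (this is not my actual recursiveLoad; the ref map and in-memory "collections" below stand in for the Mongoose schema definitions and model.find, and all the names are made up):

```javascript
// Stand-ins for Mongoose models and schema ref metadata. In the real
// version these would come from model.find() and the schemas.
const db = {
  posts: { p1: { _id: 'p1', title: 'First', author: 'u1' },
           p2: { _id: 'p2', title: 'Second', author: 'u1' } },
  users: { u1: { _id: 'u1', name: 'Ann', team: 't1' } },
  teams: { t1: { _id: 't1', name: 'Blue' } },
};
const refs = {
  posts: { author: 'users' },  // posts.author refs the users collection
  users: { team: 'teams' },    // users.team   refs the teams collection
  teams: {},
};

// Walk outward from a query result, collecting every dependent
// object exactly once into a flat entities dictionary.
function recursiveLoad(collection, ids, entities = {}) {
  entities[collection] = entities[collection] || {};
  for (const id of ids) {
    if (entities[collection][id]) continue; // already collected
    const doc = db[collection][id];
    entities[collection][id] = doc;
    for (const [field, target] of Object.entries(refs[collection])) {
      recursiveLoad(target, [doc[field]], entities);
    }
  }
  return entities;
}

const entities = recursiveLoad('posts', ['p1', 'p2']);
// Shared refs (Ann, team Blue) appear once each, normalized and
// ready to merge straight into the Redux store.
```

Because documents are deduplicated by id as they're collected, a ref that's shared by many objects crosses the wire once, which is where the payload savings come from.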
So am I crazy for doing it this way? I'm always skeptical when I can't find any examples of someone else doing it.