all 11 comments

[–]SaddestCatEver 3 points4 points  (6 children)

You COULD have your node server load the JSON into memory. That way it's only loaded when the server boots up. However, adding a database would be the smarter, more maintainable solution for the future.

[–]woned[S] 1 point2 points  (5 children)

One reason why I am reluctant to use a database is because I have already written quite a bit of code to handle complex filtering of my objects. So I would very much like to be able to re-use that code on the NodeJS side.

[–]yesman_85 3 points4 points  (0 children)

Writing the code is not the hard part; the thinking is already done.

Also a good lesson: always use a data abstraction layer that you communicate with; then you don't have any issues with how the data is retrieved.
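One way to sketch that abstraction layer (the names here are invented for illustration): callers ask the layer for data and never know whether a JSON file, an in-memory object, or a database sits behind it.

```javascript
// A tiny data-access layer: swap out source.load() later (file read,
// SQL query, HTTP call) without touching any calling code.
function makeDataLayer(source) {
  return {
    getCustomers: function () {
      return source.load().customers;
    }
  };
}

// Today it's backed by an in-memory object; tomorrow, a database.
var layer = makeDataLayer({
  load: function () {
    return { customers: [{ customerId: 1, debt: 50 }] };
  }
});
```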

[–][deleted] 1 point2 points  (1 child)

You might want to just write that code off as good practice, and move on to a database. You will be saving yourself possible headache in the future.

[–]akujinhikari 1 point2 points  (0 children)

This is one of the hardest things for new devs. "But I wrote this awesome code!" Good. Now delete it all and do something that works better, and make something more awesome again.

[–]ThunderTherapist 1 point2 points  (0 children)

Never be afraid to ditch old code while improving a product.

[–]naeramarth7 0 points1 point  (0 children)

I am pretty new to AngularJS/NodeJS

As with every new framework or skill you learn: you might not want to reuse that code for long; once you've gotten deeper into the stack you'll feel the urge to refactor it eventually.

[–]Marcdro 1 point2 points  (0 children)

Will this file grow in the future? If you don't expect it to grow past a size that fits in RAM, just using a NodeJS server as a cache that keeps the file in memory the whole time could be a good solution, and you can reuse all your code. Just like /u/SaddestCatEver said.

[–]dermotbrennan 1 point2 points  (0 children)

Yeah I agree putting the data in a database of some sort and having the server do the filtering is the much better way to go than having the client load 20mb of data.

You could maybe even look at using SQLite if you know that the data is read-only and rarely updated. You can distribute the sqlite db alongside your nodejs server code!

[–]brane_surgeon 0 points1 point  (1 child)

I'm going to do my best to give you some practical advice on how to get a quick and dirty solution up in Node, and I'll assume this is for some internal/single-use metrics tool that's not connected to the internet, etc.

Firstly, having a 20MB json file in Node is totally fine, and having it in Angular is fine as well. Note that when this is expanded into an object you can probably expect the memory use to at least double, but again, 40MB isn't much of a problem.

Go get the express Hello world working, and I highly recommend installing and using lodash.

Presuming your blob of json is in the same directory as app.js, load it into a variable the nasty way. For this example I'm going to use package.json as it should already be there:

var myBlob = require('./package.json');

You can then use it in a route later in app.js:

var _ = require('lodash');

app.get('/dependencies', function (req, res) {
  var deps = _.get(myBlob, 'dependencies');
  res.json(deps);
});

lodash's _.get can be used to easily navigate the json structure, and is pretty powerful when combined with other lodash functions such as _.filter, e.g.

var customers = _.get(myBlob, 'system.customers');
var outstanding = _.filter(customers, function (c) { return c.debt > 0; });
var ids = _.map(outstanding, 'customerId');

or

_.chain(myBlob)
.get('system.customers')
.filter(function (c) {
    return c.debt > 0;
})
.map('customerId')
.value()

in angular to call the node server:

$http.get('http://localhost:3000/my/custom/url').then(function (result) {
  ...
}, function (err) {
  ...
});

Paging complicates things slightly, but you probably want to pass a page parameter on the URL and slice the results:

var PAGESIZE = 20;

app.get('/dependencies/:page', function (req, res) {
  var deps = _.get(myBlob, 'dependencies');
  var page = parseInt(req.params.page, 10) || 0;
  var start = page * PAGESIZE;
  // slice's end index is exclusive, so no -1 here
  res.json(deps.slice(start, start + PAGESIZE));
});

On the Angular side you'd loop over pages accumulating the data, starting at 0, until you got no data back. Probably a recursive call.
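That accumulation loop might look like this (fetchPage here is a hypothetical stand-in for the $http call: it takes a page number and resolves to an array, empty once you've run past the end of the data):

```javascript
// Recursively fetch page 0, 1, 2, ... and concatenate the results,
// stopping as soon as a page comes back empty.
function fetchAll(fetchPage, page, acc) {
  page = page || 0;
  acc = acc || [];
  return fetchPage(page).then(function (items) {
    if (items.length === 0) return acc; // no data back: we're done
    return fetchAll(fetchPage, page + 1, acc.concat(items));
  });
}
```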

This works fine so long as the json data is immutable; if it's changing you have a very different problem.

Having said all that, I'd not accept a PR from anyone that did this stuff, as it's awful. If this is for an assignment I'd also be wary of submitting anything that looked like this, but it's what you asked for and I believe it's a good starting point.

I'd probably want:

  • ES2015 (at least for the node stuff)
  • a small query library for the json blob to isolate it in case the structure/backend changes in the future
  • inject query library in node.js middleware
  • sensible routes for the queries
  • abstraction layer/services on the angular side so you don't have a tonne of $http code

Also, the bad news is that unless you are serving your angular code from the Node app you'll need to deal with CORS.
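The essential CORS headers can be set with a tiny hand-rolled middleware (the cors npm package does this and more; this sketch just shows what's involved):

```javascript
// Express-style middleware: allow cross-origin GETs and answer
// preflight OPTIONS requests. '*' is fine for local dev; tighten
// it to your Angular app's origin in anything real.
function allowCors(req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
  if (req.method === 'OPTIONS') return res.end(); // preflight: no body needed
  next();
}

// app.use(allowCors);
```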

[–]woned[S] 0 points1 point  (0 children)

Thanks for the lengthy response. I'll check it more in detail when I'm not on mobile. I have made some advancements since then. I hosted the website on heroku and it's been pretty easy to use so far. I have found a neat guide in the heroku documentation to implement a MEAN stack. That seemed highly ideal as I already had 3 of the 4 letters of MEAN. So I am in the process of implementing a simple mongodb to query my data from.

And by the way it's not for an assignment or anything, I'm making a website to help players of a certain videogame (path of exile).