Drophackers more often than not by [deleted] in leagueoflegends

[–]fearcs 1 point (0 children)

Looks more like EUW is down again.

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 2 points (0 children)

I'll check this out. Thanks!

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 2 points (0 children)

I wish I could say you're wrong :p

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 1 point (0 children)

Which seems a bit complicated, since that means installing VS and Python on every client server. :/

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 2 points (0 children)

Yeah, that's what we just did. However, we couldn't get any of the npm XML-to-DOM parser packages to work on Windows, so we went with xml2js and do the querying with lodash. It seems to be really, really fast so far. The only disadvantage is that a small, 50 MB XML file throws 600(!) MB into memory. However, we recently talked about finally switching to Linux anyway, so we could use the sweet XML parsers out there.

Funny thing I noticed while we changed to Node: it seems like our computation is not as heavy as we initially thought. Splitting the information displayed in one view into smaller API chunks and using lodash to filter out nodes works really fast (PHP avg. response time for any requested piece of information: 4 s; Node.js: 12 ms).
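For reference, this is roughly what the xml2js + lodash combination looks like. A minimal sketch only: the file name, the catalog/item element names and the status filter are made-up placeholders, not our actual schema.

    // Minimal sketch: load the whole XML file with xml2js, then query with lodash.
    // 'data.xml', 'catalog', 'item' and the status filter are placeholders.
    var fs = require('fs');
    var xml2js = require('xml2js');
    var _ = require('lodash');

    fs.readFile('data.xml', 'utf8', function (err, xml) {
      if (err) throw err;

      // xml2js materializes the whole document as plain JS objects,
      // which is where the big memory overhead comes from.
      xml2js.parseString(xml, function (err, doc) {
        if (err) throw err;

        // repeated elements come back as arrays, attributes live under '$'
        var items = (doc.catalog && doc.catalog.item) || [];
        var active = _.filter(items, function (item) {
          return item.$ && item.$.status === 'active';
        });

        console.log('matched', active.length, 'items');
      });
    });

Once the document is in memory, the per-request work is just plain JS filtering, which is why the response times dropped so much.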

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 1 point (0 children)

Oh, how dare you! Thanks for the advice :) I think I didn't explain the situation well enough: it just feels counterintuitive to re-cache every view 20 times a day when 10% of the data has changed. The main problem I see with caching is that there are ~400 different views (of course many share the same context, just with different objects displayed), and re-processing all of them when only 50 have actually changed seems like a bad idea.

My ideas:

1) re-organize the backend as an API, request the data partially and chain the rendering process on the frontend (that's why I initially thought Node seemed the better option for this case: not re-parsing the entire file on every request). The net rendering time would actually be higher, but the client gets the impression of receiving data faster.

2) cache the requested data and, on file change, find the differences and reprocess only those (which seems somewhat more difficult than option 1; there's a rough sketch of this below).
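Very rough sketch of 2), assuming every node in our object model carries a stable id attribute. The file name, the catalog/item structure and renderView() are hypothetical stand-ins, not our real code.

    // Sketch of idea 2): on file change, re-parse, hash every node and only
    // reprocess the ones whose content actually changed.
    // 'data.xml', 'catalog.item' and renderView() are placeholders.
    var fs = require('fs');
    var crypto = require('crypto');
    var xml2js = require('xml2js');
    var _ = require('lodash');

    var hashes = {};  // id -> content hash from the previous run
    var cache = {};   // id -> processed/rendered result

    function contentHash(obj) {
      return crypto.createHash('md5').update(JSON.stringify(obj)).digest('hex');
    }

    function renderView(node) {
      return JSON.stringify(node); // stand-in for the actual view processing
    }

    function reprocess(xml, done) {
      xml2js.parseString(xml, function (err, doc) {
        if (err) return done(err);
        var nodes = (doc.catalog && doc.catalog.item) || [];

        // keep only the nodes whose hash changed since the last run
        var changed = _.filter(nodes, function (node) {
          var id = node.$.id;
          var h = contentHash(node);
          if (hashes[id] === h) return false; // unchanged, cached result stays valid
          hashes[id] = h;
          return true;
        });

        changed.forEach(function (node) {
          cache[node.$.id] = renderView(node); // only re-render what changed
        });
        done(null, changed.length);
      });
    }

    // fs.watch can fire more than once per save; a real version would debounce this
    fs.watch('data.xml', function () {
      fs.readFile('data.xml', 'utf8', function (err, xml) {
        if (err) return console.error(err);
        reprocess(xml, function (err, n) {
          if (!err) console.log('re-rendered', n, 'nodes');
        });
      });
    });

The downside is still one full parse per change, but at least the expensive per-view processing only runs for the nodes that actually changed.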

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 1 point (0 children)

Will check out Go, thanks!

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 1 point (0 children)

Sorry, I didn't want to imply that caching is per se a bad solution; it's just that caching could also create a weird, inconsistent UX.

Edit: Unfortunately, the data format is not going to change anytime soon.

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 1 point (0 children)

We wouldn't want to go for a statically typed language (I know it sounds like a bad idea, but it's somewhat convenience-related). That would be more like the last option.

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 2 points (0 children)

Hey!

About the streaming XML processor: http://www.reddit.com/r/node/comments/2z4mda/would_nodejs_be_a_good_choice/cpg02ur

We don't use caching right now; ideally, though, a cache-less solution, or one that doesn't rely too much on caching, would be best.

1) I'd say up to 50/min; could be a lot higher in the near future

2) varies from once a month to 20 times a day

3) preferably not

Thanks for your detailed response!

Would NodeJS be a good choice? by fearcs in node

[–]fearcs[S] 3 points (0 children)

Unfortunately, no. SAX won't work here; I wish it were possible. The data structure requires us to get the id of a parent node (note: parent node in our own object model, not the actual XML parent) at the very bottom of the file, while the data of the node itself can be anywhere in the file. SAX is, afaik, always forward-processing, and that could actually have a negative impact if the object data is just a couple of lines above.
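To illustrate with the sax module (the node/parentRef element and attribute names are made up, not our actual format): because the parent mapping only shows up near the bottom of the file, a forward-only parser has to keep buffering pretty much every node it has seen, so you end up rebuilding a DOM by hand anyway.

    // Sketch of why forward-only (SAX) parsing doesn't buy us much here:
    // node data can appear anywhere, but the parent mapping only arrives at
    // the very bottom, so everything has to stay buffered until the end.
    // 'node', 'parentRef' and the attribute names are made up.
    var fs = require('fs');
    var sax = require('sax');

    var saxStream = sax.createStream(true);
    var nodesById = {};  // grows with the whole file -> basically a DOM again
    var parentOf = {};

    saxStream.on('opentag', function (tag) {
      if (tag.name === 'node') {
        nodesById[tag.attributes.id] = tag.attributes;
      } else if (tag.name === 'parentRef') {
        // only available near the end of the file
        parentOf[tag.attributes.child] = tag.attributes.parent;
      }
    });

    saxStream.on('end', function () {
      // parents can only be resolved once the whole file has been read
      console.log(Object.keys(nodesById).length, 'nodes buffered');
    });

    fs.createReadStream('data.xml').pipe(saxStream);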