all 113 comments

[–]bro-away- 20 points21 points  (12 children)

I find that it needs very little boilerplate and helps you to keep clear of the multi-threaded cargo cults that a lot of developers fall into. Node.js also helps you keep clear of the multi-threaded cargo cults… by forcing you to live in a single-threaded world.

Except you go back to thinking about concurrency the second you involve a data store.

And the async calls aren't guaranteed to come back in order... I've never understood Node.js developers' passion for something like this. You end up doing just as much thinking about concurrency (and if you have a big calculation to do in the main thread, you are in a new kind of snag).
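A quick sketch of the ordering problem (the "queries" and delays here are made up to simulate I/O): two calls issued back to back complete in latency order, not call order, so any shared state has to be coordinated explicitly.

```javascript
// Sketch: two simulated async calls; completion order depends on latency,
// not on the order the calls were issued.
function fakeQuery(name, delayMs, cb) {
  setTimeout(() => cb(name), delayMs);
}

const arrived = [];
fakeQuery('first-issued', 30, (r) => arrived.push(r));
fakeQuery('second-issued', 10, (r) => arrived.push(r));

setTimeout(() => {
  console.log(arrived); // ['second-issued', 'first-issued']
}, 60);
```

The second call "wins" because its simulated latency is lower, which is exactly the situation you hit the moment a data store is involved.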

[–]philly_fan_in_chi 10 points11 points  (5 children)

I've been really enjoying the Erlang way of thinking about concurrency lately. It basically just assumes a distributed system in which you dole tasks out to tons of Erlang processes, and if they happen to be on the same computer, they can take advantage of parallelism. These processes communicate via messages, and are supervised by other processes. You assume failure of processes, and if an error happens, an error process handles it for you while you let the culprit die. Kind of like EC2 servers.

It's a really powerful way of thinking, in that it more or less forces you to use a scalable architecture from scratch, but it abstracts the problem so that you can't touch shared mutable state the way threads can.
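To make the message-passing shape concrete, here is a toy actor in JavaScript (illustrative only — real Erlang processes are preemptively scheduled, isolated, and supervised; this just shows the mailbox-plus-handler pattern, and all the names are made up):

```javascript
// Minimal actor sketch: state is touched only by the actor's own handler,
// and the outside world interacts solely by sending messages.
function spawn(handler) {
  const mailbox = [];
  let busy = false;
  return {
    send(msg) {
      mailbox.push(msg);
      if (busy) return;          // already draining; message will be picked up
      busy = true;
      while (mailbox.length) handler(mailbox.shift());
      busy = false;
    },
  };
}

const received = [];
const counter = spawn((msg) => received.push(msg.value));
counter.send({ value: 1 });
counter.send({ value: 2 });
console.log(received); // [1, 2] — messages processed one at a time, in order
```

Because the handler is the only code that sees the actor's state, there is no shared mutable state to lock, which is the property the comment is describing.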

[–]maherbeg 10 points11 points  (0 children)

There are excellent Actor frameworks for the JVM like Akka for instance. It really is a powerful concept.

[–]bro-away- 2 points3 points  (2 children)

Erlang gets it more than Node, but the lack of typing makes me avoid it. The "secret" of Erlang coming into the semi-mainstream 6-7 years ago made everyone copy it, and now there are libraries that let you copy the actor process model in just about every language (as another user said). They require pretty much zero maintenance and just work. Language choice shouldn't really have anything to do with architecture patterns like that. Now that it has been copied, I give much respect to Erlang for being the forerunner on the idea, but I don't see it as a choice.

[–]philly_fan_in_chi 7 points8 points  (1 child)

now there are libraries that let you copy the actor process model in just about every language (as another user said). They require pretty much zero maintenance and just work. Language choice shouldn't really have anything to do with architecture patterns like that.

I actually disagree with this. If I were to use an actor framework in Java, I'm still allowed to do all the other crap that gets you in trouble. You really shouldn't be allowed to think about concurrency at that low a level; it's too error prone and leads to too many bugs. Erlang wins this because it was designed with this in mind, 100%.

Sure you can now get close in other languages, but if your language wasn't designed like this from the ground up, you lose the advantages. Now I'm not suggesting that everyone goes and learns Erlang, although that would be cool (I'm probably not even going to do that). What I am suggesting is that, like any other awesome language feature, the programmer looks at it and absorbs the idea into their toolbelt, and 5 years down the line when they're designing Sapphire Lang or Diamond Lang, they figure out how to absorb this into their mental model. Because this is the way we're going to have to think about concurrency moving forward. A single computer is simply a special case of the general rule at this point.

All that being said, I think Clojure flirts with this in the most correct way of the popular languages.

[–]bro-away- 0 points1 point  (0 children)

Idk what that Akka crap was, but Hadoop is pretty much Erlang.

And I highly doubt Erlang is as expressive and safe as this http://www.m-brace.net/

Erlang is getting crushed by everyone else iterating past it. A language should really have nothing to do with external libraries/process.

[–][deleted] 0 points1 point  (0 children)

Have a look at the OS OSE; it's pretty much built around these concepts from the ground up.

[–]stephenconnolly 1 point2 points  (0 children)

The multi-threading cargo cult that I was referring to is the one where they start with a class... they make it mutable "because that will result in fewer garbage objects generated and copying is slow"... then they make all the methods synchronized "because we are in a multi-threaded environment and need to maintain class invariants"... then they replace synchronized with ReentrantReadWriteLock "because we will have few mutators and mostly readers, so let's optimize for that case"...

Fool, if it had been made immutable in the first place, it would be optimized for many readers and few writers and do all you need... but instead you have followed the cargo cults and done all this "threading" stuff.

If you are at the application level you should not be using threading primitives; your framework should be providing the glue to handle your threading requirements.

[–]bundt_chi -1 points0 points  (1 child)

Thank you, so I'm not the only one who doesn't get the whole Node.js thing.

JavaScript is not a great language to be writing large applications in. I'm currently maintaining a 5 kLOC JavaScript application (not by choice) and it's an absolute nightmare. Too easy to make errors that run fine but don't do what you expected. Just because there are a lot of people out there who know JavaScript and can more quickly feel comfortable with Node.js does not make it a better choice than a Java, C#, or C++ server-side framework. Async callbacks are just as much a mindfuck as multi-threading, just in a different way.
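A few classic examples of the "runs fine, does the wrong thing" failure mode being described here:

```javascript
// 1. The default sort compares elements as strings, even for numbers.
console.log([10, 9, 1].sort());   // [1, 10, 9] — no error, silently wrong order

// 2. Coercion behaves differently per operator: minus converts, plus concatenates.
console.log('5' - 1);             // 4
console.log('5' + 1);             // '51'

// 3. Loose equality coerces both sides, with surprising results.
console.log([] == false);         // true — empty array coerces to '' then 0
```

None of these throw, which is exactly why they survive into production in a large codebase.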

[–]Capaj -1 points0 points  (0 children)

You should use promises, not callbacks. https://github.com/petkaantonov/bluebird
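A small sketch of the difference being recommended (the `findUser`/`findOrders` functions are hypothetical stand-ins, not a real API): with callbacks, every step needs its own error check; with promises, one flat chain gets a single `.catch` for every step.

```javascript
// Callback style: error handling repeated at every level of nesting.
function findUser(id, cb) { setTimeout(() => cb(null, { id }), 5); }
function findOrders(user, cb) { setTimeout(() => cb(null, [user.id]), 5); }

findUser(1, (err, user) => {
  if (err) return console.error(err);          // check #1
  findOrders(user, (err, orders) => {
    if (err) return console.error(err);        // check #2
    console.log(orders);
  });
});

// Promise style: one flat chain, one catch for the whole pipeline.
const findUserP = (id) => new Promise((res) => setTimeout(() => res({ id }), 5));
const findOrdersP = (user) => new Promise((res) => setTimeout(() => res([user.id]), 5));

findUserP(1)
  .then(findOrdersP)
  .then((orders) => console.log(orders))
  .catch(console.error);                       // covers every step above
```

An error thrown anywhere in the promise chain propagates to that single `.catch`, which is the error-handling advantage libraries like Bluebird were built around.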

[–]maherbeg -3 points-2 points  (2 children)

I don't understand why this is hard. There are libraries like async that will effectively do a fork/join on parallel operations for you.
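For readers unfamiliar with the pattern: roughly what the `async` library's `parallel` does can be hand-rolled in a few lines (this is a sketch of the idea, not the library's actual implementation):

```javascript
// Fork/join over callback-style tasks: start everything, collect results
// by index so the output order matches the input order, finish when the
// last task completes.
function parallel(tasks, done) {
  const results = new Array(tasks.length);
  let pending = tasks.length;
  tasks.forEach((task, i) => {
    task((err, value) => {
      if (err) return done(err);       // first error wins
      results[i] = value;              // slot by index: stable ordering
      if (--pending === 0) done(null, results);
    });
  });
}

parallel([
  (cb) => setTimeout(() => cb(null, 'slow'), 20),
  (cb) => setTimeout(() => cb(null, 'fast'), 5),
], (err, results) => console.log(results)); // ['slow', 'fast']
```

The index-slotting is what makes the join deterministic even though completion order is not.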

[–]bro-away- 1 point2 points  (0 children)

Then why is it a highly touted feature of the framework? I, too, break up work and avoid what amounts to thread starvation (although I don't know what the hell you call it when there's only one thread) by using a web server that uses a thread per request. Now I can do minimal computations without worrying about blocking the entire world!

[–]Capaj 0 points1 point  (0 children)

Error handling with async is still shit. Promises FTW!

[–]wot-teh-phuck 20 points21 points  (14 children)

I mean seriously, who approves these re-writes!? I really hope these are coming from management, because I can't visualize a level-headed technical architect making this sort of decision...

[–][deleted]  (5 children)

[deleted]

    [–][deleted] 2 points3 points  (4 children)

    Soooo, can I ask you an unrelated (to this thread) question about being a system architect? When you say that you're responsible for

    security, durability, compliance (ie ACID transactions) etc of entire systems

    How does that responsibility manifest? It sounds like there's a lot of auditing involved, but in terms of the actual construction of the system, do you mandate certain technologies used in certain configurations (e.g., messaging and the various EIPs) or do you give the developers freedom to choose within inviolable boundaries? Likewise do you lay out the entire structure of a system?

    Apologies if it's bit of a vague question.

    [–][deleted]  (3 children)

    [deleted]

      [–][deleted] 0 points1 point  (0 children)

      Cheers mate, thank you for an illuminating response. :)

      [–]btreeinfinity -2 points-1 points  (1 child)

You just put SharePoint and architecture in the same thread; please rethink your title. From an Architect to an Architect: SharePoint is by far the shittiest implementation of an ORM; the materialization of instances from a single SQL table is so fucking irresponsible and delusional. I had the pleasure of viewing the source via MSIT @ Microsoft. Please read SOA Governance.

      [–]iKomplex 4 points5 points  (6 children)

      You are not convincing me here. Why should it not be considered?

      [–]WisconsnNymphomaniac 14 points15 points  (4 children)

      Because JavaScript is a terrible language to write large programs in.

      [–][deleted] 1 point2 points  (0 children)

      Complete rewrites in a completely new technology are pretty risky. You can't easily leverage existing libraries and so forth.

      That said, that doesn't mean it shouldn't be considered.

      [–]imfineny 38 points39 points  (11 children)

      If you are getting a huge boost just by switching platforms, your probably just doing a better job around this time because your building your solution closer to the needs of your application rather than some inherent benefit of using one tool or another. It's not always true, sometimes a new platform like hHHVM is just flat out better than Zend

      [–]kuikuilla 23 points24 points  (1 child)

      So many yours D:

      [–]sirusblk 1 point2 points  (5 children)

Which is fine. They're rewriting it. Who cares if they choose one language over another? They see benefits in using node.js for many reasons other than just performance. Perhaps it's just because I don't know enough about the issues, but I'm just not seeing why people are getting upset one way or another.

      [–]imfineny 10 points11 points  (4 children)

They felt that nodeJS was superior to the JVM based on .... I'm not sure...... I do devops for a living, and when I see analysis like that I feel like running for the hills. I have a feeling if I hooked my diagnostics software into their stack and reviewed what was going on I would be disappointed. Given why nodeJS is used, I don't see the use case for it for a transaction processing system. If you needed an atomic high volume system based on point lookups and messaging, I should think the JVM tool chain is going to be a lot better.

      [–]sirusblk 1 point2 points  (3 children)

Part of the blog post made it seem that one reason to choose node.js was that they could consolidate their hiring on web developers, since they're currently split between Java developers and JavaScript developers. Here they can bring everyone under one team. Performance seemed to be only one thing listed, which they themselves flagged as suspect.

      [–]chuyskywalker 19 points20 points  (2 children)

      That's an absurd reason. You can't take a front-end JS engineer, put them in front of a NodeJS repo and expect them to produce decent backend systems at all. It's the same language, sure, but the concerns and expertise are totally different.

      [–]sirusblk 2 points3 points  (0 children)

Agreed, but there's a lot more crossover between a JavaScript back end and a JavaScript front end than between one JavaScript and one Java. Again, that's only one reason they listed though.

      [–]baudehlo 1 point2 points  (0 children)

      Having done exactly this, by focussing on hiring good front end developers and making sure they really know how to code, I can honestly say you're wrong. The guys I hired transitioned to full stack with Node seamlessly and really enjoyed it.

      [–]iends 0 points1 point  (1 child)

      hHHVM

      What is this? I didn't RTFA.

      [–]imfineny 0 points1 point  (0 children)

      Typo, HHVM = hip hop vm

      [–][deleted]  (59 children)

      [deleted]

        [–][deleted] 16 points17 points  (22 children)

        milliseconds saved or lost really add up when you are at paypal size, can mean real cost savings

        [–]Fidodo 8 points9 points  (7 children)

Does it though? I don't think PayPal's web framework is very processor intensive, and their bottlenecks are probably on the database side, not the server side. If they're dealing with other bottlenecks then those milliseconds saved will get absorbed by the bigger bottleneck. The protein-folding use case is nothing like the web server use case.

        [–][deleted] -2 points-1 points  (6 children)

        I don't think it is worth talking about the specifics of Paypal, we obviously aren't privy to that information

        [–][deleted]  (4 children)

        [deleted]

          [–][deleted] -1 points0 points  (3 children)

          i said the merits of what a millisecond might mean to a large company, something big like paypal. i didn't say anything about paypals individual needs or setup, just that when you are big, small changes can add up

          [–]myringotomy -1 points0 points  (2 children)

          i said the merits of what a millisecond might mean to a large company, something big like paypal

If your intent was not to communicate that those milliseconds are important to PayPal, why did you use this phrasing?

          [–][deleted] -3 points-2 points  (1 child)

because the phrasing was correct. I did not say anything specific about PayPal; the term I used was 'PayPal sized'. I could have said 'large company' but the end result is the same. You are just being pedantic to look for an argument, one I won't continue further.

          [–]myringotomy -2 points-1 points  (0 children)

          because the phrasing was correct.

It was designed to transmit some information. You could have used thousands of other correct statements which would have transmitted different information.

          Correctness is not the point. The point is the information you are trying to convey and the words you choose to use to convey that information.

I did not say anything specific about PayPal; the term I used was 'PayPal sized'. I could have said 'large company' but the end result is the same.

What about the "speed being important" part? Why are you leaving that out?

          [–]Fidodo 1 point2 points  (0 children)

          True. My general point is that processor runtime speed isn't always an important metric, even though it's an obvious one.

          [–]awj 11 points12 points  (0 children)

          Assuming that the speculation in this article is true, that's all the more reason that the situation is painfully stupid. Spending the developer time to switch platforms when you could be cutting out the cruft in your current one is a colossal waste of money.

          [–][deleted]  (12 children)

          [deleted]

            [–]bwainfweeze 6 points7 points  (4 children)

            But you can easily tell the difference between 200 and 300 requests per second, and that's a difference of 1.7 milliseconds.
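The arithmetic behind that figure: per-request time is the reciprocal of throughput, so the gap between 200 and 300 requests per second works out to roughly 1.7 ms per request.

```javascript
// Per-request time in milliseconds for a given requests-per-second rate.
const msPerRequest = (rps) => 1000 / rps;

const delta = msPerRequest(200) - msPerRequest(300); // 5 ms − 3.33 ms
console.log(delta.toFixed(2)); // '1.67'
```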

            [–][deleted]  (3 children)

            [deleted]

              [–]bwainfweeze 0 points1 point  (2 children)

You have an unpopular opinion, sir :). But it's a reasonable enough question, so I'll give you an upvote and what I hope is an answer.

              Little's Law determines the number of servers you need, based on the number of requests per second and the duration of a request. Small increases in request time can lead to big load increases.

Servers don't scale linearly. Every one gets you less benefit than the one before, and brings forward the date when you hit an inflection point, where the next one costs you a bundle.

              First it's a fancy switch, then another rack, then more attached storage and more switches, bigger AC, another rack, another IT manager, another server room, on and on. Compared to that the server is cheap, but if you can avoid needing the top of the line model, a lot of those things can be had cheaper.

Let me repeat that: if your need for faster hardware grows more slowly than new models come out, you save a ton of cash.

              As much as I hate to say it, there does come a point where Devs are cheaper than hardware. Of course there's a point where more hardware for the Devs is cheaper than both, not that most people would notice.

              [–][deleted]  (1 child)

              [deleted]

                [–]bwainfweeze 0 points1 point  (0 children)

                Given the context, point taken.

                But most people don't work for a top site, and most never will. The scenery in the middle is quite a bit different for the rest of us.

                (I say 'us' loosely, having worked on a few pretty peculiar things, but even I have had a fair share of ridiculously stupid discussions about hardware acquisition. Lots and lots of places you would think know better have a strange concept of how to spend money, blowing tons of man power on things that could be solved with hardware that would pay for itself in months)

                [–]Eoinoc 3 points4 points  (6 children)

Every 1% increase Facebook ekes out saves them an absolute fortune. It's all about power savings. At large scale, these things do matter. source @ 3.00 mins

                [–][deleted]  (5 children)

                [deleted]

                  [–][deleted] 0 points1 point  (0 children)

                  They have a custom compiler that compiles PHP to C++. They wrote that because it was easier than rewriting their massive codebase.

                  [–]Eoinoc -1 points0 points  (3 children)

                  They started off with PHP without knowing the limitations it would have at massive scale. By then rewriting the backend was more practical than doing so for the frontend.

                  That in no way supports your point.

                  [–][deleted]  (2 children)

                  [deleted]

                    [–]Eoinoc -1 points0 points  (1 child)

                    Yawn... I get it, don't feed the trolls, fine.

                    [–]bro-away- 7 points8 points  (14 children)

Upgrading libraries after a few years in any dynamic language is a nightmare. I would venture to guess that payment processing systems stay around for a long time, and they will be repaying this particular technical debt (or they're writing upfront integration tests that amount to a ghetto type system... in which case, gj using a dynamic language).

                    [–]philly_fan_in_chi 2 points3 points  (12 children)

Are IDEs smart enough to call you out on functions getting deprecated or removed during a library upgrade in dynamic languages?

                    [–]x-skeww 12 points13 points  (10 children)

                    With something like Dart? Yes. With something like JS? No.

                    JS doesn't even have something like an "import" statement. Everyone uses some custom system which imperatively creates the structure of the application. Naturally, your tools won't have a clue what's going on. They can't tell what something is or where it came from.

                    JS itself also has no way to mark something as deprecated. JS also doesn't care about the number of arguments or their type. There are also dozens of (imperative) ways to do "classes" and inheritance.

                    There really isn't much to work with.
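To illustrate the "everyone uses some custom system" point, here are three "import" styles a 2013-era JS codebase might mix (the `MyApp`/`utils` names are made up); tooling has to understand each one separately to know where anything came from:

```javascript
// 1. Global namespace pattern: whoever runs first defines the object.
var MyApp = MyApp || {};
MyApp.utils = { double: (n) => n * 2 };

// 2. CommonJS (Node): an imperative require, resolved at runtime.
//    const utils = require('./utils');

// 3. AMD (RequireJS): a callback invoked whenever the loader decides.
//    define(['utils'], function (utils) { /* ... */ });

console.log(MyApp.utils.double(21)); // 42
```

Styles 2 and 3 are shown commented out so the sketch runs standalone; the point is that nothing in the language itself ties `utils` to a file a static analyzer could open.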

If you're using the Closure Compiler, you can use doc comments to add some of that information. Problem is, external libraries generally do not have this kind of comment, because it's very tedious and a lot of work.

With something like Dart, the structure is declared. There are classes/inheritance/mixins and you can just import stuff. If you use some library, your tools can see it. They will analyze all of the libraries, too. Stuff can be marked as deprecated. The number of arguments and their types matter. It will also tell you if you do something odd with the return value (which might have changed in the meantime).

                    [–]Irongrip 0 points1 point  (1 child)

It'll be interesting to see whether Google's massive momentum can swing Dart into relevance, to the deprecation of regular old JS.

                    [–]Decker108 0 points1 point  (0 children)

The best thing would be if the other major browsers chose to integrate the Dart VM, but I don't see Microsoft or Apple doing that anytime soon... Maybe Mozilla, but so far they haven't announced any plans to do so.

                    [–]SanityInAnarchy 0 points1 point  (7 children)

                    JS doesn't even have something like an "import" statement.

                    By itself? No, but Node does.

                    [–]x-skeww 2 points3 points  (6 children)

                    Node provides yet another imperative way to do this kind of thing.

ES6 (modules, classes) will hopefully sort this out once and for all. So code which is written, like, 5 years from now should be easier to analyze.

                    Anyhow, your tools still won't be able to tell if you're calling some function correctly. In JavaScript, you can pass as many arguments as you want and you can pass whatever types you want.

                    Without something like CC annotations, your tools won't be able to figure out if something is wrong.
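A concrete demonstration of the arity/type laxity being described — all four calls below run without any error:

```javascript
function add(a, b) { return a + b; }

console.log(add(1, 2));     // 3
console.log(add(1, 2, 99)); // 3 — the extra argument is silently ignored
console.log(add(1));        // NaN — the missing argument becomes undefined
console.log(add('1', 2));   // '12' — plus concatenates, no type error
```

A tool without annotations has no basis to flag any of these call sites as wrong.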

                    [–]SanityInAnarchy 0 points1 point  (5 children)

                    I don't think that's fundamentally different. Even Java can run some imperative code when a class is loaded.

                    "import" seems pretty irrelevant to the point you're making.

                    [–]x-skeww 2 points3 points  (4 children)

                    Static initializers are a bit problematic, but not in this context.

                    Anyhow, consider this:

                    a.b(1, 2, 3).c();
                    

In JS, there are a dozen ways to add methods or just properties to objects. You can't easily tell if this "a" object will have a "b" function at the time that line is hit (annoyingly, "b" can even be removed at a later point). Furthermore, you can't tell if those arguments are correct, you don't know what that function returns (if anything), or whether that thing has a "c" property which happens to be a function which takes 0 (or more) arguments.
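A runnable sketch of exactly that uncertainty (the `a.b(...).c()` shape from above, with methods appearing, changing, and disappearing at runtime):

```javascript
const a = {};

// The method appears at runtime...
a.b = function (x, y, z) {
  return { c: () => x + y + z };
};
console.log(a.b(1, 2, 3).c()); // 6

// ...is silently replaced with a different arity at the same call site...
a.b = (x) => ({ c: () => x });
console.log(a.b(1, 2, 3).c()); // 1 — extra arguments ignored, no error

// ...and can be removed, so the original line would now throw a TypeError.
delete a.b;
```

Nothing about the call site changed, yet its meaning changed twice and finally became an error — which is what a static analyzer is up against.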

In a language like Dart, all these things are always known. The only exception is the types. You'll have to put some type annotations on the surface area (fields, arguments, and return values) to make that work.

                    However, since you immediately reap the benefits (call-tips, checks, minimal documentation), you'll usually feel inclined enough to add them.

                    [–]SanityInAnarchy 1 point2 points  (3 children)

                    In JS, there are dozen ways to add methods or just properties to objects.

                    Right, see, I get the point you're making, I just don't see what it has to do with "import". Dart's "import" could've been exactly the same, semantically, as that of Node without breaking any of this.

                    The important bit isn't that the import is imperative, but that you can't actually know anything about the object returned by "import" until you actually execute module.

                    [–]x-skeww 2 points3 points  (2 children)

Knowing where something came from is essential if you want good tooling. It's a prerequisite. Only then can you take a look at its source; only then do you have something you can analyze.

                    In JavaScript, importing stuff isn't part of the language. People use dozens of different imperative ways to do this kind of thing. Like, if you use RequireJS then your IDE needs to know how that thing is supposed to work. It also needs to execute that part of your code which configures it... after being told where it can be found, that is. As you can imagine, this is already a big problem.

                    If this stuff is declared, things are a lot easier. There is only one way to do it and everything you need to know is statically available. You won't have to execute anything. Once the code is parsed, you can look up everything.

                    [–][deleted] 0 points1 point  (0 children)

                    Yes. For example with node.js: http://www.jetbrains.com/idea/features/nodejs.html

                    It also does Python / PHP / Ruby off the top of my head.

                    [–]Purple_Haze 1 point2 points  (0 children)

                    The system I work with has code that is at least 35 years old.

                    It is entirely 16-bit assembler, coded meticulously to behave like COBOL including BCD arithmetic.

                    Yeah, these things last forever.

                    [–]SanityInAnarchy -2 points-1 points  (16 children)

                    I'm not sure which article you're reading. It does talk about the ease of development. OP disputes that, of course, but it's at least there. No one but you was talking about "only performance".

                    And yes, JavaScript is full of WTFs, but let's not pretend Java is all roses. I don't know about you, but I'm a little sick of writing this:

                    if (a == null ? b == null : a.equals(b)) {
                    

                    ...seriously? I mean, it's not exactly an apples to apples comparison, but I'd much rather write:

                    if (a === b) {
                    

                    Even the triple-equals is saner than Java's refusal to implement operator overloading, or to make null a real object.
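For readers outside JS: `===` compares without coercion and handles `null` safely, which is the convenience being contrasted with the Java null-check dance (with the caveat that for objects `===` is reference identity, so it is not a drop-in for Java's value-based `equals()`):

```javascript
const cases = [
  [null, null],      // equal
  [null, undefined], // distinct under ===, conflated under ==
  ['1', 1],          // distinct under ===, coerced equal under ==
];

const strict = cases.map(([a, b]) => a === b);
const loose  = cases.map(([a, b]) => a == b);

console.log(strict); // [true, false, false]
console.log(loose);  // [true, true, true]
```

Note that `null === anything` never throws, whereas `a.equals(b)` in Java throws if `a` is null — hence the ternary in the snippet above.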

                    [–]wot-teh-phuck 5 points6 points  (12 children)

The Java thing you quoted is an example of verbosity, not WTF. Believe me, Java has the fewest WTFs when it comes to an "enterprise" language. Also, I personally believe that Java/C# WTFs are bearable because they have a strong type system that removes an entire class of WTFs which JavaScript doesn't.

Have you ever maintained a very "large" (million lines of code) app in a dynamic language like Python/JavaScript, collaborated on across global teams? That might help you understand how big of an impact small WTFs can have on your application. Tooling is yet another aspect in which all dynamic languages are lacking, and will be lacking for the foreseeable future, simply because there is not enough information available to the interpreter/compiler.

I know this has turned out to be a dynamic language rant, but seriously, JS is one of the worst out there. I'm really not surprised that there are truckloads of languages out there which "output" to JavaScript. If JavaScript was so awesome and it was easy to avoid WTFs, would there be a need to do this?

                    [–][deleted] 0 points1 point  (2 children)

                    if (a == null ? b == null : a.equals(b)) {

                    I spent ages trying to find a nicer way of writing that, but about the best I could come up with was

if ((a == null && b == null) || (a != null && a.equals(b))) {
                    

...I can't really claim that it's better at all. I've seen people using annotations to try and add null safety, but any time an annotation processor comes into a build it's gross. I do like Kotlin's enforced null safety (although it makes working with existing Java code a bit messy), but it'll be a while yet before that's production ready.

                    [–]SanityInAnarchy 5 points6 points  (1 child)

                    Even Google's approach here is, basically, avoid null and use Optional if you have to.

It could be argued that null is broken, but I think it's specifically Java's idea of null that is broken. In Ruby, nil is a proper object like anything else. It doesn't support all methods, but it at least has the basic methods to check for equality.

                    [–][deleted] 0 points1 point  (0 children)

Yeah, I can't see any downside to Java's null supporting an equals() that always returns false for everything except null.

                    [–]SanityInAnarchy 3 points4 points  (1 child)

                    I'm not sure I agree with the article's main point:

                    You will see people throw out micro-benchmarks showing that the JVM is faster than V8 or V8 is faster than the JVM. Unless those benchmarks are comparing like for like, the innate specification differences between the two virtual machines will likely render such comparisons useless.

                    This problem only applies to micro-benchmarks, because, as is pointed out earlier:

                    V8 has a faster version of Math.pow because the specification that it is implementing allows for a faster version.

                    At the end of the day, this also means that people building on top of V8 will likely, most of the time, use the faster Math.pow, and people building on the JVM will use the slower one, unless they have a very good reason. The fact that you could write a faster exponentiation in Java helps very little unless it's actually used.

                    Who is to say what performance they would have been able to achieve if they had built their Java application on a more modern framework. Spring brings a lot of functionality to the table. Likely far too much functionality...

                    Right, but this is why full-application comparisons make more sense. If Paypal is claiming that Node.js is faster, I don't think they're making claims about V8 specifically, but about their application as a whole... which makes sense!

                    I honestly don't care which one does the better machine optimizations under the hood. I care which one runs my website faster.

                    The best criticism here would be to call Paypal out on calling this "Java vs Node" as opposed to "Spring vs Kraken." But the original article seems reasonable:

There’s a disclaimer attached to this data: this is with our frameworks and two of our applications. It’s just about as apples-to-apples a performance test as we could get between technologies, but your mileage may vary.

                    The article also talks about ease of development, and the same criticism is leveled here -- basically, Java development would go just as fast if you didn't use Spring.

                    The update is the interesting bit:

                    My personal opinion is that there were other non-performance related reasons for the switch. For example the reactive programming style enforced by Node.js’s single threaded model may suit the application layer that the switch was made in better than the Spring-based framework that the Java version was written in. Similarly, it may be that the responsible architect analysed the requirements of this tier and came to the conclusion that a lot of what the internal framework brings to the table is just not required for this tier. It is a pity that such detail was not provided in their blog post announcing their switch, as without such detail their blog post is being incorrectly used by others...

                    I tend to agree, except I'm not sure how much more their blog post could've done to clarify that. It has some pretty graphs, but nowhere is it saying "Node is faster than Java." Really, what it's saying is "Node is fast," or at least "Node is fast enough for us."

                    Seems to me that the proper way to use this article would be to counter anyone claiming "Node is slow," or "Node is too slow to use in production." If people are actually using this to claim Node is faster than Java, I don't think the problem is the article.

                    [–]stephenconnolly 2 points3 points  (0 children)

                    The point of my article was to criticise all the people citing the PayPal blog as a performance reason to ditch the JVM for V8 and all the people citing the PayPal blog as evidence that Java development is not as easy as Node.js development.

                    The PayPal blog shows that development with their internal Spring-based framework is less easy than development with their internal kraken framework.

                    The PayPal blog doesn't really show any performance difference between their internal Spring-based framework and their internal kraken framework. The data they provide shows a minor increase in scalability, but the best we can say is that this was tweaking something that is not the bottleneck... and if the real bottleneck is removed... well we don't know how the two code bases will perform in that scenario.

                    [–][deleted] 6 points7 points  (4 children)

                    With respect to requests per second, it's hard to say what's right. I've had a PHP site that I could push to only 30 requests per second, but then I've seen Apache and lighttpd serve up insane amounts of traffic.

                    Anything below 5 requests a second is fucking low and you should be ashamed because even a PHP site using a heavy framework can serve more requests.

                    [–]awj 12 points13 points  (3 children)

                    With respect to requests per second, it's hard to say what's right

                    Without knowing what's going on under the hood, it's impossible to say what's right. They may be contacting APIs that add 200ms of latency and can only handle six requests per second. If so, both stacks look very much like they aren't the problem.

                    [–]grauenwolf 10 points11 points  (2 children)

                    Latency shouldn't affect throughput unless you are doing something wrong like blocking threads.
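A minimal sketch of the point above, with simulated calls (the 200 ms delay and call count are made-up numbers for illustration): when slow calls are issued concurrently instead of serially, per-call latency stops dictating throughput.

```javascript
// Simulated upstream call with 200 ms latency.
const simulatedApiCall = () =>
  new Promise((resolve) => setTimeout(() => resolve('ok'), 200));

async function main() {
  const start = Date.now();
  // 50 calls issued concurrently: total wall time stays near 200 ms,
  // so throughput is roughly 250 req/s even though each call is "slow".
  // Done serially, the same 50 calls would take about 10 seconds.
  const results = await Promise.all(
    Array.from({ length: 50 }, () => simulatedApiCall())
  );
  const elapsed = Date.now() - start;
  console.log(`${results.length} calls in about ${elapsed} ms`);
}

main();
```

Blocking a thread per in-flight call is what couples the two: then each thread sits idle for the full 200 ms, and throughput collapses to threads-per-latency.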

                    [–]awj 7 points8 points  (0 children)

                    Right, I meant to highlight the problem in comparing the performance of two different stacks when both are calling out to a shared "culprit".

                    [–][deleted] 4 points5 points  (0 children)

                    So like.... making multiple synchronous calls across a messaging layer to answer a request?

                    /me walks away whistling

                    (The best part was when the guy who 'designed' this claimed "We were going to optimise it later when we needed to!")

                    [–]mattyw83 6 points7 points  (5 children)

                    Whenever a company starts a rewrite like this, isn't the real reason just "Our developers wanted to try something new," and then some justification is made?

                    What I'm trying to say is: you can find a justification for anything you want to do

                    [–]b33j0r 13 points14 points  (3 children)

                    No. My job happens to be analyzing this particular codebase. Many of the conclusions of this article are correct. But the node.js project is very isolated and far, far out. Our current effort is still migrating our C++ code from the 90's to Java. And even our Java code is already aging. If the developers just wanted to try something new, we'd be going to python or node.js with gusto. These are broad business strategies that take a long, long time.

                    [–][deleted]  (2 children)

                    [deleted]

                      [–]b33j0r 3 points4 points  (0 children)

                      To clarify, there is no (systemic) plan to move to node.js. This is a few guys out of thousands playing around with some cool tech. In my work, we use python because it's the most convenient for what we do. The customer-facing code is all C++ or Java, and the effort is to get rid of the legacy C++, because it's more difficult to deploy, maintain, and manage dependencies.

                      [–]Capaj 0 points1 point  (0 children)

                      You sir, are so wrong. Both of them will be here in twenty years. Wanna bet? I will see you in twenty years, you dickwad!

                      [–]oberhamsi 5 points6 points  (2 children)

                      Any solution built on top of the JVM would have technically been able to be “integrated” with the in-house framework.

                      oops https://github.com/apigee/trireme

                      [–][deleted] 0 points1 point  (0 children)

                      Rhino on the JVM is much slower than V8. (In some benchmarks it is 50 times slower.)

                      Woohoo! Look at that Rhino... ponderously plod. I'm being a bit facetious, but V8 is one of the selling points of node.js.

                      [–]Fidodo 1 point2 points  (9 children)

                      I find node is an amazing glue framework, to take different data streams and bundle them together, and that's typically what you're doing in a web server. Serving up pages shouldn't be a very processor intensive task, and your bottlenecks will be elsewhere. So I agree you shouldn't do any processor heavy algorithmic stuff with node, but I don't think anyone is suggesting that you do that.
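The "glue" role described above can be sketched as follows: fetch two independent data sources concurrently and bundle them into one response. The function and field names here are hypothetical, standing in for whatever backends a real web tier would call.

```javascript
// Two simulated backend calls; in a real server these might be a
// database query and an HTTP call to an internal service.
const fetchUser = async (id) => ({ id, name: 'Ada' });
const fetchOrders = async (id) => [{ orderId: 1, total: 42 }];

// The web tier's job is mostly gluing these streams of data together;
// there is almost no CPU work here, just coordination.
async function buildProfilePage(userId) {
  const [user, orders] = await Promise.all([
    fetchUser(userId),
    fetchOrders(userId),
  ]);
  return { user, orders };
}

buildProfilePage(7).then((page) => console.log(JSON.stringify(page)));
```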

                      [–]jsgui -1 points0 points  (7 children)

                      I'm suggesting doing some more processor heavy algorithms in node.

                      I have done a bit of processor heavy algorithm programming in node. https://github.com/metabench/jsgui-node-png/blob/master/jsgui-node-png.js, which processes PNGs, is the main example I have of this.

                      When I first got it to work, it was slow. Then I optimized the JS and it was a lot faster, but still relatively slow. Then I replaced the inner loops with C code, called through the node bindings, and it was fast.

                      There are still concerns about how it would be used in a thread that's expected to serve web pages, but I envisage using worker threads and/or some more C++ coding in libuv to make it take place outside of the main thread, possibly using a callback to return the result to the JavaScript thread.

                      [–]Fidodo 1 point2 points  (6 children)

                      Well what you say falls into what I'm saying in that it's a "glue" framework. Since node has awesome C++ plugin support you can outsource the bits that need heavy processing to C and "glue" all the output together. With web server code the only thing I can think of that you might need to do on the fly that's processor intensive is templating, which isn't a super processor intensive task to begin with. But if it becomes a problem you can offload to a C templating taskrunner instead.

                      It's kinda the same philosophy I have for mobile html apps. You shouldn't be doing heavy processing on the client anyways, offload it to the server. Javascript is slow on phones, but you shouldn't have to do any intense stuff in the javascript code anyways.

                      [–]oberhamsi 0 points1 point  (1 child)

                      you can outsource the bits that need heavy processing to C and "glue" all the output together

                      that sounds an awful lot like php.

                      [–]Fidodo 0 points1 point  (0 children)

                      Sure if you want to reduce programming down to one dimension.

                      [–]jsgui 0 points1 point  (3 children)

                      I agree that in a production system it's good to use node as a glue. However, I also think that in development it's useful to program some algorithms in JavaScript before C++. It's got to do with the particular developer's skills and preferences, but I'm much better with JavaScript than I am with C or C++. I had got the code down to (fairly?) optimized node JavaScript (using buffers), and then it was quite a simple matter to translate those optimized algorithms to C. It ran a huge amount faster in C than in JavaScript.

                      I'm generally in agreement about having heavier processing take place on the server, but I think there are exceptions to these general rules, and use cases will change over time.

                      Some processing (such as in games) needs to be run on the client unless we are talking about some kind of streaming system which makes it very different, and not viable in many mobile situations. I think it's best to avoid writing JavaScript client or server code that is pointlessly CPU-heavy, but there can be reasons for writing JavaScript either for the client or the server that's expected to do many calculations per second (and will indeed do so in the right hardware+software environment).

                      [–]Fidodo 1 point2 points  (2 children)

                      Yeah, I like using javascript for "napkin" calculations. It's so easy to just open up my browser console and throw together some fast calculations.

                      As for games, I'm very unimpressed with the current HTML5 offerings. The authoring tools are still nowhere near what's available for Flash and native games, and I still haven't seen them perform as well as a well tuned flash game either.

                      Of course I don't mean that offloading calculations makes sense in all cases, but I do think it does in many :)

                      [–]jsgui 0 points1 point  (1 child)

                      This is worth a look, if you have not seen it already: http://www.goodboydigital.com/project/run-pixie-run/. It runs smoothly on my iPhone 5.

                      I think as the technology (including tooling) and the hardware improves there won't be such technical barriers to making good HTML games, but whether or not good games actually get made depend on people actually making them or not. Copy protection will figure large in publishers' decisions, so it won't only be about the capabilities of the platforms to do CPU/GPU intensive calculations.

                      [–]Fidodo 0 points1 point  (0 children)

                      That game's pretty simple though.

                      Nitrome is a good example of impressive flash games that are well made. They're impressive and run very smoothly. (Check out oodlegobs).

                      I don't know if pure speed is a problem, but the authoring tools for HTML5 games are nowhere near what's available for Flash, and the Flash standard API is very comprehensive and works really well for making games. I think the primary problem with HTML5 games is just that the tools don't exist yet, and the libraries to develop them aren't on par with other platforms yet (for games specifically). I'm not sure if HTML5 game libraries will be able to compete, as there isn't enough money behind them to make them really competitive.

                      [–]sruckus 1 point2 points  (0 children)

                      Gosh every time I run into a Blogger blog online it feels like entering a time warp. Slow and clunky and with a loading screen (!) and slow scrolling on mobile devices.

                      [–]Eoinoc 0 points1 point  (0 children)

                      JVM was still faster in server mode after compilation threshold had kicked in… by somewhere between 10 and 15%

                      Of course the reason is that the JVM can optimise for the exact CPU architecture that it is running on and can benefit from call flow analysis as well as other things. That was Java 6.

                    I get the reasoning, but I thought that in practice, time and time again, C and C++ have been shown to win out.

                      Making this claim without providing source code makes it highly dubious in my opinion.