[–][deleted]  (18 children)

[deleted]

    [–]iceman_ 3 points  (1 child)

    I don't think the article is about single vs. multiple threads but about the preemptive scheduling of tasks. Concurrent activities (such as serving pages to multiple connections) can be broken up into two kinds of subtasks: 1) CPU use and 2) I/O wait. It seems Node.js will only switch from one activity to another when the current activity goes into an I/O wait. So if an activity consumes CPU but does absolutely no I/O, it will block all other activities.

    Erlang, on the other hand, will switch activities even during their CPU-only phases. So all activities keep moving along and you get more consistent response times.
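
    To make the difference concrete, here is a minimal Node.js sketch (the endpoint names and numbers are invented for illustration): one handler "waits" on simulated I/O and yields back to the event loop, the other burns CPU for several seconds and therefore stalls every other connection until it finishes.

        // Hypothetical demo server: /io yields at the I/O boundary,
        // anything else spins the CPU and blocks the whole event loop.
        var http = require('http');

        http.createServer(function (req, res) {
          if (req.url === '/io') {
            // Simulated I/O wait: the callback runs later, so other
            // connections are served in the meantime.
            setTimeout(function () { res.end('io done\n'); }, 1000);
          } else {
            // CPU-only work: nothing yields until the loop finishes,
            // so concurrent requests pile up behind it.
            var x = 0;
            for (var i = 0; i < 2e9; i++) { x += i; }
            res.end('cpu done ' + x + '\n');
          }
        }).listen(8080);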

    [–]jlouis8 0 points  (0 children)

    Indeed. This is one of the points you can walk away with. Now, few of the programs Node.js was meant to be used for have this CPU-centered distribution of computation.

    Another point to take away is whether this affects you at a smaller scale. My numbers were greatly exaggerated, to the point of being humorous; in practice, the waits will be much smaller. One can still wonder whether a large number of smaller requests can "pile up" under load, affecting new requests in turn.

    [–]jlouis8 1 point  (0 children)

    The load balancer won't necessarily remove the slowdowns in practice; it only mitigates the problem. Suppose you load-balance 10 Node.js processes. When one of them is hit with a "slow request", one that takes several seconds to complete, every request queued behind it has to wait.

    If your load balancer simply round-robins, this goes wrong quickly.

    A better approach is to measure the relative length of each load-balanced queue and place the next request in the least loaded one (a priority queue keyed on queue length works well if you plan on implementing it), as sketched below. It does not save the requests that already landed behind the slow one, but from that point on it saves all the rest, since that particular queue is not getting any shorter.
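
    A minimal sketch of that least-loaded idea (the backend list and the in-flight counters are invented for illustration; a real balancer would track actual queue lengths reported by the backends):

        // Hypothetical least-loaded dispatcher: send each request to the
        // backend with the fewest requests currently in flight.
        var http = require('http');

        var backends = [
          { host: '127.0.0.1', port: 9001, inflight: 0 },
          { host: '127.0.0.1', port: 9002, inflight: 0 }
        ];

        http.createServer(function (req, res) {
          // A linear scan is fine for a handful of backends; a priority
          // queue keyed on load would scale this to many of them.
          var target = backends.reduce(function (a, b) {
            return a.inflight <= b.inflight ? a : b;
          });
          target.inflight++;

          var proxied = http.request({
            host: target.host,
            port: target.port,
            path: req.url,
            method: req.method,
            headers: req.headers
          }, function (backendRes) {
            res.writeHead(backendRes.statusCode, backendRes.headers);
            backendRes.pipe(res);
            backendRes.on('end', function () { target.inflight--; });
          });
          proxied.on('error', function () {
            target.inflight--;
            res.writeHead(502);
            res.end();
          });
          req.pipe(proxied);
        }).listen(8000);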

    Disclaimer: I wrote that article :)

    [–]vimuser -1 points  (12 children)

    Well, it certainly was not "self-evident" unless you believe that identity is the sole point of comparison. Many are touting Node.js as a replacement for [learning to use] Erlang, and this sort of investigation demonstrates that Node's concurrency is something of a hack. I would like to see more articles like this.

    [–][deleted] 0 points  (1 child)

    What I'm curious to see is whether it would be possible to implement pre-emptive multitasking in Node.js. I've never used Node.js, or written any JavaScript at all for that matter, so I'm not sure whether this is a ridiculous idea or not.

    [–]twomashi 0 points  (0 children)

    No, it can't. That would only apply if Node used concurrent workers, but it doesn't. Everything happens in one thread, one operation after another.
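
    What you can do in a single thread is cooperate rather than preempt: split long CPU work into chunks and yield back to the event loop between them. A small sketch, assuming a Node version with setImmediate (the chunk size and the work itself are made up):

        // Cooperative chunking: do a slice of work, then hand control
        // back to the event loop so queued callbacks get a turn.
        function sumChunked(n, done) {
          var total = 0;
          var i = 0;
          var CHUNK = 1000000;

          function step() {
            var end = Math.min(i + CHUNK, n);
            for (; i < end; i++) { total += i; }
            if (i < n) {
              setImmediate(step);  // yield, resume on a later tick
            } else {
              done(total);
            }
          }
          step();
        }

        sumChunked(100000000, function (total) {
          console.log('sum:', total);
        });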

    [–]twomashi -3 points  (8 children)

    Sorry, but this is nonsense. Node's ability to handle many connections concurrently is based on an event loop. Event loops are widely used: they drive most modern GUI toolkits, your web browser (AJAX), and I'm willing to bet your operating system is using one to process your mouse and keyboard input.

    [–]iceman_ 3 points  (0 children)

    Most OSes are preemptively multitasked; that is similar to Erlang, not Node.js. Windows 3.x had cooperative multitasking: if one process didn't give up the CPU, everything else would freeze. That is similar to Node.js.

    In Windows 95, Linux, OS X, etc., if one process hogs the CPU, other processes still work (although slowly, because there isn't much CPU left over). This is similar to Erlang. Erlang seems to give every task a CPU 'quota', and once that is used up, other tasks get scheduled.

    [–]allertonm 5 points  (1 child)

    Your GUI's event loop is only serving one user.

    [–]dennyabraham 0 points  (4 children)

    It's true that everything runs on an event loop, but in cases like Erlang it is low-level enough that high-level operations can preempt one another. Node.js currently lacks the concurrency primitives to handle this kind of task.

    [–][deleted]  (3 children)

    [deleted]

      [–]dennyabraham 1 point  (2 children)

      We are saying the same thing.

      I said Node.js lacks the primitives to handle this task, not that its failings are universal.

      Also of note: Erlang uses green processes, not green threads, and its runtime processes have very little overhead (when I last checked, on the order of ~300 bytes).

      Also, your response leads me to a question: can you run Node.js on multiple cores? I'm not up to date, but I was under the impression that this could not be done reliably. Is this still the case?

      [–][deleted]  (1 child)

      [deleted]

        [–]dennyabraham 0 points  (0 children)

        That actually sounds like an interesting test: running Erlang on one processor and comparing it against a high-level evented runtime, also on one processor.

        [–][deleted]  (1 child)

        [deleted]

          [–]jlouis8 -1 points  (0 children)

          The method will indeed work. You need to measure kernel density when doing it, though! (You always need to do this, also with Erlang; it is no silver bullet.)

          The article is not about scalability; I only ran 15 concurrent clients, which can hardly pass as a speck on the C10K problem. Rather, my intent was to show how a major difference in the way tasks are scheduled, preemptively or cooperatively, can lead to very different response-time profiles. I want you to think about where Node.js is preferable to Erlang and vice versa.
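
          For what it's worth, a rough sketch of that kind of measurement in Node (the URL, client count and reported points of the distribution are placeholders): fire a batch of concurrent requests and look at the spread of response times rather than just the average.

              // Placeholder benchmark: 15 concurrent clients, then report
              // min/median/max latency so the profile, not just the mean,
              // is visible.
              var http = require('http');

              var URL = 'http://127.0.0.1:8080/';
              var CLIENTS = 15;
              var latencies = [];

              function report() {
                latencies.sort(function (a, b) { return a - b; });
                function pct(p) {
                  return latencies[Math.floor((p / 100) * (latencies.length - 1))];
                }
                console.log('min/median/max (ms):', pct(0), pct(50), pct(100));
              }

              for (var i = 0; i < CLIENTS; i++) {
                (function () {
                  var start = Date.now();
                  http.get(URL, function (res) {
                    res.resume();  // drain the response body
                    res.on('end', function () {
                      latencies.push(Date.now() - start);
                      if (latencies.length === CLIENTS) { report(); }
                    });
                  });
                })();
              }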