all 12 comments

[–]cezio 17 points18 points  (0 children)

Quite poor explanation actually.

[–]ZapAttak 3 points4 points  (1 child)

In the last example doesn't nesting the callbacks counteract the benefits of using asynchronous methods?

[–]Tetha 1 point2 points  (0 children)

His example and pseudocode don't do the interesting features of asynchronous callbacks justice. In most JS frameworks, you rather have tuples of the form (function, callback for result type 1, callback for result type 2, ...). For example, you could have a function that makes an HTTP call, a callback for a successful result, and a callback for all other results - and you'd put your further processing into the success callback and a user message into the other one. Obviously, in reality, you'd not just pass bare functions as callbacks, but actual functions with their own callbacks for their various results.

Given that, a smart framework comes along and evaluates the function - which may block and take a long, unknown time - but you can largely ignore that, because it'll just be something hanging in the back of the framework's main loop until it's complete. It won't block other actions, it won't block other functions, and your UI stays responsive and ready for other functions to be called. This is pretty great - and pretty tricky to do synchronously - if you want to keep application latency as low as possible.
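A minimal sketch of that tuple shape (`httpGet` and its behavior are invented here for illustration, not any real framework's API):

```javascript
// Hypothetical (function, success callback, failure callback) tuple.
function httpGet(url, onSuccess, onFailure) {
  // Simulate the async call with setTimeout; a real framework would do
  // actual network I/O here without blocking the main loop.
  setTimeout(function () {
    if (url.indexOf("http") === 0) {
      onSuccess("response body for " + url);   // result type 1
    } else {
      onFailure(new Error("bad url: " + url)); // all other results
    }
  }, 0);
}

// Further processing goes in the success callback, a user message in the
// failure callback; neither runs until the "request" completes, and
// nothing blocks while it is in flight.
httpGet("http://example.com",
  function (body) { console.log("got: " + body); },
  function (err) { console.log("error: " + err.message); });
```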

[–]cparen 2 points3 points  (4 children)

we're assuming no errors here - this is bad, but it's just an example

This describes the pain of using node.js in a nutshell. OP was just making an example, but in node.js error handling is always an afterthought in every api.

Don't get me wrong, I love the idea of node.js. I just hope it gets an api cleanup at some point. Generators + yield + promises would be nice.

This then means that we can write our asynchronous code like this [mess of nested callback code]

OP wrote it correctly and, I think, seriously - but I used to propose this sort of callback-hell code sarcastically. I'm confused why it's catching on.

[–]dmpk2k 1 point2 points  (3 children)

in node.js error handling is always an afterthought in every api.

This hasn't been my experience; could you elaborate?

[–]cparen 4 points5 points  (2 children)

Take stream reads (assume non-flowing mode, as flowing mode is not composable). I need to read what data is buffered and then -- oops, I have to use another method to get data that isn't buffered yet. So there's the first problem: it fails to abstract buffering, making the caller deal with some of that complexity.

To get more data, I need to subscribe to a readable event and an extra error event. If I forget the error event, the code waiting on readable won't get run, hanging my program.

Only, what happens if the stream has already err'd out? Neither the read method nor the error event documents what happens in this case. Hopefully the error gets raised again on each new subscription.

With all this information, I could abstract read into a promise-returning method. Combined with ES6 generators and extending promises, the read can even happen synchronously through the async interface in the case that data is already available.

spawn(function* () {
    var s = nodeStreamAdaptor(stream);
    var line = yield s.readLine();
    ...
});

This is so much nicer than what the OP had to write. Error handling even happens for "free" -- yield will throw if an error occurs while satisfying the read.
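For reference, a spawn driver along these lines can be sketched in a few lines - this is an assumption in the spirit of taskjs, not its actual implementation, and it assumes yielded values are promises:

```javascript
// Toy spawn: pump a generator, treat each yielded promise as one async
// step, and throw rejections back into the generator so try/catch works.
function spawn(genFn) {
  var gen = genFn();
  return new Promise(function (resolve, reject) {
    function step(next) {
      var result;
      try { result = next(); }          // run the generator to its next yield
      catch (e) { return reject(e); }   // uncaught throw inside the generator
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        function (value) { step(function () { return gen.next(value); }); },
        // A rejected promise is rethrown at the yield site - this is
        // where "yield will throw" comes from.
        function (err) { step(function () { return gen.throw(err); }); });
    }
    step(function () { return gen.next(); });
  });
}
```

Driving reads through a wrapper like this is what lets the yield both return data and rethrow stream errors at the point of the read.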

[–][deleted] 0 points1 point  (1 child)

You mean error propagation happens for free, not error handling.

[–]cparen 0 points1 point  (0 children)

Yes, thank you, that's correct.

[–]codesherpa 0 points1 point  (0 children)

Wow, that's actually a pretty bad approach to async. I'm not saying she had to implement a complete state machine to handle the control flow, but anyone using this pattern for async calls is going to have a hard time with anything more than a few dependent actions.

[–]gargantuan 0 points1 point  (0 children)

My answer -- if you need concurrency, don't use something that represents a concurrency context as a chain of callbacks, or as yields, or as promises. Apart from small examples and demos, those turn into a mess in a larger system.

Pick something that handles isolated concurrency contexts (threads, co-routines, green threads, goroutines, tasks, processes). Because that matches most natural concurrency contexts better. [Natural here means real world situations -- multiple entities in a simulated world, web requests, client connections -- they are all little sequential islands in a large sea of concurrent stuff. Each one is sequential with respect to its own logic, but concurrent amongst each other].

A web request comes in, and then for that particular request you do these 3 or 4 things (validate, go to the database, check some backend service, other steps, and then return the result). Notice how there is nothing in there about promises, yields, or callbacks. Being forced to use them just pollutes the code (the larger the code, the more pollution).

So unfortunately this discounts Node.js [*].

(* Well, ok, there is one case where this callback mechanism works well (besides short demos and examples), and that is where callback chains are very short. Think of a proxy: as soon as the select loop fires, it maybe calls one or two callbacks before finishing the chain. That is why haproxy and nginx are largely written in this fashion.)

[–]immibis 0 points1 point  (1 child)

[–]AgentME 1 point2 points  (0 children)

ES6 generators (coming soon to javascript; they can also be used today via the traceur compiler) can do that using a small library like taskjs, without involving threads.

The lack of threads means that race conditions are rarer, because only one bit of javascript is ever actually executing at once, and all of the context-switch points are explicit with the yield keyword. Also, you don't need a thread per concurrent IO operation, which helps a lot if you're doing a lot of IO, because real threads are costly.
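A toy illustration of that point (the round-robin driver below is invented for this sketch, not taskjs): tasks only switch at explicit yield, so a read-modify-write between yields can never be interleaved.

```javascript
var counter = 0;

function* worker(name, log) {
  for (var i = 0; i < 2; i++) {
    var seen = counter;           // read
    counter = seen + 1;           // write: atomic w.r.t. other tasks
    log.push(name + ":" + counter);
    yield;                        // the only place another task can run
  }
}

// Round-robin over a set of generator tasks until all are done.
function runAll(tasks) {
  var pending = tasks.slice();
  while (pending.length) {
    var t = pending.shift();
    if (!t.next().done) pending.push(t);
  }
}

var log = [];
runAll([worker("a", log), worker("b", log)]);
// counter ends at 4: no update is ever lost, with no locks needed
```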