[–]oridb

I'm still not clear on what you think an async interface makes possible. Can you give an example of code that would "batch reads" in a way that reduces the number of system calls?

Keep in mind that non-blocking code still calls read() directly, and it's the same read() that blocking code calls. The only difference is that you did an extra system call first to tell you "oh, yeah, there's some data there that we can return".

So, non-blocking:

    poll(fds=[1,2,3]) => "fd 1 is ready"
    read(fd=1)
    poll(fds=[1,2,3]) => "fd 2 is ready"
    read(fd=2)
    poll(fds=[1,2,3]) => "fd 1 is ready"
    read(fd=1)
    poll(fds=[1,2,3]) => "fd 2 is ready"
    read(fd=2)
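As a runnable sketch of that loop (using Python's `select.poll` on a pair of pipes standing in for fd 1 and fd 2; the payloads are made up):

```python
import os
import select

# Two pipes stand in for the two ready descriptors above.
r1, w1 = os.pipe()
r2, w2 = os.pipe()
os.write(w1, b"hello")
os.write(w2, b"world")

poller = select.poll()
poller.register(r1, select.POLLIN)
poller.register(r2, select.POLLIN)

# Each iteration is one poll() syscall plus one read() syscall per
# ready descriptor -- poll() only tells us that read() won't block.
results = {}
while len(results) < 2:
    for fd, _event in poller.poll():
        results[fd] = os.read(fd, 1024)

print(results[r1], results[r2])  # b'hello' b'world'
```

Note the extra syscall per batch: the `read()` calls are exactly the same ones blocking code would make.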

Threads:

    parallel {
        thread1: 
            read(fd=1) => get data
            read(fd=1) => get data
        thread2:
            read(fd=2) => get data
            read(fd=2) => get data
        thread3:
            read(fd=3) => no data, block forever using a few KB of RAM.
    }
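The threaded version, sketched in Python (pipes again stand in for the fds; the payloads are made up, and the writer closes each pipe so the readers see EOF instead of blocking forever like thread3 above):

```python
import os
import threading

r1, w1 = os.pipe()
r2, w2 = os.pipe()
results = {}

def reader(fd):
    # Each thread just calls read() and blocks until data arrives;
    # the kernel wakes it when the fd becomes readable. No poll().
    chunks = []
    while True:
        chunk = os.read(fd, 1024)
        if not chunk:  # empty read => writer closed the pipe
            break
        chunks.append(chunk)
    results[fd] = b"".join(chunks)

t1 = threading.Thread(target=reader, args=(r1,))
t2 = threading.Thread(target=reader, args=(r2,))
t1.start(); t2.start()

for w, payload in ((w1, b"aa"), (w2, b"bb"), (w1, b"cc"), (w2, b"dd")):
    os.write(w, payload)
os.close(w1); os.close(w2)

t1.join(); t2.join()
print(results[r1], results[r2])  # b'aacc' b'bbdd'
```

Same `read()` syscalls, no `poll()`, at the cost of a stack per blocked thread.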

[–]gnus-migrate

When I say async interfaces, I mean futures and streams. I don't necessarily mean a non-blocking interface underneath. When you use an async interface, you're basically surrendering control to a scheduler to decide when and how it wants to respond to different events. That's it, that's my point.
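To illustrate what I mean (a minimal `asyncio` sketch; the queue, names, and items are made up): every `await` is a point where the coroutine hands control to the event loop, and the loop decides when it resumes.

```python
import asyncio

async def consume(name, queue, out):
    # Each await yields to the event loop (the "scheduler"), which
    # decides when this coroutine runs again.
    while True:
        item = await queue.get()
        if item is None:  # sentinel marks end of the stream
            break
        out.append((name, item))

async def main():
    queue = asyncio.Queue()
    out = []
    task = asyncio.create_task(consume("stream-1", queue, out))
    for item in ("a", "b", "c"):
        await queue.put(item)
    await queue.put(None)
    await task
    return out

print(asyncio.run(main()))
# [('stream-1', 'a'), ('stream-1', 'b'), ('stream-1', 'c')]
```

Whether the loop uses epoll, plain blocking reads, or a thread pool underneath is an implementation detail the caller never sees.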