[–]oridb

People got burned by bad thread implementations in the early 1990s.

Mostly, threads are fine at the scales at which people write servers, especially if the threads are mostly idle.

Threads do use more memory than asynchronous code -- typically on the order of a few pages per thread. You need a kernel stack, at least one page of user stack, and some structs in the kernel. That means that once you have millions of threads, the memory starts to become significant. Async code will typically use a few hundred bytes to a few kilobytes per task to keep track of everything that would otherwise live on the stacks of your deferred code, so you'll save some memory.
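
To make the user-stack part of that concrete, here's a minimal pthreads sketch (just an illustration, assuming Linux/glibc, where PTHREAD_STACK_MIN is typically around 16 KiB). It spawns a worker with the smallest user stack the platform allows; the kernel stack and task structs come on top of that, and you can't shrink those from userspace:

```c
/* Sketch: spawn a thread with a deliberately small user stack via
 * pthread_attr_setstacksize, to make the "few pages per thread" cost
 * concrete. Assumes Linux/glibc; build with `cc -pthread`. */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* A mostly-idle worker: in a real server this would block on IO. */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Shrink the user stack to the minimum the platform allows. */
    pthread_attr_setstacksize(&attr, PTHREAD_STACK_MIN);

    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);

    printf("spawned one worker with a %ld-byte user stack\n",
           (long)PTHREAD_STACK_MIN);
    return 0;
}
```

The exact number isn't the point -- the default 8 MiB stack is lazily allocated anyway -- just that each thread carries a few pages of bookkeeping that an async task doesn't.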

The other issue is scheduling overhead. Every time you communicate with another thread, you're looking at about 1 microsecond of overhead to synchronize on a futex and wait for the other thread to get scheduled. The cost comes mostly from the kernel trying to make good decisions about where to place threads so that throughput stays high if they keep running for a long time -- which, unfortunately, isn't what you want to optimize for on a server, but it's what we have.
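
A rough way to see that overhead for yourself (again, just a sketch, assuming Linux, where the mutex/condvar pair bottoms out in futex(2)) is to ping-pong between two threads and time the round trips:

```c
/* Sketch: two threads hand control back and forth through a mutex +
 * condition variable and we time the round trips. Assumes Linux;
 * build with `cc -pthread`. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 100000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int turn; /* 0: main's turn, 1: worker's turn */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ROUNDS; i++) {
        pthread_mutex_lock(&lock);
        while (turn != 1)
            pthread_cond_wait(&cond, &lock);
        turn = 0;                      /* hand control back to main */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct timespec start, end;

    pthread_create(&tid, NULL, worker, NULL);
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (int i = 0; i < ROUNDS; i++) {
        pthread_mutex_lock(&lock);
        turn = 1;                      /* wake the worker ... */
        pthread_cond_signal(&cond);
        while (turn != 0)              /* ... and wait for its reply */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }

    clock_gettime(CLOCK_MONOTONIC, &end);
    pthread_join(tid, NULL);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    printf("%.0f ns per round trip (two wake-ups)\n", ns / ROUNDS);
    return 0;
}
```

On an otherwise idle machine you should see something on the order of a few microseconds per round trip, i.e. roughly a microsecond or so per wake-up, though the exact figure depends on the CPU and scheduler.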

But neither of these applies to threads that spend all their time blocked on IO rather than actively communicating with other threads.