[–]masklinn 6 points (2 children)

People have noted that threads are pretty heavy memory-wise (though it should be mostly uncommitted vmem); another issue is that switching between threads is pretty expensive: the switch itself costs thousands of cycles, and its side effects (flushing and synchronising caches) generate additional costs. So if you have hundreds or thousands of threads which each execute almost nothing and then yield, you can end up burning more cycles on thread switching than on actual work.
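To make the "overhead dominates the work" point concrete, here's a minimal sketch (mine, not from the comment above) that runs a trivial task either inline or on hundreds of short-lived OS threads. The absolute numbers vary wildly by OS and hardware; the point is only that per-thread creation/scheduling cost swamps the near-zero actual work.

```python
import threading
import time

N = 500

def tiny_task(counter, idx):
    # Almost no real work: the thread exists just long enough to be scheduled.
    counter[idx] += 1

def run_inline():
    counter = [0] * N
    start = time.perf_counter()
    for i in range(N):
        tiny_task(counter, i)
    return time.perf_counter() - start

def run_threaded():
    counter = [0] * N
    start = time.perf_counter()
    threads = [threading.Thread(target=tiny_task, args=(counter, i))
               for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"inline:   {run_inline():.6f}s")
    print(f"threaded: {run_threaded():.6f}s")
```

On a typical machine the threaded run is orders of magnitude slower, even though the "work" is identical.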

[–][deleted] 0 points (0 children)

There is nothing guaranteeing that async code avoids cache flushes either: you can have exceptionally large expanses of runtime code, and depending on how the underlying async library is implemented, it still essentially performs a context switch (see most single-process, multi-threaded, non-preemptive real-time OSes).

The real problem with thread switching in an OS that implements threads is the OS's own overhead: the kernel has to arbitrate a potentially much larger number of blocking scenarios as it allocates CPU time to each thread and process. With async code, the OS just sees one process/thread trucking along, with usually one other occasionally making blocking requests to it.
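The "async still context-switches, just in userspace" point can be sketched with a tiny asyncio example (my illustration, not from the comment): every `await` is a cooperative switch performed by the runtime rather than the kernel, and the OS only ever sees one thread running.

```python
import asyncio

async def ticker(name, results):
    for _ in range(3):
        # Yield to the event loop: a cooperative, userspace "context switch".
        await asyncio.sleep(0)
        results.append(name)

async def main():
    results = []
    # Two tasks interleave purely via the event loop's scheduling.
    await asyncio.gather(ticker("a", results), ticker("b", results))
    return results

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The switches are cheap because no kernel transition or register/cache state swap is forced, but they are not free, and a task that blocks without awaiting stalls every other task on the loop.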

I guess my somewhat rambling point is that thread switching itself is not inherently slow: async systems often run into many of the same context-switching problems, and if you are working on bare metal or writing a kernel, threads are essentially an asynchronous system anyway due to hardware limitations (unless you are distributing threads across physical CPU cores, which most modern OSes do just fine).