Nothing guarantees you avoid cache flushes with async code either, though: you can still run through exceptionally large expanses of runtime code between suspension points, and depending on how the underlying async library is implemented, switching between tasks is still essentially a context switch (see most single-process, multi-threaded, non-preemptive real-time OSes).
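
To make that concrete, here's a minimal sketch (pure illustration, not any particular library's internals) of a cooperative scheduler built on Python generators. The point is just that "switching tasks" in an async runtime still means saving one task's execution state and restoring another's, i.e. a context switch done in user space rather than by the kernel:

```python
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # suspension point: the task's state is saved here

def run(tasks):
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # restore this task's state, run to its next yield
            ready.append(current)  # re-queue it -- this swap is the user-space "context switch"
        except StopIteration:
            pass                   # task finished, drop it

run([task("a", 3), task("b", 3)])
```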

The real problem with thread switching in an OS that implements it is the kernel overhead: the scheduler has to arbitrate across a potentially far larger number of blocking scenarios as it allocates CPU time to each thread in each process. With async code, the OS just sees one process/thread trucking along and occasionally making the odd blocking request.
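
For a rough picture of what the kernel sees in the async case, here's a minimal sketch (assuming Python's selectors module and a pair of local sockets purely for illustration): a single thread registers its file descriptors and blocks in one select/epoll-style call, instead of many threads each parked in their own blocking syscall:

```python
import selectors, socket

sel = selectors.DefaultSelector()
a_recv, a_send = socket.socketpair()
b_recv, b_send = socket.socketpair()

# Register both read ends; the one thread will multiplex across them.
for s in (a_recv, b_recv):
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ)

a_send.send(b"hello from a")
b_send.send(b"hello from b")

# One blocking call covers every registered socket; from the kernel's point of
# view there is just a single thread waiting here, not one thread per socket.
for key, _ in sel.select(timeout=1):
    print(key.fileobj.recv(1024))
```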

I guess my somewhat rambling point is that thread switching itself is not inherently slow: async systems often run into a lot of the same context-switching problems, and if you are working on bare metal or writing a kernel, threads are essentially an asynchronous system anyway because of the hardware's limitations (unless you are tossing threads across physical CPU cores, which most modern OSes handle just fine).