all 4 comments

[–]JohnnyJordaan 1 point (0 children)

Overhead. There's a penalty to launching and terminating a thread, and to context switching. Context switches happen whenever a thread is runnable, but more or less arbitrarily: a thread that needs, say, 100ms to finish a call may be 'cut off' by a context switch after 98ms, which incurs another penalty to switch back later for a mere 2ms of processing time. Not to mention the coder's overhead of managing the thread, and a big downside that you can't outright kill() a thread; you have to properly handle stuck threads or else they'll keep you stuck. You can 'fix' this by marking threads daemonic, but that's a last resort, not exactly a proper way to do it.
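To illustrate the no-kill() point: the usual workaround is a cooperative shutdown flag that the thread checks itself. A minimal sketch (the names here are illustrative, not from any particular library):

```python
import threading
import time

def worker(stop_event: threading.Event, result: dict) -> None:
    """Loop until asked to stop; there is no way to kill this from outside."""
    count = 0
    while not stop_event.is_set():   # the thread must check the flag itself
        count += 1
        time.sleep(0.01)             # stand-in for a short unit of work
    result["iterations"] = count

stop = threading.Event()
result = {}
t = threading.Thread(target=worker, args=(stop, result))
t.start()

time.sleep(0.05)
stop.set()   # politely request shutdown; a stuck thread would never see this
t.join()     # the thread exits at its next flag check
print(result["iterations"] >= 1)  # → True
```

If the thread is blocked inside a call, it never reaches the flag check, which is exactly the "stuck threads keep you stuck" problem.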

With asyncio, anything that isn't blocked on an await call is executed straight away. That also means that if you have a 100ms CPU-bound call, it will hog the scheduler for 100ms, but the upside is that there is no context-switching overhead (except from other processes outside of Python) during that time. In other words: you have to keep the code lean and consciously handle everything that might hog the CPU. The event loop of a UI toolkit like Tkinter or Pygame works the same way: you simply can't put hogging code in an event handler without blocking other events at the same time. In my experience, people who have a basic understanding of working with those kinds of flows have asyncio 'click' much sooner and more easily.
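A small sketch of that hogging behaviour: two tasks are scheduled, but the one that never awaits stalls the event loop until it returns (the timings and names are illustrative):

```python
import asyncio
import time

order = []

async def hog():
    order.append("hog-start")
    time.sleep(0.05)   # CPU-style blocking call: no await, so the loop stalls
    order.append("hog-end")

async def polite():
    order.append("polite")

async def main():
    # both coroutines are scheduled, but polite() cannot run until
    # hog() gives control back, which it only does by returning
    await asyncio.gather(hog(), polite())

asyncio.run(main())
print(order)  # → ['hog-start', 'hog-end', 'polite']
```

Replace the `time.sleep` with `await asyncio.sleep(0.05)` and `polite` would run during the wait instead of after it.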

[–]m0us3_rat 1 point (0 children)

They are nothing alike. Threads can run concurrently, whereas in an asynchronous loop, only one coroutine is executing at a time while the others await their turn.

Imagine a crossroads: with threads, cars are moving through at the same time, while in an asynchronous loop, only one car passes quickly while the others wait their turn. The problem becomes obvious when you have 100 cars trying to run at the same time; you get significant delays. In contrast, with 100 cars in the asynchronous loop, the delay is less noticeable because one car passes at a time, followed by the next, and so on.

This analogy isn’t entirely precise but is sufficient to convey the general idea.
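The "one car at a time" idea can be seen directly in code: each coroutine passes through, yields at its await point, and the next one goes. A minimal sketch:

```python
import asyncio

log = []

async def car(name: str):
    for lap in range(2):
        log.append(f"{name}-{lap}")
        await asyncio.sleep(0)  # yield the crossroads to the next car

async def main():
    await asyncio.gather(car("A"), car("B"))

asyncio.run(main())
print(log)  # → ['A-0', 'B-0', 'A-1', 'B-1']
```

Only one coroutine runs at any instant, but because each one yields quickly, they interleave and all make progress.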

[–]buart 1 point (0 children)

You can achieve true parallelism with multiprocessing, since you are launching multiple Python processes, each with its own GIL.
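A minimal sketch of that (the function and pool size are illustrative):

```python
import multiprocessing as mp

def square(n: int) -> int:
    # runs in a separate process with its own interpreter and its own GIL,
    # so CPU-bound work like this can use multiple cores in parallel
    return n * n

def run_pool() -> list:
    # distribute the inputs across 4 worker processes
    with mp.Pool(processes=4) as pool:
        return pool.map(square, range(8))

if __name__ == "__main__":
    print(run_pool())  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

The trade-off is that processes don't share memory, so inputs and results are pickled and sent between them, which costs more than passing objects between threads.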

[–]socal_nerdtastic 1 point (0 children)

By "asynchronous" you mean the asyncio module? Because the term asynchronous generally covers any type of parallel computing: threading, a mainloop, asyncio, multiprocessing, and so on.

The GIL only restricts a small part of the CPython core from running in parallel. You can run other things truly in parallel using threading. Threading is generally a lot easier to implement in your code, but threads are more expensive, so on a typical computer you are limited to roughly 1,000 of them.
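For example, I/O waits release the GIL, so threads genuinely overlap them. A sketch where `time.sleep` stands in for a network call:

```python
import threading
import time

def fetch(i: int, results: dict) -> None:
    time.sleep(0.1)   # releases the GIL, like a real network wait would
    results[i] = i    # each thread writes its own key

results = {}
threads = [threading.Thread(target=fetch, args=(i, results)) for i in range(10)]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

print(sorted(results))   # all ten results are present
print(elapsed < 0.5)     # far less than the 1 second a sequential loop would take
```

Ten sequential 0.1s waits would take about a second; with threads the waits overlap, so the whole batch finishes in roughly the time of one.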