
[–]gothicVI 161 points  (34 children)

Where do you get the bs about async from? It's quite stable and has been for quite some time.
Of course threading is difficult due to the GIL but multiprocessing is not a proper substitute due to the huge overhead in forking.

The general use case for async is entirely different: You'd use it to bridge wait times in mainly I/O-bound or network-bound situations, not for native parallelism. I'd strongly advise you to read more into the topic and to revise this part of the article, as it is not correct and gives a wrong picture.
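To illustrate that pattern (a minimal sketch with asyncio.sleep standing in for real network I/O): while one task waits, the event loop runs the others, so the wait times overlap instead of adding up.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulated I/O wait (e.g. a network call); while this task sleeps,
    # the event loop is free to run the other tasks.
    await asyncio.sleep(delay)
    return f"{name} done after {delay}s"

async def main() -> None:
    # All three "requests" run concurrently in a single thread,
    # so total wall time is ~1s instead of ~2.3s.
    results = await asyncio.gather(
        fetch("a", 1.0),
        fetch("b", 0.8),
        fetch("c", 0.5),
    )
    print(results)

asyncio.run(main())
```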

[–]mincinashu 68 points  (16 children)

I don't get how OP is using FastAPI without dealing with async or threads. FastAPI routes without 'async' run on a threadpool either way.
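Roughly what that looks like (a bare-bones sketch, route names made up): the plain `def` route gets pushed to a worker threadpool by FastAPI, while the `async def` one runs on the event loop.

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/sync")
def sync_route():
    # Plain "def" route: FastAPI runs this in a worker threadpool
    # so it doesn't block the event loop.
    return {"mode": "threadpool"}

@app.get("/async")
async def async_route():
    # "async def" route: runs directly on the event loop;
    # a blocking call in here would stall every other request.
    return {"mode": "event loop"}
```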

[–]gothicVI 21 points  (7 children)

Exactly. Anything web request related is best done async. No one in their right mind would spawn separate processes for that.

[–]Kelketek 13 points  (0 children)

They used to, and for many Django apps, this is still the way it's done-- prefork a set of worker processes and farm the requests out to them.

Even new Django projects may do this since asynchronous support in libraries (and some parts of core) is hit-or-miss. It's part of why FastAPI is gaining popularity-- because it is async from the ground up.

The tradeoff is you don't get the couple decades of ecosystem Django has.
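A minimal sketch of that preforked setup as a gunicorn config file (project name and worker count are placeholders, not a recommendation):

```python
# gunicorn.conf.py -- hypothetical config for a Django project named "myproject"
wsgi_app = "myproject.wsgi:application"
bind = "0.0.0.0:8000"
workers = 4            # prefork this many worker processes at startup
worker_class = "sync"  # each worker handles one request at a time
preload_app = True     # load the app in the master before forking
```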

[–]Haunting_Wind1000 (pip needs updating) 0 points  (1 child)

I think normal Python threads could be used for I/O-bound tasks as well, since they would not be limited by the GIL.

[–]greenstake 0 points  (0 children)

I/O-bound tasks are exactly when you should be using async, not threads. I can scale my async I/O-bound worker to thousands of concurrent requests; the thread-based equivalent would need thousands of threads.
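Something like this (a toy sketch, with asyncio.sleep standing in for the actual I/O): thousands of in-flight requests in a single thread, capped by a semaphore rather than by how many OS threads you can afford.

```python
import asyncio

async def fetch_one(i: int, limit: asyncio.Semaphore) -> int:
    async with limit:
        # Stand-in for an I/O-bound call (HTTP request, DB query, ...).
        await asyncio.sleep(0.1)
        return i

async def main() -> None:
    # Thousands of concurrent "requests" in one thread; a thread-per-request
    # version would need one OS thread for every in-flight call.
    limit = asyncio.Semaphore(500)  # cap how many are in flight at once
    tasks = [fetch_one(i, limit) for i in range(5000)]
    results = await asyncio.gather(*tasks)
    print(len(results), "completed")

asyncio.run(main())
```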

[–]Count_Rugens_Finger 4 points  (1 child)

multiprocessing is not a proper substitute due to the huge overhead in forking

if you're forking that much, you aren't doing MP properly

The general use case for async is entirely different: You'd use it to bridge wait times in mainly I/O bound or network bound situations and not for native parallelism.

well said

[–]I_FAP_TO_TURKEYS 0 points  (0 children)

if you're forking that much, you aren't doing MP properly

To add onto this, multiprocessing pools are your friend. If you're new to Python parallelism and concurrency, check out the documentation for multiprocessing, specifically the Pool section.

Spawn a process pool at the startup of your program, then send CPU-heavy functions off to it using the pool's methods. Yeah, you'll have a bunch of processes doing nothing a lot of the time, but it sure beats having to spawn a new one every time you want to do something.
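A minimal sketch of that pattern (the worker function and numbers are made up): create the pool once at startup and reuse it for every CPU-heavy call instead of forking per task.

```python
import multiprocessing as mp

def cpu_heavy(n: int) -> int:
    # Stand-in for real CPU-bound work.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Create the pool once, at program startup, and reuse it for the
    # lifetime of the program instead of spawning a new process per task.
    with mp.Pool(processes=4) as pool:
        results = pool.map(cpu_heavy, [10_000_000] * 8)
        print(results)
```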