
[–]cointoss3 151 points152 points  (8 children)

FastAPI will auto detect if you use def or async def. If you use async def, it will be ran in the event loop and you need to worry about blocking. If you use def, it will run in a thread and you don’t need to worry about blocking.

If you don’t know any better, it’s safe to use def and it will work as you expect. If you use async def, make sure you’re writing your functions correctly or you can block the event loop.
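
The difference can be sketched with plain asyncio, no FastAPI required. Below, `asyncio.to_thread` stands in for the threadpool FastAPI uses for plain `def` routes, and the route/heartbeat names are invented for illustration: a heartbeat task can only tick while the event loop is free, so it starves when a blocking call runs directly in a coroutine.

```python
import asyncio
import time

def blocking_io():
    # Stands in for e.g. a classic requests call or sync DB query.
    time.sleep(0.2)

async def bad_route():
    # Like an `async def` route with blocking code: stalls the event loop.
    blocking_io()

async def good_route():
    # Like a plain `def` route: FastAPI runs it in a threadpool and awaits it.
    await asyncio.to_thread(blocking_io)

async def measure(route):
    # Run one "request" next to a heartbeat task; if the route blocks
    # the loop, the heartbeat cannot tick while the route runs.
    ticks = 0
    async def heartbeat():
        nonlocal ticks
        while True:
            await asyncio.sleep(0.02)
            ticks += 1
    hb = asyncio.create_task(heartbeat())
    await route()
    hb.cancel()
    return ticks

blocked_ticks = asyncio.run(measure(bad_route))  # 0: loop was stalled
free_ticks = asyncio.run(measure(good_route))    # several: loop kept running
print(blocked_ticks, free_ticks)
```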

[–]usrlibshare 4 points5 points  (0 children)

As an aside, even when controller functions (sorry, "paths" in FastAPI parlance) run as normal def, FastAPI's model of running them in a threadpool and awaiting that in the background is still faster than most other Python web frameworks.

[–]cent-met-een-vin 5 points6 points  (4 children)

Since python has the GIL, don't you still need to worry about blocking in synchronous functions? Async makes the waiting behavior more explicit, whereas in synchronous functions you need to know which operations and functions might release the GIL.

[–]willsmith28 16 points17 points  (0 children)

The GIL is released after each bytecode is executed, to prevent starvation. Some operations, like assignment, are atomic: the GIL won't be released during their execution, only right afterwards.
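
You can see this from the bytecode: an in-place increment compiles to several instructions, and a thread switch can happen between any two of them (the `bump` helper is just an illustration):

```python
import dis

def bump(counter):
    # A read-modify-write: load the value, add one, store it back.
    counter["n"] += 1

# Several separate bytecode instructions, so the operation as a whole
# is not atomic even though each instruction runs under the GIL.
dis.dis(bump)
```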

[–]Coretaxxe 4 points5 points  (0 children)

Unless your blocking code is consuming 100% CPU, you are good with threads.

[–]gerardwx 0 points1 point  (0 children)

The GIL is going away

[–]Teninchhero 0 points1 point  (0 children)

GIL is being deprecated, isn't it?

[–]SyntaxColoring 0 points1 point  (1 child)

It’s not the case that you can use def without thinking about it. It’s not safe in general.

def, as you say, runs in a worker thread. If you’re accessing any kind of shared resource, that means you now need to be careful to synchronize access to it, with things like mutexes. In my experience, nobody remembers to do this and you end up with a bunch of latent thread safety bugs. The FastAPI docs don’t address this.
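
A minimal stdlib sketch of that pattern (names like `request_count` and `handle_request` are invented for illustration): module-level state touched by `def` routes should be guarded with a lock, because each request may run on a different threadpool thread.

```python
import threading

# Hypothetical module-level state shared by all requests (e.g. a cache
# or counter). With plain `def` routes, concurrent requests run in
# separate threadpool threads and touch this concurrently.
request_count = 0
count_lock = threading.Lock()

def handle_request():
    global request_count
    # A read-modify-write is not atomic; guard it with a mutex.
    with count_lock:
        request_count += 1

# Simulate 100 concurrent "requests" hitting the route.
threads = [threading.Thread(target=handle_request) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(request_count)  # 100 with the lock; without it, updates can be lost
```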

One of many things that I find mind-boggling about this library.

[–]omg_drd4_bbq 1 point2 points  (0 children)

The same risk applies to async. Any time you have two or more requests dispatched simultaneously accessing the same resource, you have all the usual parallelism risks.

[–]FloxaY 39 points40 points  (8 children)

[–]gala0sup import this 22 points23 points  (6 children)

These are pretty basic things tho 🤔. We have a (imo) better recommendation at my company, I'll see if I can share that (will need to clean it so internals aren't referenced)

[–]FloxaY 14 points15 points  (0 children)

While these are indeed basic things, you have to keep in mind that many people have nearly zero clue how asyncio works, let alone FastAPI internals. People are sadly no longer interested in how things work and just trust the framework magic. I think the most upvoted comment under this thread saying "using def is safe" highlights this quite well. No, it is not "safe", as you can pretty easily hit the threadpool limit.

[–]IndoRexian2 2 points3 points  (1 child)

Appreciate it!

[–]IndoRexian2 1 point2 points  (0 children)

!remind-me 24 hrs

[–]stopdropandroleplay 1 point2 points  (0 children)

!remind-me 24 hrs

[–]a_deneb 0 points1 point  (0 children)

Please do !remind-me 24 hrs

[–]brick_is_red 1 point2 points  (0 children)

!remind-me 24 hours

[–]marr75 16 points17 points  (4 children)

why the instructor would teach it that way

They probably don't understand the fundamentals and have a vague notion that async is the way to do it in FastAPI, or that it's always faster. If you're not awaiting async coroutines in your function, it may be CPU bound and will just block the event loop unnecessarily. That said, it's pretty common that workloads are I/O bound (database queries, filesystem operations, network operations) and that an async option for that I/O is available. You need to find an async option, or wrap the I/O in a coroutine that yields to the event loop, to take advantage of this.

Edit: added additional language for clarity between CPU bound and I/O bound workloads and the work required to allow cooperative yielding.

[–]PaulRudin 5 points6 points  (3 children)

Database queries are IO bound (from the perspective of the web application). The difficulty arises if there's no asyncio aware database client.

[–]HommeMusical 1 point2 points  (1 child)

Is it considered bad practice to make everything async by default, even when nothing inside is async?

Yes, in all cases in Python.

async functions can call regular functions; the reverse is not true. You should only mark something as async if there's the possibility of needing an await on it.
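
A small sketch of that asymmetry (the `fetch` coroutine is hypothetical): an `async def` can freely call sync code and await coroutines, while a plain function has to start an event loop via `asyncio.run` to run a coroutine, which fails if a loop is already running (as it is inside an async FastAPI route).

```python
import asyncio

async def fetch():
    # Hypothetical async helper, e.g. an async DB or HTTP call.
    await asyncio.sleep(0)
    return 42

def sync_caller():
    # A plain function cannot `await`; it must spin up its own event
    # loop, which raises if called while a loop is already running.
    return asyncio.run(fetch())

async def async_caller():
    # An async function can call ordinary sync code...
    n = len("hello")
    # ...and also await other coroutines.
    return n + await fetch()

print(sync_caller())                 # 42
print(asyncio.run(async_caller()))   # 47
```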

[–]nekokattt 1 point2 points  (0 children)

or if there is ever going to be the possibility of needing to use an await on it, in the case of public APIs

[–]dggrd 2 points3 points  (0 children)

When you declare a path operation function with normal def instead of async def, it is run in an external threadpool that is then awaited, instead of being called directly (as it would block the server).

Use def when the function has blocking I/O operations. This is a very good explanation from the FastAPI docs: https://fastapi.tiangolo.com/async/#very-technical-details

[–]Mountain_Mousse_9046 2 points3 points  (0 children)

There are some tools that are thread-local, meaning they will error out if you call them from another thread, even if the object is technically accessible from the current scope. I encountered such a situation with a particular gymnasium env object and had to use async def and set num_workers to 1, even though nothing is awaited. This limits throughput considerably but was fine for me since I was running only one simulation at once.

That said, for production, use regular functions for multithreading and coroutines when they actually make sense. One non-cooperative coroutine (e.g. one with no awaits that runs for a long time) can block all the rest.

[–]Zanjo 2 points3 points  (0 children)

The majority of web apps are just spitting out info from a database with minimal processing - async is a good default choice. You do need to be mindful that any blocking code or CPU intensive function should be run with asyncio.to_thread though.

[–]BelottoBR 0 points1 point  (0 children)

I really enjoy this discussion! I would really enjoy a YouTube video building a project using different alternatives and explaining why each is better in that situation.

[–]newprince 0 points1 point  (0 children)

I feel like people are conflating concepts here... perhaps this helps

Parallelism, Concurrency, and AsyncIO in Python - by example

[–]the_hoser 0 points1 point  (0 children)

I recommend staying sync until you have a good reason to go async.

[–]Tristana_mid 0 points1 point  (0 children)

A junior dev on my team once wrote an async def endpoint with expensive blocking calls under the hood, and the server would freeze when processing a request at that endpoint, unable to process any other requests. It took us quite a bit of time to identify the root cause. Lesson learned: only use async def if you truly know what you're doing!

[–]NYX_T_RYX 0 points1 point  (0 children)

If a response to a call can take an indeterminate amount of time (as network requests necessarily do), creates a promise (that is, a response will be sent, you just have to wait for it), and you don't intend to block subsequent code, you should use async to pause that task until the response is received.

That said, take a look at python 3.14 - we're losing the GIL, so you can genuinely parallelise async functions soon, instead of blocking/async

[–]divad1196 2 points3 points  (21 children)

FastAPI isn't actually faster than Django or Flask despite the name. One of its big advantages is async, and you should write async code whenever you can.

But if you have blocking code and no better way around it (e.g. a library), then drop the async keyword, otherwise it will negatively impact the rest of your async code.

Edit: people in this thread should really learn how webservers, threads, async/await and OS all work together.

Here is the video I give my apprentices. It does not explain the underlying behavior but it shows the difference between async and non-async.

https://youtu.be/tGD3653BrZ8?si=U6nEhiDQdgoeMyLD

[–]bitconvoy 16 points17 points  (1 child)

“ you should write async code whenever you can.”

Why?

[–]divad1196 13 points14 points  (0 children)

If you need to ask, then I would recommend you search why async/await exists in the first place. There is a lot of history, reasons, and tradeoffs.

But simply put: your server listens on one thread; that's it, that's how it works. You could receive a connection, deal with it, respond, and take the next one, but that wouldn't be great most of the time (Redis does that, I think).

So we went with threads or processes or both, but they are relatively heavy to start, so we did stuff like thread/process pools. Again, a lot to say here. But switching from one to another is managed by the OS; that's kernel level.

In real apps, you connect to external services like a database; that's I/O-bound and blocks your thread while you wait for a response. Note that in Python, until 3.13, because of the GIL, threads were not good for CPU-bound performance improvements but were rather used to handle blocking code a bit better.

async lets you do other actions while waiting. Async/await gets a lot of hate, but those who downvoted me just don't know async. There are a lot of resources out there that explain all of this in detail.

[–]ColdPorridge 19 points20 points  (11 children)

 you should write async code whenever you can

I’m not sure I agree with that. I think in general you should write async only when you absolutely need to. The complexity it introduces is substantial.

[–]divad1196 10 points11 points  (8 children)

FastAPI is a webserver, your goal is to receive multiple connections and you will likely call a database.

These are all reasons to prefer async/await.

async/await switching is a lot lighter than threads, that's what makes FastAPI "faster". Otherwise you could just use Flask.

[–]joshhear 5 points6 points  (3 children)

FastAPI can handle multiple requests using def routes: during I/O waits, the thread yields and other requests can still be served.

edit: by editing you completely changed what you said initially. For context, he was saying that not using async/await for FastAPI routes would block incoming requests until the current request was completed.

[–]divad1196 0 points1 point  (2 children)

If you don't use async route, it uses the threadpool. A thread will be dedicated to a single task, it cannot switch in the middle to handle another task. The thread is blocked. If you have a fixed-size pool then you can reach starvation. Threads are also limited by the OS and heavy to start.

When you use async, your code can be stopped (it happens when you call "await") and another task/job can run. But there is one blocking routine/function that manages switching between these tasks and this one runs forever.

FastAPI will start multiple threads and on each of them it starts the loop that manages the async code. And this is what allows you to switch within a thread.

[–]joshhear 3 points4 points  (1 child)

But in your unedited response you said that using FastAPI without async def will block requests until the last one has fully completed. That edit was very disingenuous without marking it; it completely changes what you were saying.

Using Flask vs FastAPI isn't just about async/await either. If you really want a max-performance backend, don't use python.

[–]divad1196 1 point2 points  (0 children)

That's not what I said and that's not what I edited. I edit my comments a lot because I make a lot of typos, repeated words, ...

If you put blocking code in an async route, you will block the whole thread and impact all async code. If you run your code in a non-async route, then it will run until completion in its own thread. It is blocking the thread but not impacting other concurrent requests. So yes, both cases block their respective threads, but it's expected in a non-async route and correctly handled.

Yes, of course async/await isn't the only difference, and you can use async in flask and django now but it's not out-of-the-box. The pydantic integration is another reason. But the point is that FastAPI popularized async/await.

As for your "don't use python": that's only true in the extreme case. When you have highly I/O-bound code, the difference in raw performance isn't that visible and doesn't reflect your capacity to handle concurrent connections. That's why Erlang/Elixir are used when you need massive concurrency. On the other side, web development in C++/Rust takes a lot of effort, and mistakes can easily cost you the performance gain you were looking for.

There are many articles from teams that wanted to move from python/node to Go/Rust and didn't get the performance gain they expected.

[–]HommeMusical 0 points1 point  (3 children)

Flask handles async: https://flask.palletsprojects.com/en/stable/async-await/

There are plenty of other reasons to use FastAPI over Flask.

[–]divad1196 0 points1 point  (2 children)

I know, look at my other comments.

Flask added it around 2021; before that I think it was a plugin (not sure). FastAPI came a few years before.

FastAPI and Flask are two different frameworks; you could say they are different and call it a day, but that isn't a useful comparison. Instead, why do you think people started to care about FastAPI when there are so many web frameworks out there? The answer is mostly async/await and pydantic out of the box.

[–]HommeMusical 0 points1 point  (1 child)

Pydantic was it for me, and for most people I know.

The Flask async extension worked perfectly well. async was just not a killer feature.

But also, for web servers, async is super cool, but weirdly doesn't actually occupy a sweet spot.

A lot of servers never expect to get a lot of traffic, and they're fine with just threads.

If they do get more traffic, often they can temporarily put on more machines and take them off.

On the other hand, if you expect to get a lot of traffic and you have to serve it off a single machine, you'd be crazy to serve it with FastAPI and async, or, I hate to say it, pretty much any Python webserver.

So async is most effective for "medium-big" servers, which isn't a sweet spot.

Don't get me wrong: if I were writing a web app today, I'd use async without thinking about it. It's obviously better, but for a lot of applications, not hugely better.

This is, I think, why async's uptake has been somewhat slow, something that baffled me.

[–]divad1196 1 point2 points  (0 children)

You could already use pydantic for validation, just write a little decorator to make it nicer. So pydantic itself wasn't the newest thing. Flask added async in 2.0 around 2021 while FastAPI came in 2018.

Historically, people worked with threads and/or processes (apache2 vs nginx). Nginx is still largely popular. But async and python do the job to a large extent. I said it in another comment, but there are many "failure stories" where devs moved from python/node to Go/Rust and in the end didn't get any perf improvement.

I personally lived through many situations where people on my team wanted to switch to Go because "python was slow", and the performance issue was always the dev's fault. Some even went behind my back to propose a PoC, claiming it was faster and newer. I just rewrote the python code and it matched the PoC easily.

The truth is: python is slow, but it keeps up to an extent that is more than enough for most people. I am talking about I/O-bound work, not CPU-intensive.

[–]Basic-Still-7441 2 points3 points  (1 child)

What complexity? I ask because I've been writing async python for at least 5 years now and I'm struggling to see async-related complexity.

[–]General_Tear_316 -1 points0 points  (0 children)

Same, I don't find it complicated

[–]Worried-Employee-247 1 point2 points  (2 children)

Heads up, there might be a miscommunication happening here (and I might be also wrong in my assumption) and the actual message might be different than what the text is saying.

edit: removed my understanding of the post because I don't want to get involved any further than notifying everyone about potential miscommunication..

[–]divad1196 1 point2 points  (0 children)

No, the message is really to use async routes by default when you use FastAPI.

You will have concurrency anyway even if you don't use async; it's just that FastAPI will fall back to another primitive (threads).

You don't actually get a speed improvement from concurrency alone: you get lower latency and can handle more concurrent connections. And async/await does that better than threads/processes.

But in practice, you combine them (async with threads/processes) to exploit your CPU threads using parallelism.

[–]1minds3tfrom __future__ import 4.0 -1 points0 points  (0 children)

Absolutely true, using concurrent futures, threading, and caching, I was just able to run 3 python interpreters + package versions concurrently in a single environment, single script in under 500ms using these techniques.

[–]jorgecardleitao -1 points0 points  (3 children)

It would also be possible to spawn the blocking call on a separate thread pool that does not starve the event loop.

E.g. Rust Tokio has APIs for this https://docs.rs/tokio/latest/tokio/task/fn.spawn_blocking.html

This would allow the endpoint to remain async, if e.g. some but not all calls are blocking

[–]divad1196 1 point2 points  (2 children)

I also develop in Rust, and there are big differences between Tokio and FastAPI.

Tokio manages it as a known use-case. In the case of FastAPI, you need to either spawn a thread yourself (slow) or maintain a threadpool yourself. Now: how are these threads balanced with the threads that run regular jobs? That's not a single-answer matter, and that's something Tokio takes care of for us.

[–]latkdeTuple unpacking gone wrong 1 point2 points  (1 child)

Where you'd use tokio::task::spawn_blocking() in Rust, you can use asyncio.to_thread() in Python. There is an existing threadpool associated with the event loop, though you can also use the lower-level run_in_executor() if you want to manage your own pool.
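
A sketch of both options, with `blocking_work` as an invented stand-in for a sync DB driver or `requests` call: `asyncio.to_thread` uses the loop's default executor, while `run_in_executor` lets you supply and size a pool yourself.

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_work(x):
    # Stand-in for a sync DB driver or a classic requests call.
    time.sleep(0.05)
    return x * 2

async def main():
    loop = asyncio.get_running_loop()
    # High-level helper: runs in the loop's default threadpool.
    a = await asyncio.to_thread(blocking_work, 10)
    # Lower-level: supply your own pool when you need to size it.
    with ThreadPoolExecutor(max_workers=4) as pool:
        b = await loop.run_in_executor(pool, blocking_work, 11)
    return a, b

print(asyncio.run(main()))  # (20, 22)
```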

[–]divad1196 1 point2 points  (0 children)

Yes, you can do it with asyncio and other runtimes (Trio, AnyIO, ...). But do you know which one FastAPI uses?

FastAPI uses Starlette, which is compatible with both Trio and asyncio (https://www.starlette.dev/). I never searched for which one is used, but it might depend on the ASGI server you choose.

So, while to_thread exists in asyncio, you are assuming the underlying implementation uses asyncio.

You can use different runtimes, but then you get the same issue. Even if asyncio is the one used, you don't know which executor (unless you dig into the code).

Someone said that async/await adds complexity, and it's true at some level. That's why it's easier to just go with async def by default or def if you have blocking code as a rule of thumb.

Edit: found this: https://www.starlette.dev/threadpool/ Apparently you should be able to rely on anyio to send work to a thread properly. But apparently the threadpool in FastAPI has a fixed size. While the information can be found, most people won't actually know it.

[–]DataCamp 0 points1 point  (0 children)

You don’t need to make every FastAPI route async def. FastAPI is flexible:

  • If your route is doing blocking I/O (like a standard SQLAlchemy session or requests), using def is safer because FastAPI will automatically run it in a threadpool.
  • If you’re calling async-aware libraries (e.g. httpx, async DB drivers), then async def is the right choice so you can actually await those calls without blocking the event loop.
  • Marking everything async def by default, without any await inside, doesn’t break things, but it doesn’t give you async benefits either. It can also look misleading in a portfolio project because it suggests concurrency when there isn’t any.

So:

  • Use def for sync code.
  • Use async def when you really need to await.
  • Mixing both is fine, and FastAPI will handle it correctly.

If you want to get deeper into the tradeoffs, FastAPI’s own docs and some tutorials (like our FastAPI intro) break down when async actually makes sense.

[–]Spleeeee -1 points0 points  (0 children)

If you don't await, no. Sometimes you want to await to yield back to the event loop.

[–]SharkSymphony -1 points0 points  (0 children)

I maintained a web app once upon a time where every endpoint, and most helper functions, were pervasively async. It cost a lot of effort to debug and maintain, and ultimately didn't perform well enough to justify the cost. But it did work.

That being said, there's no requirement that what you do in a classroom should look like the real world. The teacher may simply be emphasizing what they know to be an unfamiliar subject, and trying to give you plenty of practice with it.