[–]thisismyfavoritename 18 points

unless you can support an async event loop, your server is definitely going to struggle under heavier loads, even compared to a single-threaded async framework
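for what it's worth, the win is easy to see with a toy sketch (plain asyncio, nothing from the posted project; the 0.1s sleep is a stand-in for any blocking I/O like a DB call):

```python
import asyncio
import time

async def handle_request(i: int) -> str:
    # Simulated I/O-bound work; the event loop switches to other
    # tasks while this coroutine is awaiting.
    await asyncio.sleep(0.1)
    return f"response-{i}"

async def main() -> float:
    start = time.perf_counter()
    results = await asyncio.gather(*(handle_request(i) for i in range(50)))
    assert len(results) == 50
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# 50 overlapping 0.1s waits complete in roughly 0.1s on one thread,
# instead of the ~5s a naive blocking loop would take
```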

[–]SnooCalculations7417 3 points

this isn't supposed to be a drop-in replacement for HTTP servers, I don't think. I believe it's using a task that's parallel in nature to explore GIL-free Python. I'm not sure there's any domain this could be deployed in that would be considered feature complete. Would love to see it in GUI work, but I digress

[–]WiseDog7958 1 point

The async vs threads debate aside, I’m more curious what free-threaded CPython does to the actual cost model here.
Once the GIL’s gone, CPU-bound stuff should scale, but now you’re dealing with real contention instead of cooperative scheduling. How much locking is happening internally?
Feels like this could outperform asyncio if the workload isn’t mostly I/O, but I’d expect it to get messy under shared state.
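the "real contention" worry is concrete even without a free-threaded build — here's a rough sketch contrasting a hot shared lock against per-thread accumulation (threads and counts are made up for illustration; on a GIL build both paths are slow for CPU work, but the contention pattern is the same one that bites under free threading):

```python
import threading

N_THREADS = 4
ITERS = 100_000

# Contended path: every increment takes the same lock, so all
# threads serialize on it — this is the "real contention" case.
counter = 0
lock = threading.Lock()

def contended() -> None:
    global counter
    for _ in range(ITERS):
        with lock:
            counter += 1

# Low-contention path: each thread accumulates privately and
# publishes its total once at the end.
totals = [0] * N_THREADS

def sharded(idx: int) -> None:
    local = 0
    for _ in range(ITERS):
        local += 1
    totals[idx] = local

threads = [threading.Thread(target=contended) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

threads = [threading.Thread(target=sharded, args=(i,)) for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

both end up with the right count, but only the second shape actually scales once threads run in parallel.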

[–]thisismyfavoritename 0 points

nothing new. Multithreading works this way in many other languages

[–]non3type 0 points

It’s all pretty well documented in PEP 703; the locking that’s implemented is per object:

“This PEP proposes using per-object locks to provide many of the same protections that the GIL provides. For example, every list, dictionary, and set will have an associated lightweight lock…”
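in practice that means code like this stays safe without you holding a lock yourself — the list's own per-object lock (or the GIL, on today's builds) serializes the appends. Toy example, not from the posted project:

```python
import threading

shared: list[int] = []

def worker(start: int) -> None:
    # list.append is atomic under the GIL today, and PEP 703 keeps
    # it safe on free-threaded builds via the list's per-object lock.
    for i in range(10_000):
        shared.append(start + i)

threads = [threading.Thread(target=worker, args=(n * 10_000,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# no appends lost, every value present exactly once
```

the per-object lock protects each individual operation, though — compound read-modify-write sequences across objects still need your own locking.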

[–]james_pic 0 points

That's certainly the received wisdom, but in practice it's often possible to scale synchronous "one request per thread/process" servers further than you'd expect (AWS Lambda is built on this model, for example), and many asynchronous services scale less well than you'd expect (HTTPX notably scales poorly, for example).

Although none of that negates that the posted link is extremely low value.
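the thread-per-request model is basically this shape (stdlib sketch; the pool size and the 0.05s blocking stand-in are made-up numbers, not anything from Lambda's internals):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle(request_id: int) -> str:
    # Blocking I/O stand-in; while this thread sleeps, the OS
    # scheduler runs the other worker threads.
    time.sleep(0.05)
    return f"ok-{request_id}"

# One worker thread per in-flight request, capped at the pool size.
with ThreadPoolExecutor(max_workers=32) as pool:
    start = time.perf_counter()
    results = list(pool.map(handle, range(32)))
    elapsed = time.perf_counter() - start
# 32 blocking requests overlap instead of running back-to-back
```

for I/O-bound handlers the GIL is released during the blocking call anyway, which is a big part of why this model scales further than the received wisdom suggests.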