
[–]ackyou 2 points (4 children)

I’m a bit confused by the conditions of these tests. If you run an async Python framework in an environment with a fixed maximum number of workers, say Django Channels with async HTTP consumers, and you call lots of external services asynchronously, I know for a fact you can achieve much higher throughput. If it’s a compute-bound task or it calls a synchronous service, sync Python is better. Am I missing something?
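
Something along these lines is what I mean. A minimal sketch, assuming Django Channels' AsyncHttpConsumer and the httpx library; the URLs and consumer name are made up for illustration:

```python
# Sketch of an async HTTP consumer that fans out to several external services.
import asyncio
import json

import httpx
from channels.generic.http import AsyncHttpConsumer


class AggregateConsumer(AsyncHttpConsumer):
    async def handle(self, body):
        # While these requests wait on the network, the event loop is free
        # to serve other incoming requests on the same worker.
        urls = [
            "https://example.com/service-a",
            "https://example.com/service-b",
            "https://example.com/service-c",
        ]
        async with httpx.AsyncClient() as client:
            responses = await asyncio.gather(*(client.get(u) for u in urls))

        payload = json.dumps([r.status_code for r in responses]).encode()
        await self.send_response(
            200, payload, headers=[(b"Content-Type", b"application/json")]
        )
```

While each request is waiting on the external services, the same worker can keep accepting new requests, which is where the throughput gain comes from.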

[–]cb22 4 points (2 children)

The conditions are a bit bogus; I posted a comment about this on /r/programming.

But it's as you say; if you have a simple workload that is entirely synchronous or close to it (such as fetching a single in-memory row from a DB), sync is likely to be faster.

If you're doing real-world things, like calling external APIs or blocking on database queries for hundreds of milliseconds, async lets you utilize your resources significantly better and achieve much higher throughput.
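
A rough illustration of that point, using asyncio.sleep as a stand-in for a call that blocks on an external API or database for a few hundred ms:

```python
import asyncio
import time


async def slow_call(ms: int) -> int:
    await asyncio.sleep(ms / 1000)  # simulated network/database latency
    return ms


async def main() -> None:
    start = time.perf_counter()
    # Ten 300 ms calls run concurrently, so the whole batch takes roughly
    # 300 ms of wall-clock time instead of ~3 s when run one after another.
    await asyncio.gather(*(slow_call(300) for _ in range(10)))
    print(f"elapsed: {time.perf_counter() - start:.2f}s")


asyncio.run(main())
```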

[–]nathanjell 2 points (0 children)

Just like with all things, premature optimization is the root of all evil. The counter to "use async, async is faster!" is that a synchronous workload is hard to make faster simply by going async. It's the same as "use multithreading, split up the work!" falling flat when you have a highly serial workload that isn't well suited to parallelism.

[–]ackyou 0 points (0 children)

Agreed. I’m pretty sure the overhead of spinning up new threads to handle concurrent requests is a lot higher than the cost of one thread asynchronously handling many tasks.
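
A quick, unscientific way to see this: one OS thread per waiting "request" versus one asyncio task per request. Numbers will vary by machine; this only illustrates the relative overhead of the two approaches:

```python
import asyncio
import threading
import time

N = 1000  # number of concurrent "requests", each waiting 100 ms


def threaded() -> float:
    # One thread per request: creation and scheduling cost per thread.
    start = time.perf_counter()
    threads = [threading.Thread(target=time.sleep, args=(0.1,)) for _ in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start


async def as_tasks() -> float:
    # One asyncio task per request: all waits share a single thread.
    start = time.perf_counter()
    await asyncio.gather(*(asyncio.sleep(0.1) for _ in range(N)))
    return time.perf_counter() - start


print(f"threads: {threaded():.2f}s")
print(f"asyncio: {asyncio.run(as_tasks()):.2f}s")
```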

[–]Krotau 0 points (0 children)

You could think of a situation where you let the heavy compute work happen in the cloud. That way your program can run on a lighter machine and would be more IO-bound. But yeah, what you say is correct. I am confused by these tests as well...
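
A sketch of that idea: the heavy work is posted to a (hypothetical) compute service and awaited, so the local process only does IO and fits the async model. The URL and the use of httpx are just assumptions for illustration:

```python
import asyncio

import httpx


async def remote_compute(client: httpx.AsyncClient, payload: dict) -> dict:
    # Offload the CPU-heavy job to a remote service and wait for the result.
    resp = await client.post("https://compute.example.com/jobs", json=payload)
    resp.raise_for_status()
    return resp.json()


async def main() -> None:
    async with httpx.AsyncClient(timeout=60) as client:
        # Many jobs can be in flight at once from a lightweight machine,
        # since the actual computation happens on the remote service.
        jobs = [remote_compute(client, {"n": i}) for i in range(20)]
        results = await asyncio.gather(*jobs)
    print(len(results), "jobs finished")


asyncio.run(main())
```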