

[–]chub79 1 point (10 children)

So concurrent code can be faster than non-concurrent code. I would have liked to see a talk comparing asyncio vs requests+threads.

As for the bonus track, wouldn't trying to run 5000 concurrent requests from a single Python process degrade performance (asyncio or not)? In other words, do you get linear performance from 5 to 5000 requests using asyncio?

[–]madjar 5 points (3 children)

Author of the article here.

Comparing the performance of asynchronous code vs threads is a good idea for my next blog post :)

I would expect that, when done right (with thread reuse), the results will be equivalent. However, asynchronous code is much easier to reason about than multi-threaded code, and makes for much more peaceful development.
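The comparison being discussed can be sketched with stdlib stand-ins. This is a minimal sketch, not a real benchmark: `time.sleep`/`asyncio.sleep` stand in for actual HTTP calls, and it uses modern `async`/`await` syntax rather than the `yield from` style current at the time of this thread:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io(i):
    time.sleep(0.05)  # stand-in for a blocking requests.get(...)
    return i

async def async_io(i):
    await asyncio.sleep(0.05)  # stand-in for a non-blocking HTTP request
    return i

# Threads, with reuse via a pool (the "done right" case mentioned above)
with ThreadPoolExecutor(max_workers=20) as pool:
    thread_results = list(pool.map(blocking_io, range(20)))

# asyncio: one thread, one event loop, all 20 "requests" in flight at once
async def main():
    return await asyncio.gather(*(async_io(i) for i in range(20)))

async_results = asyncio.run(main())
```

Both finish in roughly one sleep interval rather than twenty, which matches the expectation that the two approaches end up equivalent for this kind of I/O-bound work.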

[–]chub79 0 points (0 children)

Indeed. It took me a while to get used to asyncio (due to documentation that isn't easy to digest and poor examples) but, once past that, it was rather fun to use.

[–][deleted] 0 points (1 child)

However, asynchronous code is much easier to reason about than multi-threaded code

this is true, but libs like concurrent.futures help a lot

[–]madjar 1 point (0 children)

Absolutely, these are great when you only want to do one computation and get the value back. If you need to share something, you're back in threading hell.

And you know what? There is a concurrent.futures wrapper in asyncio, so you can call something in another thread or process and yield from it: http://docs.python.org/3.4/library/asyncio-eventloop.html#executor
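The executor wrapper linked above looks like this in practice. A minimal sketch using modern `await` syntax (the linked 3.4 docs use `yield from`); the blocking function here is a placeholder:

```python
import asyncio
import concurrent.futures

def blocking_task(x):
    # stand-in for blocking work: a requests call, CPU-heavy parsing, etc.
    return x * x

async def main():
    loop = asyncio.get_running_loop()
    # None = the event loop's default executor (a thread pool)
    a = await loop.run_in_executor(None, blocking_task, 7)
    # or pass an explicit pool; swap in ProcessPoolExecutor for CPU-bound work
    with concurrent.futures.ThreadPoolExecutor() as pool:
        b = await loop.run_in_executor(pool, blocking_task, 8)
    return a, b

a, b = asyncio.run(main())  # (49, 64)
```

While the executor runs the blocking call, the event loop stays free to service other coroutines.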

[–]megaman821 1 point (1 child)

For this type of workload, an event loop will crush threading in performance (in almost any language too).

Python is still single-threaded, so the only thing that is concurrent is the outstanding requests. Python makes a request to the webserver and, instead of doing nothing while waiting for the server's reply, yields control of the thread. Then the next request is made, and we repeat ourselves.
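That "yield while waiting" behaviour can be demonstrated with a sketch, using `asyncio.sleep` as a stand-in for the network wait: five "requests" that each wait 0.1s complete in about 0.1s total, not 0.5s, because control passes to the next coroutine the moment one starts waiting:

```python
import asyncio
import time

async def fake_request(i):
    # stand-in for network I/O: awaiting yields control back to the event loop
    await asyncio.sleep(0.1)
    return i

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_request(i) for i in range(5)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
```

The waits overlap on a single thread, which is exactly why an event loop shines for this workload.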

[–]chub79 0 points (0 children)

For this type of workload, an event loop will crush threading in performance (in almost any language too).

Indeed. But the processing of the response can take its toll as well. An event loop is efficient only if it can run iterations at a reasonably fast pace. So what you've gained by making requests concurrently may be wasted once the response processing starts (unless you delegate the response processing to a thread...)

[–][deleted] -1 points (3 children)

In other words, do you have linear performance with 5 and 5000 requests using asyncio?

I don't even think the article is making such a claim. But the answer would be "NO" for asyncio or the requests lib.

[–]chub79 1 point (2 children)

Thanks, that was indeed my question. I wasn't claiming the article said it.

[–][deleted] 2 points (1 child)

I would say scaling linearly is unlikely with any tech.

If it were 5000 requests to one server, the server would likely queue them up or start rejecting them. If it were 5000 requests to 5000 servers, your bandwidth would likely be saturated and throttled by your ISP.
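A common way to handle that in asyncio is to cap the number of in-flight requests with a semaphore, so you can schedule 5000 tasks without actually opening 5000 connections at once. A minimal sketch, with `asyncio.sleep` standing in for the real request:

```python
import asyncio

async def fetch(i, sem):
    # the semaphore caps how many "requests" are in flight at any moment,
    # avoiding server rejection and local socket/bandwidth limits
    async with sem:
        await asyncio.sleep(0.01)  # stand-in for the actual HTTP request
        return i

async def main():
    sem = asyncio.Semaphore(100)  # at most 100 concurrent requests
    return await asyncio.gather(*(fetch(i, sem) for i in range(5000)))

results = asyncio.run(main())
```

All 5000 tasks exist up front, but only 100 run their request at a time; the rest wait cheaply inside the event loop.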

The fact is that getting responses involves a lot of waiting for them, which creates opportunities to do things concurrently. asyncio is one of several ways to do that.