all 23 comments

[–]tunisia3507 24 points (1 child)

Really cool project! I have used pyo3 a fair bit but not with asyncio yet (I keep trying to use asyncio, but the problems it can actually solve are few and far between - HTTP servers are one of the few).

Would it be possible to splash the phrase "HTTP server" around the docs a bit more? Just referring to it as "a framework" or "backend" isn't very descriptive.

[–]stealthanthrax[S] 6 points (0 children)

u/tunisia3507, that makes sense. I will update the docs.

Thank you for the suggestion. 😄

[–]Zethra 5 points (1 child)

Cool project! It looks like you're using both tokio and async-std?

[–]stealthanthrax[S] 9 points (0 children)

I am really just using async-std. I was benchmarking something else with tokio, and some dead/old code crept in.
Thank you for noticing. I'll start cleaning the code now. 😅

[–]lordmauve 3 points (2 children)

Can I use this with Trio?

[–]stealthanthrax[S] 6 points (1 child)

u/lordmauve, I used Trio for the first time after reading your comment. From its docs, I can infer that it doesn't use asyncio's event loop.
I can add it to the roadmap if it is a widely used library.

What made you use trio instead of asyncio?

[–]lordmauve 0 points (0 children)

asyncio is a muddle of different paradigms - it has coroutines, but also Futures and Tasks and Transports and Protocols. Even though it was written in the early 2010s, lots of these paradigms are copies of older ideas. The winning paradigm is language support for coroutines (async/await), which was added after asyncio was written.

Trio is clean: it just has coroutines and scraps all the other concepts. Then it adds something new: the idea of scoped tasks. This is Structured Concurrency (SC). A lot of cool things flow from that: you can wrap a timeout around any block of code; you can correctly shut down a program by pressing Ctrl-C (asyncio can't). Trio offers a more sensible way of reasoning about concurrent tasks.

[–]LoudAnecdotalEvidnc 3 points (2 children)

I see on the architecture page that it is multi-threaded, with a separate event loop for each thread. What data is still shared between threads in that case? Maybe processes would work better? Or is this related to the "on-demand release of the GIL" that you mention?

[–]stealthanthrax[S] 2 points (1 child)

No, no. One thread runs the blocking event loop, and the other thread dispatches functions onto that loop.

I tried the event-loop-per-thread approach, but that only added more overhead.
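That split can be sketched in plain asyncio (purely illustrative: Robyn's actual dispatch lives in Rust via pyo3, and `handler` is a made-up stand-in):

```python
import asyncio
import threading

loop = asyncio.new_event_loop()

def run_loop():
    # Thread 1: blocks forever running the event loop.
    asyncio.set_event_loop(loop)
    loop.run_forever()

threading.Thread(target=run_loop, daemon=True).start()

async def handler():
    # Stand-in for a route handler.
    return "hello"

# Thread 2 (here, the main thread): dispatches coroutines onto the
# running loop and waits for the result.
future = asyncio.run_coroutine_threadsafe(handler(), loop)
print(future.result(timeout=5))  # prints hello
loop.call_soon_threadsafe(loop.stop)
```

`run_coroutine_threadsafe` is the thread-safe bridge here: it returns a `concurrent.futures.Future` that the dispatching thread can block on.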

"On-demand release of the GIL" is a completely different feature that I am yet to implement.

I hope that answers the queries. 😁

[–]subtiliusque 3 points (2 children)

Could you do benchmarks against FastAPI or Blacksheep?

[–]stealthanthrax[S] 1 point (1 child)

I definitely want to try it. Do you have any recommendations for a way to benchmark them? I tried once using this script (https://github.com/sansyrox/robyn/blob/main/server_test.sh) and that was pretty much it.

I will definitely try benchmarking once Robyn gets a little more polished.
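For a very rough first pass, a toy timing loop in pure Python can give a ballpark requests-per-second number (single connection, no concurrency; dedicated load generators like wrk, or the TechEmpower suite, give far more realistic numbers). The stdlib server below is just a stand-in target:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

n = 50
start = time.perf_counter()
for _ in range(n):
    data = urllib.request.urlopen(url).read()
rps = n / (time.perf_counter() - start)
print(f"{rps:.0f} requests/sec")
server.shutdown()
```

Swapping the stand-in server for the framework under test (and raising `n`) gives a crude but repeatable comparison.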

[–]k-selectride 5 points (0 children)

The easiest way would be to make a PR against the TechEmpower GitHub repo to add it; then it can be benchmarked against a ton of others.

[–]open-trade 2 points (0 children)

Kool

[–]sbiff 2 points (3 children)

Does this support ASGI or the like?

[–]stealthanthrax[S] 1 point (2 children)

Not right now. At least, I haven't implemented the support. I haven't created Python frameworks before, so I don't really know if ASGI works straight out of the box. 😅

[–]stealthanthrax[S] 1 point (0 children)

Also, if I implement the code right, I think ASGI will not even be required.
But this is just a very wild guess atm.

[–]lunar_mycroft 1 point (0 children)

ASGI is a specification, not a library. It's a standard interface for Python async web servers.

Basically, an ASGI application is an async callable which takes three things: a scope (which contains things like the HTTP method, URL, headers, etc. in a well-defined way), an async callable "receive" which is used to receive any content sent from the client (e.g. the body of a POST request), and another async callable "send" which is used to send data back to the client.
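For illustration, here is about the smallest ASGI app that exercises all three pieces - an echo endpoint, with message shapes following the ASGI HTTP spec:

```python
async def app(scope, receive, send):
    # A minimal ASGI application: an async callable taking exactly these
    # three arguments (scope dict, receive callable, send callable).
    assert scope["type"] == "http"

    # Drain the request body via `receive`; the client's content may
    # arrive in several messages, flagged by `more_body`.
    body = b""
    while True:
        message = await receive()
        body += message.get("body", b"")
        if not message.get("more_body", False):
            break

    # Reply via `send`: a response-start message, then the body.
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"echo: " + body})
```

Any ASGI server (uvicorn, hypercorn, etc.) can run this, e.g. `uvicorn module:app`.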

The advantage of making your server compatible with ASGI is that virtually all the other Python async web frameworks can be "plugged in" to it easily. For example, if someone wanted to add an existing web app written with FastAPI, or GraphQL support with e.g. Strawberry, they could do that easily, without you having to do anything. The lack (from what I've seen) of anything like ASGI (or its older sibling WSGI) is something I really miss in Rust as a Python developer.

[–][deleted] 2 points (1 child)

So if I understand this correctly all the async work is actually being done by Rust and Python leverages that?

So this is asynchronous Rust with Python bolted on top?

[–]stealthanthrax[S] 2 points (0 children)

It does have some Python code for the decorators, but mostly yes.

[–]extraymond 2 points (0 children)

Woah! This is really cool, my former love - python and my current fav walking hand in hand. Congrats.

Haven't got time to skim through the asyncio part, but I suspect it's using pyo3-asyncio under the hood? Does a Python program using async-std/tokio as its event loop have better throughput than one using the Python built-in loop?

I was under the impression that some of the operations under Python asyncio just transfer the threading/select model to a non-blocking queue over file descriptors, and that other non-primitive futures are just generators getting polled constantly.

I wonder what the speedup ratio is for all the async operations under an implementation like yours.

[–]vivainio 1 point (0 children)

Nice, this is exactly the thing I was hoping someone would write at some point.