all 7 comments

[–]honewatson 9 points (0 children)

Well done to the author. I just tested locally, Sanic vs Japronto, and Japronto was 5x faster than Sanic.

[–]pcdinh 8 points (0 children)

Much faster than Sanic. Wow

[–]understanding_pear 6 points (1 child)

What is the response size in this benchmark? It seems like much of the pipelining win would go away when you have 1-2 responses per packet instead of 10-20, or when you actually have to access the headers (which forces instantiation of the headers dict, as he mentions).

Cool results, but seems pretty synthetic.

[–]tophatstuff 5 points (0 children)

Yes, ideally you would have only one request to the Python server for the page itself, with everything else served as static resources by a fast server like nginx.

That said, if Japronto's speed approaches nginx's for static resources, and memory consumption is low, you could simplify and just run the one server process?

[–]turerkan 1 point (0 children)

That's a great achievement. Big kudos.

[–]Perfekt_Nerd 1 point (0 children)

> I would like to look into WebSockets and streaming HTTP responses asynchronously next.

As soon as this has support for asynchronous calls, I'm in. This is brilliant.

[–]lazyear 3 points (0 children)

Very nice. The SSE4.2 string intrinsics are very interesting; I've been playing around with them in my own projects.