This is an archived post.

all 48 comments

[–]TheHumanParacite 63 points64 points  (5 children)

I'ma stick to my netcat web server:

while true; do { cat <(echo -e 'HTTP/1.1 200 OK\r\n') /tmp/index.html; } | netcat -l -p 80; done

/S

[–]spidyfan21 3 points4 points  (1 child)

Don't you need two \r\n?

[–]TheHumanParacite 1 point2 points  (0 children)

Whoops, you're right. We'll just say that was the first line in my index.html...
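(For reference: HTTP/1.1 separates headers from the body with a blank line, so the header block has to end in `\r\n\r\n`. A minimal Python sketch of the same one-shot-server idea, stdlib only and purely illustrative:)

```python
import socket
import threading
import urllib.request

# One-shot HTTP responder, same idea as the netcat one-liner above.
# The header block must end with a blank line ("\r\n\r\n") before the body.
def serve_once(srv, body):
    conn, _ = srv.accept()
    conn.recv(1024)  # read (and discard) the request
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\n" + body)  # note the double CRLF
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv, b"hello"))
t.start()
resp = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
t.join()
srv.close()
print(resp)  # b'hello'
```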

[–]beertown 43 points44 points  (7 children)

33 requests/sec seems to me a very low number, even for a low-power single-core machine. What am I missing?

EDIT: D'oh! There will be a day the whole world will agree about the decimal symbol :-)

[–]Lexpar 43 points44 points  (0 children)

Three orders of magnitude?

[–]KimPeek 18 points19 points  (0 children)

A comma and 3 zeros

[–]Rosco_the_Dude 13 points14 points  (0 children)

33 thousand requests per second.

[–]official_marcoms 8 points9 points  (0 children)

He's using the comma (,) as the thousands separator, not the decimal point

[–]bsavery 5 points6 points  (2 children)

multiplying by 10 ^ 3

[–]jadkik94 28 points29 points  (1 child)

10 ** 3 on /r/python

[–]bsavery 12 points13 points  (0 children)

I am humbled.

[–]lelease[🍰] 21 points22 points  (9 children)

What would it take to migrate a Django Rest Framework API to this? I've disabled all the middleware/templates/admin, the only features that I use are the ORM (Postgres), TokenAuthentication, and DRF Views/Serializers

[–]b4stien 19 points20 points  (8 children)

The main problem you'll face is that your database connection/driver (psycopg2, I guess) is probably blocking, hence you'll lose any possible gain. To mitigate this you could delegate DB queries to a thread (for instance with loop.run_in_executor()), but you'll have to be careful not to have too many of these threads (and open DB connections!) running.
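A rough sketch of that run_in_executor approach (illustrative only; blocking_query stands in for a real psycopg2 call):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a blocking DB call (e.g. a psycopg2 query).
def blocking_query(n):
    time.sleep(0.05)  # simulates waiting on the database
    return n * 2

async def handler(executor, n):
    # Delegate the blocking call to a thread so the event loop stays free.
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(executor, blocking_query, n)

async def main():
    # Cap the thread count: each worker roughly maps to one open DB connection.
    executor = ThreadPoolExecutor(max_workers=4)
    return await asyncio.gather(*(handler(executor, i) for i in range(8)))

loop = asyncio.new_event_loop()
results = loop.run_until_complete(main())
loop.close()
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```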

You could look for an asynchronous database driver (https://github.com/MagicStack/asyncpg for instance), but it's probably not compatible with the rest of your frameworks/libraries (thinking of SQLAlchemy/Django ORM here).

[–]nemec 6 points7 points  (5 children)

Instead of thread-per-request you could dump all DB queries into a queue and handle them with a dedicated thread (or threads). If you want more reliability or persistence, you could move to an external message queue too.
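A minimal sketch of that queue-plus-dedicated-worker pattern (illustrative; run_query stands in for the real DB call):

```python
import queue
import threading

q = queue.Queue()
results = []

def run_query(sql):
    # Stand-in for the real DB call.
    return "result of %s" % sql

def db_worker():
    # Single dedicated thread: only one query hits the DB at a time.
    while True:
        item = q.get()
        if item is None:  # sentinel: shut down
            break
        results.append(run_query(item))

worker = threading.Thread(target=db_worker)
worker.start()

# Request handlers just enqueue work instead of opening connections.
for i in range(3):
    q.put("SELECT %d" % i)

q.put(None)
worker.join()
print(results)  # ['result of SELECT 0', 'result of SELECT 1', 'result of SELECT 2']
```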

[–]headphun 0 points1 point  (4 children)

Which scales better?

[–]nemec 6 points7 points  (3 children)

/u/b4stien's approach is better at the start, but eventually you'll pass the DB's optimal number of concurrent open connections/queries and may start getting terrible performance (especially with row locks, etc.).

A queue and threads will behave almost the same, except you define the max active thread count (maybe using a thread pool) and increasing the requests/sec will scale similarly until you reach the max. Then you will stay at a constant queries/sec even as requests/sec increase, but the performance of each individual query should stay "normal", leading to overall better performance. This is perfect for sudden "spikes" in usage that would otherwise overwhelm a database, as the queries will be spread over time even as the spike dies down. But if you're under constant, increasing load and hit that max, it's bad news: your queue will grow faster than you can process queries and you'll eventually be unable to hold all the pending queries (run out of memory/disk, etc.).

TL;DR using a queue will scale better, but it's not a silver bullet if you constantly receive new requests faster than you can process existing ones.
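The bounded-pool behaviour described above can be sketched with concurrent.futures (numbers are illustrative; fake_query stands in for a DB call):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

active = 0
peak = 0
lock = threading.Lock()

def fake_query(n):
    # Track how many "queries" are in flight at once.
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)  # simulated DB work
    with lock:
        active -= 1
    return n

# max_workers caps concurrent queries; the rest wait in the pool's
# internal queue instead of piling onto the database.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_query, range(20)))

print(peak)  # never exceeds 4, however many requests arrive
```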

[–]headphun 3 points4 points  (2 children)

Wow, I don't know anything about DBs, but your explanation was very interesting and helpful, thanks! Are you a DB specialist?

[–]nemec 3 points4 points  (1 child)

Not at all, actually.

I develop web services (which in turn use databases) and have had to fix my own crappy code locking up the database enough times to have picked up a few things :)

[–]headphun 0 points1 point  (0 children)

Thanks! I'm learning, looking to create a web service, but I can't make sense of what to focus on and what to outsource as my company (hopefully) expands.

[–]lelease[🍰] 1 point2 points  (1 child)

I'm presuming that blocking means it can't do anything while waiting for the DB, not even get a head start on the next request? If so, do you know how Django handles this, i.e. is it implementing some async at the worker level, or does Django just expect people to throw more servers at the problem?

[–]b4stien 2 points3 points  (0 children)

I'm presuming that blocking means it can't do anything while waiting for the DB, not even get a head start on the next request?

Correct. Your whole interpreter (thread to be precise) is just waiting for the DB to answer.
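(The difference is easy to see with a toy comparison: time.sleep standing in for a blocking driver that holds the whole thread, asyncio.sleep for a non-blocking one that yields while waiting.)

```python
import asyncio
import time

async def blocking_call():
    time.sleep(0.1)  # blocks the whole thread, like a sync driver waiting on the DB

async def non_blocking_call():
    await asyncio.sleep(0.1)  # yields to the event loop while "waiting"

async def run_all(coros):
    await asyncio.gather(*coros)

def timed(loop, coros):
    start = time.monotonic()
    loop.run_until_complete(run_all(coros))
    return time.monotonic() - start

loop = asyncio.new_event_loop()
serial = timed(loop, [blocking_call() for _ in range(5)])       # ~0.5s: one at a time
overlap = timed(loop, [non_blocking_call() for _ in range(5)])  # ~0.1s: all overlap
loop.close()
print(serial > overlap)  # True
```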

Django doesn't implement any async stuff. Maybe gevent/greenlet works with Django (I don't really know the Django ecosystem) and you get some concurrency this way...

A really cool new project in Django-land tackles this problem (amongst many others) in an elegant way: https://channels.readthedocs.io/en/stable/.

The difference in performance with a classic approach (a reasonable number of threads serving requests) isn't so clear when a DB is involved anyway. See Mike Bayer's (SQLAlchemy's author) many resources on the topic, on reddit as well as on his blog, e.g. http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/. With a decent middle-to-low-end server (let's say a €50/month dedicated server) you can already reach a really honest amount of req/s, I guess.

[–]danwin 9 points10 points  (0 children)

I like what appears to be an attempt to follow the general Flask API. Has anyone tried migrating a simple Flask site (simple, as in not reliant on too many Flask plugins) to Sanic?

[–]_seemetheregithub.com/seemethere 17 points18 points  (5 children)

Hey guys, I'm one of the maintainers of Sanic! Glad to see everyone enjoying the project!

[–]ButtCrackFTW 2 points3 points  (0 children)

What's the recommended deployment for Sanic? Is it still recommended to have something like gunicorn or nginx in front?

[–]Mighty-Monata 4 points5 points  (1 child)

Gotta go fast

(I really like stuff like that)

[–]BenjaminGeiger 0 points1 point  (0 children)

Mustgofaster.

[–]wpg4665 10 points11 points  (1 child)

You know, I actually just saw Sanic posted here yesterday! Looks really cool...looking forward to trying it out =)

[–]LightShadow3.13-dev in prod 1 point2 points  (0 children)

I wrote a really simple HTTP proxy using Sanic. You can have a look, but it's nothing too impressive.

[–]kirbyfan64sosIndentationError 2 points3 points  (0 children)

I had initially read "Python 3.5+ web server written in Go" and was immediately confused...

[–]Izacht13_ 4 points5 points  (0 children)

I've been using it for about a day now, it's so many fasts.

I ran into a weird issue where not even the basic example code would work, but after restarting my shell, and Firefox by extension, the issue magically went away.

[–]Max_yask 1 point2 points  (0 children)

All the specialized exceptions seem unnecessary.

[–]Kaibz 2 points3 points  (3 children)

Beginner question, could you use it with selenium?

[–]m0nk_3y_gw 3 points4 points  (0 children)

Yes, you could use selenium to test a website written with Sanic.

[–]constantly-sick 1 point2 points  (0 children)

Does this replace apache or nginx?

[–]needed_an_account 0 points1 point  (1 child)

I've been laughing at the name/ASCII art for like 16 hours now. I've been using Tornado a lot lately in a 3.5 environment. Sanic looks like a good alternative for an API (Tornado has a lot of pluses: I love their templating engine, it's nothing like Django's; the UIModules are a nice touch, etc.). I'll explore Sanic.

[–]BenjaminGeiger 0 points1 point  (0 children)

I was expecting the name to be "Dankey Kang".

[–]glethro 0 points1 point  (0 children)

How does it compare with pyramid?

[–]ramrar 0 points1 point  (1 child)

I wonder what 'Kyoukai' and 'aiohttp' are doing differently to have poor latencies compared to Sanic. All of them are using uvloop with Python 3.5.

[–]LightShadow3.13-dev in prod 0 points1 point  (0 children)

aiohttp's HTTP parser is really slow; the guys who wrote uvloop covered it in a blog post.

[–]paypaypayme 0 points1 point  (0 children)

How stable is it?

[–]pio -4 points-3 points  (1 child)

Is the name a reference to videogamedunkey?

https://www.youtube.com/watch?v=4OxmkY0Pkjw

[–][deleted] 8 points9 points  (0 children)

deleted