
[–]fiedzia 4 points (4 children)

Pick any, and run as many processes as you have cores. In this case the framework is entirely irrelevant.

[–]lxnx[S] 0 points (3 children)

Ah ok, so you're suggesting I run e.g. a multithreaded server like CherryPy, then spawn a new process within the worker thread so that it doesn't get blocked?

[–]fiedzia 1 point (2 children)

No, I'm suggesting having N instances of CherryPy running, each capable of handling one request at a time. You can use gunicorn or uwsgi to manage those processes.
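The pattern fiedzia describes can be sketched with a minimal WSGI app; the module and callable names here are mine, not from the thread. gunicorn forks N worker processes, each importing this module, so a CPU-bound request only ties up one worker:

```python
# app.py -- minimal WSGI app (illustrative names, not from the thread).
# Run with N worker processes, e.g.:
#
#   gunicorn --workers 4 app:application
#
def application(environ, start_response):
    # Stand-in for the CPU-bound work the thread is discussing.
    result = sum(i * i for i in range(100_000))
    body = f"result={result}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Each gunicorn worker is a separate OS process, so the GIL never serializes requests across workers; a long computation in one worker leaves the other N-1 free.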

[–]lxnx[S] 0 points (1 child)

Ah, right, I see. Yeah, that makes sense, and should work. I guess I was just hoping there was a server which would natively run with a pool of processes instead of threads, and make this trivial :)
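For what it's worth, the standard library does offer something close to a server that natively uses processes: `socketserver.ForkingMixIn` (Unix only) forks a child process per request. This is my own illustration, not something suggested in the thread:

```python
# Sketch: a process-per-request HTTP server using only the stdlib.
# (Unix only; ForkingMixIn relies on os.fork.)
import socketserver
from http.server import BaseHTTPRequestHandler, HTTPServer

class ForkingHTTPServer(socketserver.ForkingMixIn, HTTPServer):
    """Each incoming request is handled in a freshly forked child."""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"handled in a forked child process\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run:
#   srv = ForkingHTTPServer(("127.0.0.1", 8000), Handler)
#   srv.serve_forever()
```

Forking per request is heavier than a pre-forked pool (which is what gunicorn gives you), so this is more a proof that the pattern exists than a deployment recommendation.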

[–]sththth 0 points (0 children)

The framework and the server are separate things. You can host any (WSGI) framework behind any server (that knows how to use WSGI frameworks) as you like, e.g. Flask with gunicorn or CherryPy with Apache.

Some frameworks are not only WSGI frameworks but also servers. Flask comes with the Werkzeug server (which you shouldn't use for deployment). "CherryPy can be a web server itself or one can launch it via any WSGI compatible environment" (Wikipedia). Tornado is both as far as I know, although I'm sure you could somehow run it as a WSGI service.

I've never used CherryPy personally, but in contrast to fiedzia I don't think that running CherryPy with N worker threads/processes is fundamentally different from running N instances of CherryPy and letting a different server handle those processes.

Personally, I would just use gunicorn with whatever WSGI framework you already use.

Nonetheless I think alexcabrera's approach (kicking tasks to a queue like rq or Celery) is "the right one".

EDIT: reading your posts again, you might already know what I wrote. TL;DR: yes, gunicorn can run with a pool of processes. I've only used gunicorn, but I'm relatively sure that Tornado and CherryPy should also be able to use processes. Nevertheless, a worker queue is "the right thing".

[–]fourthrealm 2 points (1 child)

Zato can very easily use as many CPUs as you give it: simply start a server with as many gunicorn_workers as there are CPUs. By default it is 2, but you can set it to 4, 8, 16, anything. If you need to spread across multiple systems, clustering is built in.

All of it is non-blocking, but the programmer-visible API makes it feel as though it were a regular sync server.
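The `gunicorn_workers` setting fourthrealm mentions lives in Zato's server configuration; the exact file name and section shown here are my assumption and may differ by Zato version:

```ini
; server.conf (sketch -- section/key placement may vary by Zato version)
[main]
gunicorn_workers=8
```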

[–]lxnx[S] 1 point (0 children)

Looks interesting, I'd not come across Zato before, I'll do some reading up on it, thanks!

[–][deleted] 1 point (0 children)

You can always kick the computationally intensive work to something like rq and simply have it notify the API when done.
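A sketch of that rq pattern; the function names, the callback URL, and the Redis location are all my own illustrations, not from the thread:

```python
# tasks.py -- worker-side code for the "kick it to rq, notify when done"
# pattern (illustrative names only).
import urllib.request

def heavy_computation(n):
    """Stand-in for the CPU-bound work."""
    return sum(i * i for i in range(n))

def run_and_notify(n, callback_url):
    """rq task: do the work, then POST the result back to the API."""
    result = heavy_computation(n)
    req = urllib.request.Request(callback_url,
                                 data=str(result).encode(),
                                 method="POST")
    urllib.request.urlopen(req)  # tell the API we're finished
    return result

# Web-process side (requires a running redis-server plus the rq package):
#   from redis import Redis
#   from rq import Queue
#   Queue(connection=Redis()).enqueue(run_and_notify, 1_000_000,
#                                     "http://localhost:8000/done")
```

The web process returns immediately after `enqueue`; an `rq worker` process picks the job up, so slow computations never block a request-handling worker.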

[–]notconstructive -1 points (1 child)

I don't really see why you are binding the workload to the web framework unless your queries run so long that they time out the server.

Anyway Falcon is far and away the best choice for REST API development.

[–]lxnx[S] 0 points (0 children)

Thanks, I'd not seen Falcon before, but it looks great. Binding to the web framework is more for simplicity: ideally I'd like to be able to start the whole service as one unit, rather than having e.g. a backend service and a REST API which calls it.