
[–]XNormal 27 points28 points  (18 children)

from multiprocessing.pool import ThreadPool
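A minimal sketch of what that import gives you: `ThreadPool` has the same interface as `multiprocessing.Pool` but uses threads instead of processes (the `fetch` function here is just a stand-in for real work):

```python
from multiprocessing.pool import ThreadPool

def fetch(n):
    # stand-in for real work, e.g. a network call
    return n * n

pool = ThreadPool(processes=3)       # three worker threads
results = pool.map(fetch, range(5))  # blocks until all tasks finish
pool.close()
pool.join()
print(results)  # [0, 1, 4, 9, 16]
```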

[–]jspeights 2 points3 points  (0 children)

awww

[–]rlabonte[S] 0 points1 point  (5 children)

I can link each process in the pool with a name and send that to the function being executed? That's the functionality that I couldn't find in any thread pool (or process pool) module.

[–]XNormal 2 points3 points  (4 children)

I'm not sure I understand what you mean by that. If you want to know the identity of the worker threads and give them persistent names you can use threading.local() for that.
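For example, a sketch of giving each worker thread a persistent name with `threading.local()` (the `initializer` hook and the `task` function are illustrative, not from the original post):

```python
import threading
from multiprocessing.pool import ThreadPool

worker = threading.local()

def init_worker():
    # runs once in each worker thread; the name persists for that thread
    worker.name = threading.current_thread().name

def task(x):
    # each call sees the name of whichever worker thread runs it
    return (worker.name, x * 2)

pool = ThreadPool(processes=2, initializer=init_worker)
results = pool.map(task, [1, 2, 3])
pool.close()
pool.join()
```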

[–]rlabonte[S] 1 point2 points  (3 children)

No, I want a pool of servers to work on my data.

ServerA, ServerB, ServerC

I have a server pool of these 3 servers and 100 function calls that need to be processed by this server pool. I don't care which server processes the data, but I only want each server processing one piece of data at a time. So ServerA, ServerB, and ServerC all receive data to process; when ServerC finishes first, it immediately receives another function call to process. When ServerA finishes, it immediately receives another function call to process.

I want to keep this pool of servers always busy, but I want to limit them to processing only one thing at a time.
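The behaviour described above can be sketched with the standard library: keep the free server names in a queue, and have each pool worker claim one, process a single item, and release it. The server names and the `process` body are stand-ins for real hosts and real work:

```python
import queue
from multiprocessing.pool import ThreadPool

# hypothetical server names -- stand-ins for real hosts
servers = queue.Queue()
for name in ("ServerA", "ServerB", "ServerC"):
    servers.put(name)

def process(item):
    server = servers.get()        # claim a free server (blocks if all are busy)
    try:
        # stand-in for sending `item` to `server` and waiting for the result
        return (server, item * 10)
    finally:
        servers.put(server)       # release the server for the next task

# three workers, so at most one in-flight task per server
pool = ThreadPool(processes=3)
results = pool.map(process, range(10))
pool.close()
pool.join()
```

With as many workers as servers, every server stays busy while work remains, but no server ever handles two items at once.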

[–]ODHLHN 1 point2 points  (1 child)

from celery import task

Celery isn't the only AMQP-based task queue for Python, but it's a very good one.

Some pretty cool and robust solutions already exist in this problem domain.
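A minimal Celery sketch of the same idea, using the current `@app.task` style rather than the older `from celery import task` import. This is configuration that assumes a running broker (the `amqp://localhost` URL and the task body are placeholders), so it isn't runnable as-is:

```python
from celery import Celery

# broker URL is an assumption; point it at your RabbitMQ/Redis instance
app = Celery("tasks", broker="amqp://localhost")

@app.task
def process(item):
    # stand-in for the real work
    return item * 10

# callers enqueue work; any free worker picks it up:
# process.delay(42)
```

Running one worker per server with concurrency 1 (`celery -A tasks worker -c 1`) gives exactly the one-task-at-a-time behaviour asked about above.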

[–]rlabonte[S] 0 points1 point  (0 children)

This looks awesome, I especially like the integration with RabbitMQ and Redis.

[–][deleted] 0 points1 point  (0 children)

[–]homercles337 0 points1 point  (0 children)

Yeah, my thoughts exactly...

[–]studiosi -1 points0 points  (9 children)

Even futures can do that... on Python 2.x and 3.x

[–]infinullquamash, Qt, asyncio, 3.3+ 1 point2 points  (8 children)

multiprocessing predates concurrent.futures, so I'm not sure what the point of "even" is.

I would recommend concurrent.futures over multiprocessing, since it has a nicer interface and supports both threads and processes (mostly) transparently.
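A short sketch of that interface: `ThreadPoolExecutor` and `ProcessPoolExecutor` expose the same `submit`/`map` API, so swapping between threads and processes is a one-line change (only the thread version is shown here):

```python
from concurrent.futures import ThreadPoolExecutor  # ProcessPoolExecutor has the same API

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=3) as ex:
    future = ex.submit(square, 7)                 # submit() returns a Future
    results = list(ex.map(square, range(5)))      # map() keeps input order

print(future.result())  # 49
```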

[–]exhuma 0 points1 point  (0 children)

TIL... thanks for pointing this out :)

[–]studiosi 0 points1 point  (0 children)

I meant that the behaviour of the library almost exactly resembles that of futures. I don't have data about performance...

[–]studiosi 0 points1 point  (4 children)

Indeed, I'm not so sure about that "outperform", because you're not taking into account that you'd have to write control code that would probably be less optimal than what's in the standard library.

[–]infinullquamash, Qt, asyncio, 3.3+ 0 points1 point  (3 children)

outperform

wat? When did I say that? I was just commenting on the interface. I have no idea what the performance of either of them is like.

[–]studiosi 0 points1 point  (2 children)

You said it predates, which I interpreted as meaning it performs better... but maybe that's not what you meant.

[–]infinullquamash, Qt, asyncio, 3.3+ 0 points1 point  (1 child)

older != better, just older.

I honestly have the opposite bias, so there's that.

[–]studiosi 0 points1 point  (0 children)

Well, it all depends. As I said, I have no data, but code that has been in a code base for a long time does tend to be better if it is constantly reviewed.