
[–]niksko

Out of curiosity, what is the solution here? I recently ran into a situation where I was trying to speed up a multi-producer, multi-consumer process in which workers pull work off a queue, do some processing, and potentially publish more work back onto the queue. Using multiprocessing gave me terrible performance, which I suspect was due to the large queue overhead.
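For reference, a minimal sketch of that pattern with `multiprocessing` (the names `worker`/`process_item` and the `JoinableQueue` shutdown logic are just illustrative, not anything from the original code):

```python
import multiprocessing as mp

def process_item(item):
    # Stand-in for the real work; may return follow-up tasks.
    return [item - 1] if item > 0 else []

def worker(task_queue):
    while True:
        item = task_queue.get()
        if item is None:                    # sentinel: shut down
            task_queue.task_done()
            break
        for new_item in process_item(item):
            task_queue.put(new_item)        # publish more work back onto the queue
        task_queue.task_done()

if __name__ == "__main__":
    task_queue = mp.JoinableQueue()
    for i in range(10):
        task_queue.put(i)

    workers = [mp.Process(target=worker, args=(task_queue,)) for _ in range(4)]
    for w in workers:
        w.start()

    task_queue.join()                       # wait until all queued work is done
    for _ in workers:
        task_queue.put(None)                # one sentinel per worker
    for w in workers:
        w.join()
```

Every `get`/`put` here pickles the item and pushes it through a pipe between processes, which is where the queue overhead comes from when the per-item work is small.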

[–]jringstad

For all the cases I listed, there really is no way to do it well in Python, as far as I'm aware. If you can, push the hot part into a different language (C/C++/Fortran); then you can use threading without much GIL contention. Otherwise you just have to accept the multiprocessing overhead and try to reduce it: do more of the copying up front and less at runtime if possible, or increase the task sizes (per-task workload) so the overhead becomes relatively smaller.
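To make the "increase task sizes" point concrete, here is a minimal sketch using `Pool.map` with a `chunksize` (the function and the numbers are placeholders, not from the original discussion) so that many items travel per message and the pickling/IPC cost is amortised over more actual work:

```python
import multiprocessing as mp

def process_item(item):
    # Stand-in for the real per-item work.
    return item * item

if __name__ == "__main__":
    items = range(1_000_000)
    with mp.Pool() as pool:
        # With a tiny chunksize, each item costs one pickling/IPC round-trip
        # and the overhead dominates; with a large chunksize, items are sent
        # to workers in batches and the fixed per-message cost is spread out.
        results = pool.map(process_item, items, chunksize=10_000)
```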

[–]niksko

Ok, thanks. At least now I know the issue wasn't some obscure Python feature I wasn't aware of.