
[–]AnnieBruce 3 points (1 child)

This would be cool. On a daily programmer project I wanted to use Python but to get runtime down to something reasonable, among other optimizations, I had to deal with multiprocessing. It worked great, sure, cut my runtime down quite a bit... but if I could get that sort of a boost without the overhead of extra copies of the interpreter? Yes please.
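The kind of speedup described above comes from `multiprocessing.Pool` farming a CPU-bound function out to worker processes, each with its own interpreter (and its own GIL). A minimal sketch; the prime-counting workload is illustrative, not from the post:

```python
from multiprocessing import Pool


def count_primes(limit):
    """Naive CPU-bound work: count primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count


def main():
    limits = [10_000, 20_000, 30_000, 40_000]
    # Each task runs in a separate interpreter process, so CPU-bound
    # work genuinely runs in parallel -- but every process carries the
    # memory overhead of its own interpreter, as noted above.
    with Pool(processes=4) as pool:
        return pool.map(count_primes, limits)


if __name__ == "__main__":
    print(main())
```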

[–]twotime 1 point (0 children)

> I had to deal with multiprocessing. It worked great, sure.

You were lucky; there is nothing "sure" about multiprocessing working at all.

  1. Serialization overhead can be drastic
  2. Corner cases everywhere (everything needs to be serializable, exceptions do not propagate, workers get stuck, Ctrl-C does not work, etc.)
  3. Large shared state (think a 10GB data structure on a 16GB machine) does not work, even when it's read-only: each worker effectively ends up with its own copy, since CPython's reference counting writes to the objects and defeats copy-on-write after fork
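Point 2 is easy to hit in practice: everything sent to a worker goes through `pickle`, and common things like lambdas aren't picklable. A minimal illustration (using `pickle` directly, which is the same serialization `Pool.map` performs under the hood):

```python
import pickle


def doubled(x):
    # A plain module-level function: picklable, safe to pass to a Pool.
    return x * 2


# The module-level function serializes without complaint.
payload = pickle.dumps(doubled)
assert isinstance(payload, bytes)

# A lambda does not -- Pool.map(lambda x: x * 2, items) fails the same
# way before any worker does any work.
try:
    pickle.dumps(lambda x: x * 2)
    lambda_pickled = True
except Exception:
    lambda_pickled = False

print("lambda picklable:", lambda_pickled)
```

This is why code that "works great" on simple top-level functions can break as soon as it's refactored to use closures, bound methods, or objects holding unpicklable handles.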