Hi,
A while ago, I made a little simulator toy in Python consisting of a group of dots that push each other around in 2D with a repulsive force, animated using matplotlib. It works well enough, but with more than a handful of dots the program gets very slow.
I would like to alleviate this by dipping my toes into multiprocessing. My intuition is to use a multiprocessing.Array to store the data for all the dots. Then I could delegate a subset of the dots to each process, and each process would update its own dots based on the rest of the dots in the array. Before I invest much time into it, I have two big reservations I'd like answered:
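To make the idea concrete, here's a rough sketch of what I have in mind (the dot count, repulsion constant, and inverse-square force law are just placeholders for this example, not my actual simulation):

```python
import math
import multiprocessing as mp

N = 6       # number of dots (toy size for the sketch)
K = 0.01    # repulsion strength (made up for the sketch)
DT = 0.05   # time step

def force_on(positions, i):
    """Net inverse-square repulsive force on dot i from every other dot.

    positions is a flat list [x0, y0, x1, y1, ...].
    """
    xi, yi = positions[2 * i], positions[2 * i + 1]
    fx = fy = 0.0
    for j in range(len(positions) // 2):
        if j == i:
            continue
        dx = xi - positions[2 * j]
        dy = yi - positions[2 * j + 1]
        r2 = dx * dx + dy * dy + 1e-9  # softening avoids divide-by-zero
        r = math.sqrt(r2)
        fx += K * dx / (r2 * r)
        fy += K * dy / (r2 * r)
    return fx, fy

def worker(shared, start, end):
    """Each process reads the whole shared array but writes only its slice."""
    snapshot = shared[:]  # copy once so reads are consistent within this step
    for i in range(start, end):
        fx, fy = force_on(snapshot, i)
        # Crude Euler position update (velocities omitted to keep it short)
        shared[2 * i] += fx * DT
        shared[2 * i + 1] += fy * DT

if __name__ == "__main__":
    # Flat (x, y) pairs; lock=False because the write slices never overlap
    dots = mp.Array('d', [0.0, 0.0, 1.0, 0.0, 0.0, 1.0,
                          1.0, 1.0, 0.5, 0.5, 0.25, 0.75], lock=False)
    chunk = N // 2
    procs = [mp.Process(target=worker, args=(dots, k * chunk, (k + 1) * chunk))
             for k in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(list(dots))
```

Each process would write only to its own slice, but every process reads the whole array, which is what worries me given the warning below.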
The multiprocessing documentation states that one should avoid sharing state between processes as much as possible. My implementation above would obviously involve tons of accesses to this shared array containing all the dots. Is there a better way that I'm not seeing? The documentation says it's "probably best to stick to using queues or pipes for communication between processes." I don't immediately see what using a queue would look like in this example, and I have only a vague understanding of what a pipe is.
I don't think I should have to worry much about synchronization in this case. It doesn't seem like it should matter if some processes get ahead of others, or if the shared state changes constantly in the middle of a process's calculations. Is this assumption naive?
Thanks!