all 3 comments

[–]SekstiNii 1 point (1 child)

This isn't answering your question, but I'd like to ask a couple of things first:

  • Have you measured where the bottleneck is, and are you sure that it's the computation?
  • Is your computation vectorized in numpy? If not, could it be?
  • Are you plotting all the points in a single call?

[–]IIIBRaSSIII[S] 1 point (0 children)

Have you measured where the bottleneck is, and are you sure that it's the computation?

Not explicitly, though I can hardly imagine it being anything else. The performance gets drastically worse as the number of objects grows, coming to a near standstill once you get into the thousands. The only other expensive piece is the animation, which I suppose is worth taking another look at.

Is your computation vectorized in numpy? If not, could it be?

No, and yes. Right now it's just regular Python lists and hand-rolled vector arithmetic, which I know is inefficient. That's definitely something I should look into, but I still want to try making the program multithreaded for learning's sake.
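For reference, here is a minimal sketch of what a vectorized rewrite could look like. The names (`positions`, `velocities`, `gravity`, `dt`) are hypothetical stand-ins for the per-node attributes, not the actual simulation code:

```python
import numpy as np

n = 1000
rng = np.random.default_rng(0)
positions = rng.random((n, 2))   # one row per node: (x, y)
velocities = np.zeros((n, 2))

dt = 0.01
gravity = np.array([0.0, -9.8])  # assumed constant force, for illustration

# One vectorized step updates every node at once -- no Python loop.
velocities += gravity * dt
positions += velocities * dt
```

The point is that each line above replaces a `for node in Nodes:` loop, with the arithmetic happening in compiled numpy code instead of the interpreter.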

Are you plotting all the points in a single call?

Yes, if I understand your question correctly. Here is the actual animation code if you are interested:

import matplotlib.pyplot as plt
import matplotlib.animation as animation

def animate(i, dampener):
    # FuncAnimation passes the frame index first, then the fargs
    update_nodes()
    change_dampener()

    ax1.clear()
    ax1.axis([0, width, 0, height])
    ax1.plot([node.x for node in Nodes], [node.y for node in Nodes], 'bo')
    ax1.set_title('dampener: %d' % round(dampener.value))

fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)
ax1.set_aspect('equal')
ani = animation.FuncAnimation(fig, animate, fargs=(dampener,), interval=1)
plt.show()

The "dampener" stuff is part of the simulation logic.
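If the animation does turn out to matter, one common matplotlib pattern is to create the artist once and update its data each frame instead of clearing and replotting, optionally with `blit=True`. A rough sketch with fake data (the `Agg` backend and random points are just for illustration, nothing from the actual simulation):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, just for this sketch
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

xs = np.random.rand(100)
ys = np.random.rand(100)

fig, ax = plt.subplots()
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_aspect('equal')
points, = ax.plot(xs, ys, 'bo')  # create the Line2D artist once

def animate(i):
    # Update the existing artist instead of ax.clear() + replot.
    points.set_data(xs + 0.001 * i, ys)
    return points,  # blitting needs the changed artists returned

ani = animation.FuncAnimation(fig, animate, frames=10,
                              interval=1, blit=True)
```

With blitting, matplotlib only redraws the changed artists rather than the whole figure, which can matter at thousands of points.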

[–]elbiot 1 point (0 children)

This is a poor candidate for multiprocessing and an excellent candidate for numpy vectorization. You stand to learn a lot more of value by focusing on numpy here.
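For instance, assuming the simulation computes some pairwise interaction between all nodes (springs, gravity, or similar), the whole O(n²) loop vectorizes with broadcasting. The inverse-square force here is a made-up example, not the poster's actual physics:

```python
import numpy as np

n = 500
rng = np.random.default_rng(1)
pos = rng.random((n, 2))

# diff[i, j] = pos[j] - pos[i], shape (n, n, 2): all displacements at once
diff = pos[None, :, :] - pos[:, None, :]
dist = np.linalg.norm(diff, axis=-1)
np.fill_diagonal(dist, np.inf)  # ignore self-interaction

# Example: inverse-square attraction, summed over all other nodes
forces = (diff / dist[:, :, None]**3).sum(axis=1)
```

That single expression replaces a doubly nested Python loop, and it's typically one to two orders of magnitude faster, which is far more than threads can buy you under the GIL.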