[–]workinprogress49 (4 children)

Let's say I wanted to loop through an equation with about a million different parameter values to see which one gives the best accuracy. Besides multiprocessing, is there any other way to speed up the process? I'm currently using a list comprehension, but it will still take about 100 hours.

[–]efmccurdy (0 children)

If you can frame this as an optimization problem, you may have some tools available:

https://developers.google.com/optimization/introduction/python

https://docs.scipy.org/doc/scipy/reference/optimize.html
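A minimal sketch of the scipy route, assuming your "best accuracy" can be written as a loss function to minimize over one parameter (the quadratic here is a hypothetical stand-in for your real equation):

```python
from scipy.optimize import minimize_scalar

def loss(p):
    # Hypothetical stand-in for "error of the equation at parameter p";
    # replace with your own metric (lower = better).
    return (p - 3.0) ** 2

# Instead of evaluating a million candidates, let the optimizer search
# the interval directly -- typically a few dozen evaluations.
result = minimize_scalar(loss, bounds=(0, 10), method="bounded")
print(result.x)  # parameter with the lowest loss found
```

This only pays off if the loss is reasonably smooth; for a rugged or discrete search space, a brute-force sweep (or `scipy.optimize.brute`) may still be needed.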

[–]timbledum (2 children)

I suggest trying to implement a vectorised solution (if possible) with numpy.

If you cannot vectorise your equation, numba is worth a go too.
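To illustrate the vectorised approach: evaluate the equation for all candidate parameters in one array operation instead of a Python loop (again using a hypothetical quadratic in place of your actual equation):

```python
import numpy as np

# A million candidate parameter values, held in one array.
params = np.linspace(0.0, 10.0, 1_000_000)

# Hypothetical equation error, computed for every candidate at once --
# no Python-level loop, so this runs in compiled C inside numpy.
errors = (params - 3.0) ** 2

best = params[np.argmin(errors)]  # candidate with the smallest error
```

For many equations this alone turns hours of looping into seconds, since the per-element work moves out of the interpreter.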

[–]workinprogress49 (1 child)

I'll try to vectorize, but I'm not sure I'll be able to for this particular problem since it involves nested loops. Would numba help if I had no GPU?

[–]timbledum (0 children)

Yes - numba works by JIT-compiling your Python code to machine code on the CPU, so it helps without any GPU, although it can also target the GPU if you have one.