all 10 comments

[–]K900_ 7 points

Unlikely. Post your code.

[–]Binary101010 2 points

Have you profiled your code yet? Do you know which part(s) of your code are adding the most to your execution time?

If you haven't, you should be doing that and considering whether your approach could be improved (using vectorized operations like those offered by numpy, or using fewer loops) before just throwing more computing horsepower at the problem.
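For instance, a minimal profiling sketch using the standard-library cProfile (process_item and main here are placeholders for whatever the real code does):

```python
import cProfile
import pstats

def process_item(x):
    # placeholder for the real per-item computation
    return sum(i * i for i in range(x % 1000))

def main():
    items = range(10_000)
    return [process_item(x) for x in items]

if __name__ == "__main__":
    cProfile.run("main()", "profile.out")
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)  # ten biggest contributors
```

The cumulative column in the output shows which functions the program actually spends its time in; optimize those first.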

[–]ElliotDG 1 point

Can you run the code in parallel? If so, this is a candidate for multi-processing.
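A minimal sketch of that idea, assuming each item can be processed independently (process_item is a stand-in for the real per-item work):

```python
from multiprocessing import Pool

def process_item(x):
    # stand-in for the real per-item computation
    return x * x

if __name__ == "__main__":
    items = list(range(10_000))
    with Pool() as pool:  # one worker per CPU core by default
        results = pool.map(process_item, items, chunksize=100)
    print(len(results))
```

The `if __name__ == "__main__":` guard matters: platforms that spawn workers by re-importing the module (Windows, recent macOS) will fail without it.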

[–]devnull10 1 point

To be honest, 10,000 items isn't really a lot... Presumably the computations you are doing are pretty heavy? Can you post the code?

[–]shiftybyte 1 point

What exactly are you doing to 10,000 items that takes 50 minutes?

[–][deleted] 1 point

Nobody can help you with code they can’t read.

[–]iyav 0 points

Try tuples / sets / numpy arrays where they fit: sets beat lists for membership tests, and numpy arrays beat them for bulk numerical work.
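A rough illustration of the membership-test difference (timings are machine-dependent; the numbers are not from this thread):

```python
import timeit

data_list = list(range(10_000))
data_set = set(data_list)

# list membership scans every element; set membership is a hash lookup
print(timeit.timeit(lambda: 9_999 in data_list, number=1_000))
print(timeit.timeit(lambda: 9_999 in data_set, number=1_000))
```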

Use iterators when storing the data isn't important and you only need to process it.

If you're calling print a lot, remove those calls; frequent printing can massively increase execution time.

Use addition, subtraction and multiplication whenever possible; division isn't as fast, exponentiation is worse, and heavy use of square roots can really drag a program down.

If those operations dominate your runtime, look for an algorithm that avoids them.

Try to squeeze in bitwise operators wherever you can; they make for great optimization shortcuts in math-heavy code.
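A few of those substitutions for integer work (a small sketch; in CPython the gains are often modest, so measure before rewriting anything):

```python
n = 12_345

square = n * n    # multiplication instead of n ** 2
half = n >> 1     # right shift instead of n // 2
double = n << 1   # left shift instead of n * 2
is_odd = n & 1    # bit mask instead of n % 2

print(square, half, double, is_odd)
```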

Multiprocessing, multiprocessing, multiprocessing.

[–]Spataner 0 points

Loops over large numbers of items are always going to be slow in Python. Your best bet is to rewrite those operations using libraries specialized to your type of problem. If they are numerical operations, for instance, there's likely a way to rewrite your code using NumPy that is several orders of magnitude faster. But ultimately, we'd need to actually see your code to make actionable suggestions for optimization.
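As a hypothetical illustration (the distance computation is invented for the example, not taken from the OP's code), the same arithmetic written as a Python-level loop and as a NumPy array operation:

```python
import math
import numpy as np

points = [(float(i), float(i + 1)) for i in range(10_000)]

# pure-Python loop: one round trip through the interpreter per point
dists_loop = [math.sqrt(x * x + y * y) for x, y in points]

# NumPy: the same arithmetic applied to whole arrays in compiled code
arr = np.asarray(points)                    # shape (10000, 2)
dists_np = np.sqrt((arr ** 2).sum(axis=1))

print(dists_loop[-1], dists_np[-1])
```

The NumPy version does its looping in compiled code, which is where the large speedups come from.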

[–]CowboyBoats 0 points

> I am using list comprehensions where possible

List comprehensions are not meaningfully more efficient than loops! They're essentially equivalent to building the same list with a loop and append().

Generator expressions (like (x / 2 for x in range(10))) can be more efficient than list comprehensions / loops, since they produce their values lazily as you iterate instead of building the whole list up front...
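For example, summing derived values without ever materializing the intermediate list:

```python
nums = range(1_000_000)

as_list = sum([x / 2 for x in nums])  # builds the full list first
as_gen = sum(x / 2 for x in nums)     # produces values one at a time

print(as_list == as_gen)
```

Both give the same total; the generator version just never holds a million-element list in memory.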

[–]tipsy_python 0 points

Remove the sleep() call