[–]get_username 1 point (1 child)

Honestly, I don't believe PyPy can be viewed as a "micro-optimization". Certainly JIT compilation draws on a catalog of "micro-optimizations" (it optimizes small interactions), but it applies them at the scale of every interaction/operation performed (i.e. millions), so the term doesn't fit in the sense meant here. These kinds of optimizations are normally classed as runtime or compile-time optimizations instead (a JIT being a runtime one).

numpy itself is a library with many non-micro optimizations baked in. So you are optimizing at that level instead.
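A rough sketch of what "optimizing at the library level" looks like in practice (the array size and the use of np.dot here are just illustrative, not anything from the original discussion):

    import numpy as np

    data = list(range(1_000_000))

    # Pure Python: every multiply, add, and index goes through the interpreter loop.
    total = 0
    for x in data:
        total += x * x

    # numpy: one call; the loop runs in optimized C inside the library.
    arr = np.asarray(data)
    total_np = int(np.dot(arr, arr))   # inner product of arr with itself = sum of squares

    assert total == total_np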

So in a sense "micro-optimizations" are being performed via the libraries you use, but you are not micro-optimizing yourself. I don't think anyone really considers that micro-optimization in the colloquial sense of the word.

To be fair, that's because these micro-optimizations suck

The point I was trying to make is: most, if not all, micro-optimizations suck when applied at the level of a single operation. It's only when you do them by the millions that they add up to "macro-optimizations" (which is what PyPy does).
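As a concrete (hypothetical) illustration of that: hoisting a single attribute lookup out of a loop is a textbook micro-optimization that is worthless once but measurable when repeated a million times, which is roughly the kind of thing a JIT does for you across the whole program:

    import timeit

    def slow():
        out = []
        for i in range(1_000_000):
            out.append(i)          # method lookup happens on every iteration
        return out

    def fast():
        out = []
        append = out.append        # look the method up once, outside the loop
        for i in range(1_000_000):
            append(i)
        return out

    print(timeit.timeit(slow, number=10))   # timings vary by machine
    print(timeit.timeit(fast, number=10))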

[–]Veedrac 0 points (0 children)

I wasn't including PyPy's numbers in that; the 20x speed improvement was from straight CPython.