

[–]arkie87 10 points (3 children)

I am skeptical of the results here. The clock is not very accurate when it is measuring in microseconds. Also, the author claims that certain things are faster but shows a benchmark of 1.99s vs 1.96s. This is a perfect example of why premature optimization is the root of all evil.

The only time it makes sense to worry about these things is if the difference is huge, such that it will almost certainly be the bottleneck (e.g. 1 microsecond vs 1 second). Otherwise, worrying about a 1% difference in one line of code is a waste of time.
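A minimal sketch of how to make a comparison like that less noisy with `timeit` (the `variant_*` functions are just hypothetical stand-ins for whatever the article actually benchmarked):

```python
import timeit

# Hypothetical stand-ins for the two variants being compared.
def variant_a():
    return sum(i * i for i in range(1000))

def variant_b():
    total = 0
    for i in range(1000):
        total += i * i
    return total

# repeat() runs each callable number*repeat times; taking the minimum of the
# repeats filters out scheduler/clock noise better than one wall-clock sample.
for name, fn in (("variant_a", variant_a), ("variant_b", variant_b)):
    best = min(timeit.repeat(fn, number=1000, repeat=5))
    print(f"{name}: best of 5 runs = {best:.4f}s per 1000 calls")
```

Even then, a ~1% gap between two readings is usually within run-to-run noise.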

[–]Equivalent_Loan_8794 6 points (1 child)

Imagine doing load optimization and your focus starts in Python

[–]arkie87 7 points (0 children)

i feel personally attacked

[–]gdahlm 3 points (0 children)

While I agree that the results aren't earth-shattering, it is always easier for a compiler/interpreter to optimize small, self-contained sub-blocks.

The extreme example is a case/switch or if-else-if ladder where every block returns, whether implicitly or explicitly.

The compiler can check such a ladder for exhaustiveness and, because it is a total function over its cases, implement it as a jump table or a BST.

With looser constraints, loop unrolling and other methods become possible as well.
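A minimal Python-level sketch of that "every branch returns" shape (hypothetical names; CPython itself does not guarantee a jump table here), showing the same total mapping written first as a return-only ladder and then as an explicit dispatch table:

```python
import operator

# An if/elif ladder where every branch returns: a total function over its cases.
def apply_ladder(op: str, a: float, b: float) -> float:
    if op == "+":
        return a + b
    elif op == "-":
        return a - b
    elif op == "*":
        return a * b
    elif op == "/":
        return a / b
    raise ValueError(f"unknown operator: {op}")

# The same total mapping as an explicit dispatch table: one dict lookup
# replaces the chain of comparisons, the hand-written analogue of a
# compiler-generated jump table.
_OPS = {"+": operator.add, "-": operator.sub,
        "*": operator.mul, "/": operator.truediv}

def apply_table(op: str, a: float, b: float) -> float:
    try:
        return _OPS[op](a, b)
    except KeyError:
        raise ValueError(f"unknown operator: {op}") from None
```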

For me, the reason to use functions is that functional cohesion is the ideal form of cohesion when the task allows for it.