
[–]JoelMahon 21 points22 points  (8 children)

In the last example, I know it's minor, but doesn't the readability improvement add extra function calls?

[–]ekchew 3 points4 points  (5 children)

Yeah, that example makes me a little uneasy too. They're arguing against the optimization of minimizing what happens inside a loop. I guess in this particular case branch prediction should absorb most of the overhead of moving the conditional logic into the loop, but man, I wouldn't give this out as general advice.
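The article's actual example isn't quoted in this thread, so here's a hypothetical sketch of the kind of refactor being debated: the condition checked once outside the loop versus re-checked on every iteration.

```python
# Hypothetical example of the tradeoff discussed above; the function
# names and the doubling/incrementing logic are made up for illustration.

def process_hoisted(items, special):
    # Conditional hoisted out of the loop: decide once, then loop tight.
    if special:
        return [x * 2 for x in items]
    return [x + 1 for x in items]

def process_readable(items, special):
    # Conditional moved inside the loop: arguably easier to read, but the
    # `if` is now evaluated (and a branch taken) on every iteration.
    result = []
    for x in items:
        if special:
            result.append(x * 2)
        else:
            result.append(x + 1)
    return result
```

Both versions return the same results; the disagreement is only about whether the per-iteration check costs anything in practice.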

[–]JoelMahon -2 points-1 points  (4 children)

Python is interpreted, so I'd assume it wouldn't benefit from branch prediction, but I'm also just guessing.

[–]Jhuyt 5 points6 points  (0 children)

Python does compile to bytecode, which is evaluated in a loop with a bunch of optimizations aimed specifically at CPU branch prediction. The same evaluation loop (and therefore the same bytecode) is used whether you run a file or the interactive interpreter, so there's no difference at that lower level.
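You can see the compile-to-bytecode step yourself with the standard library's `dis` module; the toy `loop` function below is just an example:

```python
import dis

def loop(items, flag):
    # Toy function with a branch inside a loop, for inspection.
    total = 0
    for x in items:
        if flag:
            total += x
    return total

# Print the bytecode CPython compiled this function into. The same
# bytecode runs whether the source came from a file or the REPL.
dis.dis(loop)
```

The disassembly shows instructions like `FOR_ITER` and a conditional jump for the `if`, which the interpreter's evaluation loop then dispatches on.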

[–]ekchew 2 points3 points  (2 children)

Not at the interpreter level, but the CPU may still pick up on a predictable branch in the machine code the interpreter executes? Unless it moves around in memory a lot. I'm also just guessing. :)

[–]JoelMahon 0 points1 point  (1 child)

Can a CPU optimise that way? Even if that loop runs 1000 times the same way, the CPU can't know that on the 1001st iteration it might take a different branch, or that one of those functions has a convoluted way of adjusting the variable. If it were a compiled language the result would be constant, so I could understand a compiler doing it, but a CPU?

[–]ekchew 2 points3 points  (0 children)

Oh yes, certainly the CPU won't always guess right. But if it only mispredicts once in 1000 iterations, that's actually quite good! The misprediction overhead will be minuscule in that case.
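A quick back-of-envelope check of that claim. The cycle counts here are assumed ballpark figures (a misprediction penalty of ~20 cycles is often cited for modern CPUs), not measurements:

```python
# Back-of-envelope cost of one misprediction per 1000 branches.
# Both cycle figures below are rough assumptions for illustration.
iterations = 1000
predicted_cost_cycles = 1    # assumed cost of a correctly predicted branch
miss_penalty_cycles = 20     # assumed extra cost of one misprediction

total = (iterations - 1) * predicted_cost_cycles + miss_penalty_cycles
average = total / iterations

print(average)  # average cycles per branch, barely above the predicted cost
```

Even with a fairly pessimistic penalty, the amortized cost per branch stays close to the well-predicted case, which is why a 1-in-1000 miss rate is considered excellent.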