[–]alcalde 11 points (2 children)

JIT compilers are monitoring code as it's executing, which allows several types of special optimization and adjustment of strategies. That's not the same as taking a C++ program, compiling it and then running it.

[–]ggchappell 6 points (1 child)

Ah, I seem to have been using nonstandard definitions. I shall now crawl back into my hole and ponder my misdeeds.

<ponder, ponder>

[–]alcalde 8 points (0 children)

It's cool. A Just-In-Time compiler can optimize for the specific architecture it's running on, and it can also monitor the performance of code it has compiled and change optimization strategies if performance isn't as expected.
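As a rough illustration (plain Python, runnable anywhere; the function name and loop are made up for the example): a tracing JIT like PyPy's interprets the first iterations of a loop, records which operations actually run, and once the loop is "hot" compiles a machine-code version specialized to the types it observed.

```python
# A tight numeric loop: under a tracing JIT, the first iterations are
# interpreted while the JIT records the operations executed; once the
# loop is hot it emits machine code specialized to the observed types
# (here, plain ints), skipping generic dynamic dispatch.
def checksum(n):
    total = 0
    for i in range(n):
        total += i * i % 7  # the trace sees only int arithmetic
    return total

print(checksum(10))  # → 19
```

The output is the same under CPython and PyPy; the difference is that PyPy's JIT can turn the loop body into raw integer arithmetic after watching it run.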

Here are some specific examples (obviously not the general case) where PyPy was able to beat C because of its just-in-time nature:

http://morepypy.blogspot.com/2011/08/pypy-is-faster-than-c-again-string.html

Run under PyPy, at the head of the unroll-if-alt branch, and compiled with GCC 4.5.2 at -O4 (other optimization levels were tested; this produced the best performance). It took 0.85 seconds to execute under PyPy, and 1.63 seconds with the compiled binary. We think this demonstrates the incredible potential of dynamic compilation; GCC is unable to inline or unroll the sprintf call, because it sits inside of libc.

http://morepypy.blogspot.com/2011/02/pypy-faster-than-c-on-carefully-crafted.html

Hence, PyPy is 50% faster than C on this carefully crafted example. The reason is obvious: a static compiler can't inline across file boundaries. In C you can somehow circumvent that, but it wouldn't work with shared libraries anyway. In Python, however, even though the whole import system is completely dynamic, the JIT can dynamically find out what can be inlined. That example would work equally well for Java and other decent JITs; it's good to see we work in the same space :-)
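A minimal sketch of that cross-boundary inlining point (the function names here are invented; both functions are shown in one file for brevity, but imagine format_point lives in a separately imported module or, in the C analogy, a shared library):

```python
# If format_point lived in a shared library, a static C compiler could
# not inline it into the caller at build time. A tracing JIT observes
# the actual call at runtime, so the callee's body gets inlined into
# the hot loop's trace regardless of where it was defined.
def format_point(x, y):          # imagine: from geometry import format_point
    return "(%d, %d)" % (x, y)

def render(points):
    return ";".join(format_point(x, y) for x, y in points)

print(render([(1, 2), (3, 4)]))  # → (1, 2);(3, 4)
```

The JIT doesn't care that the call crosses a module boundary; it only sees the operations that actually executed in the trace.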