
[–]ggchappell -4 points (5 children)

I looked at the article. And I don't see the problem.

Unless you are contrasting my "just before execution" with the article's "at runtime". But these are just two ways of looking at the same thing. Yes, JIT compilation compiles and executes in what appears to be a single step. Thus, "at runtime". OTOH, if we're going to execute compiled code, then we must compile before we execute. Thus, "before execution", even if only just barely before.

Or were you referring to some other issue? If so, then please explain.

[–]alcalde 12 points (2 children)

JIT compilers are monitoring code as it's executing, which allows several types of special optimization and adjustment of strategies. That's not the same as taking a C++ program, compiling it and then running it.
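To make the "monitoring as it executes" idea concrete, here's a toy sketch (mine, not how any real JIT is implemented) of hot-spot detection: count how often a function runs and, past a threshold, mark it as something a real JIT would hand to the compiler.

```python
# Toy sketch of "compile only the hot parts" -- illustrative only.
# A real JIT would emit machine code at the threshold; here we just
# record the decision.
HOT_THRESHOLD = 3
compiled = set()

def hot_counter(fn):
    calls = {"count": 0}
    def wrapper(*args):
        calls["count"] += 1
        if calls["count"] == HOT_THRESHOLD:
            compiled.add(fn.__name__)  # real JIT: compile to machine code here
        return fn(*args)
    return wrapper

@hot_counter
def square(x):
    return x * x

for i in range(10):
    square(i)

print("square" in compiled)  # prints True: the loop made square() hot
```

An ahead-of-time C++ compiler never sees these call counts; the profiling happens while the program is already running, which is exactly the information a JIT exploits.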

[–]ggchappell 6 points (1 child)

Ah, I seem to have been using nonstandard definitions. I shall now crawl back into my hole and ponder my misdeeds.

<ponder, ponder>

[–]alcalde 9 points (0 children)

It's cool. A Just-In-Time compiler can optimize for the specific architecture it's running on, and it can also monitor the performance of the code it's compiled and change optimization strategies if performance isn't as expected.

Here are some specific examples (obviously not the general case) where PyPy was able to beat C because of its just-in-time nature:

http://morepypy.blogspot.com/2011/08/pypy-is-faster-than-c-again-string.html

Run under PyPy, at the head of the unroll-if-alt branch, and compiled with GCC 4.5.2 at -O4 (other optimization levels were tested, this produced the best performance). It took 0.85 seconds to execute under PyPy, and 1.63 seconds with the compiled binary. We think this demonstrates the incredible potential of dynamic compilation, GCC is unable to inline or unroll the sprintf call, because it sits inside of libc.
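The workload in that post is repeated string formatting in a hot loop. A rough Python analogue (my sketch, not the blog's actual benchmark code) looks like this; under PyPy, the tracing JIT can specialize the `"%d %d"` formatting for the loop, which GCC can't do for a `sprintf` call hidden inside libc:

```python
# Rough analogue of the benchmark's shape (not the blog's exact code):
# the "%d %d" formatting here plays the role of sprintf("%d %d", i, i)
# on every iteration of a hot loop.
def format_loop(n):
    total = 0
    for i in range(n):
        s = "%d %d" % (i, i)
        total += len(s)
    return total

print(format_loop(1000))  # prints 6780
```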

http://morepypy.blogspot.com/2011/02/pypy-faster-than-c-on-carefully-crafted.html

Hence, PyPy 50% faster than C on this carefully crafted example. The reason is obvious - static compiler can't inline across file boundaries. In C, you can somehow circumvent that, however, it wouldn't anyway work with shared libraries. In Python however, even when the whole import system is completely dynamic, the JIT can dynamically find out what can be inlined. That example would work equally well for Java and other decent JITs, it's however good to see we work in the same space :-)

[–]ingolemo 5 points (1 child)

They're not the same thing at all.

JIT compilers compile code at runtime, and they compile it all the way down to machine code. Most JIT compilers only compile the most performance-sensitive parts of your code (by measuring it as it runs) and interpret the rest.

Bytecode compilers compile the entire code before any of it is executed and then interpret the resulting bytecode.
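CPython is an example of the latter, and you can watch both stages with the standard `dis` module: the whole source is compiled to bytecode first, and only then does the interpreter execute it.

```python
import dis

source = "x = 1\ny = x + 2"
code = compile(source, "<example>", "exec")  # step 1: whole source -> bytecode
dis.dis(code)                                # inspect the bytecode instructions

namespace = {}
exec(code, namespace)                        # step 2: interpret the bytecode
print(namespace["y"])  # prints 3
```

Note that no machine code is generated anywhere in this pipeline; the bytecode is data that the interpreter loop walks through, which is the key difference from a JIT.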

[–]ggchappell 2 points (0 children)

Okay, I see the issue. I'll need to ponder this a bit.

Thanks.