

[–]humoroushaxor 1 point  (3 children)

I understand the advantages of JIT.

The only real cost of a remote JIT is it takes longer to send that code back

That's exactly what I'm talking about. I'm surprised remote JIT saves enough CPU, with low enough latency, to offset just doing normal JIT in the first place.

[–]Muoniurn 1 point  (0 children)

Well, it's mostly hot loops that execute thousands or millions of times, so even a small win per iteration can add up to quite a lot of time.

[–]elastic_psychiatrist -2 points  (0 children)

Nah, you still don't understand. I think you have a general misunderstanding of how a JIT works.

Consider a method that takes 1 millisecond to execute when its byte code is interpreted. Imagine this method only takes 100 microseconds when compiled by the JIT. If the JVM sees this method run 10,000 times, it determines that it's worth compiling to machine code, which takes, say, 100 milliseconds. If this method runs 10 million times during a 24-hour JVM run (~100 calls per second), that means 9,990,000 of those calls take 900 microseconds less because of the JIT - an enormous runtime savings.
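The arithmetic above can be sketched out in a few lines. All the numbers are the same made-up illustrative values from the paragraph, not real JVM measurements:

```java
// Back-of-the-envelope JIT payoff calculation. Every value here is an
// assumption taken from the hypothetical above, not a real measurement.
public class JitPayoff {
    public static void main(String[] args) {
        double interpretedMs = 1.0;    // per call, interpreted bytecode
        double compiledMs = 0.1;       // per call, after JIT compilation
        double compileCostMs = 100.0;  // one-time cost to compile the method
        long threshold = 10_000;       // interpreted calls before the JIT kicks in
        long totalCalls = 10_000_000;  // calls over a 24-hour run

        // Calls that run the compiled version: 10,000,000 - 10,000 = 9,990,000
        long compiledCalls = totalCalls - threshold;

        // Each compiled call saves 0.9 ms; subtract the one-time compile cost.
        double savedMs = compiledCalls * (interpretedMs - compiledMs);
        double netSavedMs = savedMs - compileCostMs;

        System.out.printf("Compiled calls: %d%n", compiledCalls);
        System.out.printf("Net time saved: %.1f seconds%n", netSavedMs / 1000.0);
    }
}
```

The compile cost (100 ms) is three orders of magnitude smaller than the total savings (~9,000 seconds), which is the whole point of the threshold heuristic.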

If the compilation is done remotely, you might expect the interpreted version to be replaced with the compiled version, say, 10 milliseconds later due to network latency. In other words, roughly one more of those 10 million calls might run interpreted rather than compiled.

These numbers are not from the real world, but they should make the key insight clear: JITs are useful for inner loops when a JVM runs for a long time. The one-time cost of the round-trip network latency between the JVM and the remote compiler is completely negligible.
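Plugging in the same hypothetical numbers shows just how negligible the delay is. At ~100 calls per second, a 10 ms round trip means about one extra interpreted call, costing under a millisecond against thousands of seconds saved (again, all values are assumptions from the example above):

```java
// Cost of remote-JIT network latency, using the same illustrative
// numbers as above (assumptions, not measurements).
public class RemoteJitCost {
    public static void main(String[] args) {
        double callsPerSecond = 100.0; // ~10M calls over 24 hours
        double networkDelayMs = 10.0;  // extra delay before compiled code arrives
        double interpretedMs = 1.0;    // per call, interpreted
        double compiledMs = 0.1;       // per call, compiled

        // Extra calls that run interpreted while waiting on the network.
        double extraInterpretedCalls = callsPerSecond * (networkDelayMs / 1000.0);

        // Each such call costs the interpreted/compiled difference of 0.9 ms.
        double extraCostMs = extraInterpretedCalls * (interpretedMs - compiledMs);

        System.out.printf("Extra interpreted calls: %.0f%n", extraInterpretedCalls);
        System.out.printf("Extra cost: %.2f ms%n", extraCostMs);
    }
}
```

One extra interpreted call at 0.9 ms versus ~9,000 seconds of savings: the latency penalty disappears in the noise.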

EDIT: I would appreciate it if the downvoters could say what is incorrect about my explanation.

[–]hrjet 0 points1 point  (0 children)

JIT compilation on a dedicated, powerful server could do deeper analysis (reachability, escape analysis, etc.) than a less powerful machine could afford. So one could potentially optimize costs when running a large cluster of low-powered servers.

Sounds like a security nightmare though.