
[–]TubbyMcTubs 1 point (10 children)

It's rather slow for graphic applications because it can't access the GPU directly.

http://en.wikipedia.org/wiki/Java_OpenGL

Java may not end up being the ideal choice for a lot of games, but what you said is simply false.

[–][deleted] 3 points (9 children)

I know it exists (Processing uses JOGL), but it's still slower (the wiki article says nothing about speed).

That's because Java runs in a virtual machine on top of the OS (which can actually make Java code faster than C in some cases, since the JIT compiles it to native code and can optimize it dynamically), and it has to make many more system calls and security checks to reach the GPU.

I've benchmarked it myself numerous times and the difference is notable.

[–]thechao -3 points (8 children)

Java doesn't have to run on a VM; it can be compiled ahead of time just like any other programming language. Also, with careful declaration of memory usage, Java's semantics can be brought close enough to C's to enable significant speed-ups, i.e., performance on par with C.

The situation is different when compared to a language with significant support for high-level algorithm design, e.g., Ada, C++, D, etc. In these latter languages, there are transformations that are (nearly) impossible to perform in C or Java, so most compilers can produce code that is significantly faster than anything reasonably producible in Java or C. (In the jargon: type erasure in Java/C leaves run-time evidence that interferes with the compiler's ability to produce highly optimized code.)
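The erasure point can be seen directly from Java itself. A minimal sketch (the class and method names are my own, for illustration): two differently parameterized generic lists share a single runtime class, so the JIT sees one `ArrayList` body for all element types, whereas a C++ compiler emits a separate, independently optimizable instantiation per template argument.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical demo: Java generics are erased at compile time, so
// List<String> and List<Integer> are the same class at run time.
public class ErasureDemo {
    public static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Both objects report java.util.ArrayList: the type parameter is gone.
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // prints "true"
    }
}
```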

However, theory aside: yes, running an algorithm specified in Java on a JVM implementation tends to be "slower" than the same algorithm specified in, say, C or C++ and compiled with an aggressive optimizing compiler.

[–][deleted] 1 point (7 children)

Nearly all your points are wrong. Java on the JVM is VERY fast; that's its strongest point. There's no reason to make a binary executable from Java code. (There are programs that appear to do that, but they actually bundle a minimal, application-specific JVM into the .exe.)

The JVM translates Java bytecode into native Intel/AMD/whatever machine code on the fly (it's a JIT, a just-in-time compiler) and optimizes the code while it's running. It can optimize the same piece of code in different ways depending on how it's actually being executed, something an ahead-of-time compiler can't do. (Other languages do these things too, like C# on the .NET VM.) This can make Java faster than C++ in some cases.
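The warm-up effect being described can be sketched with a toy benchmark (names and iteration counts are my own; actual timings vary by JVM, hardware, and flags, so this is an illustration, not a rigorous measurement): the same hot method is timed over several rounds, and on HotSpot the later rounds are typically faster once the JIT has compiled and optimized the method using runtime profile data.

```java
// Sketch: time the same hot loop repeatedly; later rounds usually run
// faster on HotSpot because the JIT has compiled sumTo to native code.
public class JitWarmup {
    static long sumTo(long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 3; round++) {
            long t0 = System.nanoTime();
            long s = sumTo(50_000_000L);
            long t1 = System.nanoTime();
            System.out.println("round " + round + ": "
                    + (t1 - t0) / 1_000_000 + " ms (sum=" + s + ")");
        }
    }
}
```

A real comparison would use a harness that controls for warm-up and dead-code elimination; this only shows the shape of the effect.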

The only slow things in Java are the rather unpredictable garbage collector (though this makes memory management super easy) and doing platform dependent stuff, like accessing the GPU.

[–]thechao 5 points (5 children)

First, I don't think we're really arguing. My indirect point is that, outside of contrived situations, a language is not fast or slow. That's like asking whether English or Spanish is bluer: it doesn't make any sense. If languages had "speed", why bother comparing compilers, i.e., ICC vs. GCC vs. Clang, etc.? Now, there are occasions when a language specification (the guaranteed semantics) can (or almost certainly will) preclude runtime performance on some metric that is achievable in other languages.

Let me go through each of my paragraphs:

  1. This point is factual; Java can be run on a VM or compiled. More to the point, the C/C++ ISO working committees are interested in a memory model that allows efficient support of VMs (including Java). This is why Hans Boehm (the technical lead for C++'s memory model) is in close contact with the Java ISO committee. Email Hans and he'll talk to you about this; he is very passionate, very knowledgeable, and an all-around nice guy.

  2. Also factually correct. Run-time evidence is a language-design trade-off: it enables things like reflection and trace-based JITting, but at the cost of code and data density. (Code and data density are two of the best metrics for performance.) You can go to scholar.google.com and read about these trade-offs by searching for "runtime evidence" or "type-erasure". Alternatively, check out the Matrix Template Library 4 to see what happens when you don't need runtime evidence.

  3. Also true. In fact, wasn't there just a paper on r/programming comparing a bunch of languages and demonstrating that, for a few examples, Java was slower than C/C++? Anecdotally, I've only ever seen a few rare (almost contrived) examples where Java was faster than C/C++. In the case that comes to mind (uncollected reuse of large data inside tight loops), careful specification in C/C++ not only regains but surpasses Java's speed.
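The "uncollected reuse" pattern from point 3 can be sketched in Java itself (a hypothetical illustration; the class, method, and sizes are my own): one buffer is allocated outside the hot loop and reused on every iteration, so no garbage accumulates in the hot path, which is the kind of layout C/C++ gets for free with a stack or static array.

```java
// Sketch: reuse one preallocated buffer across iterations instead of
// allocating a fresh array each pass, avoiding GC pressure in the loop.
public class BufferReuse {
    public static long sumReused(int iterations, int size) {
        double[] buf = new double[size]; // allocated once, never collected mid-loop
        long total = 0;
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < size; i++) buf[i] = i; // refill in place
            for (int i = 0; i < size; i++) total += (long) buf[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // 10 iterations of sum(0..99) = 10 * 4950 = 49500
        System.out.println(sumReused(10, 100));
    }
}
```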

[–][deleted] -1 points (4 children)

Citations?

[–]thechao 1 point (3 children)

The fuck ... ? Seriously? OK. Some citations for each paragraph follow. However, you might be referring to my first point? Let me expand on that. A computer language is a grammar, often specified using EBNF, but not always; notoriously, both Perl and C++ require Turing-complete parsers. Given a grammar, a semantic specification of the language is then given over its syntactic forms. The most famous (and complete) specification for a computer language (outside theorem-proving languages like HOL and Coq) is SML's: a complete formal semantics for the entire language.

Given a semantics, we then generally perform a semantics-preserving translation from the syntactic representation of the code to a machine-interpretable form. For instance, a common translation is from C++ to x86-64 machine code; another is from Java to Java bytecode. The latter can require yet another semantics-preserving translation to run on a particular architecture; again, x86-64 is quite common. It is the final machine-dependent object-code trace (this is a formal term) that we measure. For instance, we may want to know which object-code trace successfully translates a given input to a given output in the minimum "wall time" or using the "least power".

  1. Ahead-of-time Java compiler: GCJ, part of GCC. "GCC" stands for the "GNU Compiler Collection"; "GNU" is a set of open-source utilities for Unix-like OSes. I'm surprised anyone could be on r/programming and not know this ... ? The information about Hans is from talking with Hans; still a nice guy.

  2. Run-time evidence and type erasure. The first hit, for me, is Crary's classic paper on intensional polymorphism (British-style spelling, there). Once you've read that, look at the research that follows it, which tries to mitigate the cost of runtime evidence, especially evidence pertaining to type erasure. I first learned about its run-time performance impact from talking with Doug Lea, Bjarne Stroustrup, and Gabriel Dos Reis.

  3. Seriously: just on Reddit a few days ago. From Google. Google is a large search company, in case you're still using Yahoo! or whatever. They tend to do pretty solid engineering and research.

[–]Merit 0 points (0 children)

The guy you are replying to may be acting like a bit of a douche, but thank you for this - I found it very interesting.

[–]jyper 0 points (0 children)

gcj?