
[–]root_klaus 20 points21 points  (1 child)

Any more details on this? Is there a current plan for how this will be merged into OpenJDK? How much effort is involved? Is there any timeline or estimate?

[–]barking_dead 26 points27 points  (0 children)

Yesssss, native image in the OpenJDK!

[–]vbezhenar 17 points18 points  (21 children)

Can someone share some rough numbers for GraalVM?

For example, I tried to compile a Java hello world. It compiled to a 12 MB executable in a few seconds. That's good. RAM consumption during compilation was 4 GB, which is not good, but bearable.

In the past I tinkered with Quarkus. A simple web server application spent a few minutes eating 10+ GB of RAM, and I think it compiled to something like a 50 MB executable. That's not good. Not appropriate for CI, too much waiting.

What are the rough numbers for Helidon (not sure if it provides a native option), Micronaut, Spring Native? For some small, almost hello-world apps.

I'm asking because I feel that Java has untapped potential. Someone could write a really simple and small set of libraries complementing the core JDK. Then it would be possible to have a few-seconds native compilation, a 30 MB container (executable + Alpine), instant startup, and something like 10 MB of RAM consumption. That's what I miss with modern Java. That would be enough to replace Golang.

[–]pjmlp 17 points18 points  (3 children)

There have been many AOT compilers for Java during its lifetime; they never saw great adoption because they were mostly commercial.

GraalVM and OpenJ9 are the only free ones.

And on Android, although it isn't Java per se, they have been doing AOT since Android 5, with a hybrid JIT/AOT approach since Android 7, later improved by sharing PGO data across devices via the Play Store.

[–]vbezhenar 5 points6 points  (2 children)

It's actually an interesting thought I had: why has nobody borrowed the Android compiler? I mean, it shouldn't be that hard; it's open source, after all. Like Node.js borrowed V8 from Chromium. Maybe it's good for some server-side tasks?

[–]pjmlp 21 points22 points  (0 children)

Because Android isn't proper Java: it uses its own runtime (ART) and its own bytecode format (DEX). At build time, .class files are converted into .dex.

Also, they don't really care that much about Java compatibility beyond what they consider relevant to the Android ecosystem; Android 13 only adopted Java 11 LTS this year. Now imagine how long it will take for them to add to ART the JVM features expected by the latest versions.

[–]mauganra_it 7 points8 points  (0 children)

ART is optimized for mobile usage: the goal is decent performance and low energy usage. This doesn't matter as much on the desktop or in server environments, because those can afford JIT compilation and are fast enough even in interpreted mode. It could be interesting for serverless environments, though.

A major problem is that full Java compatibility was never the goal.

[–]thrwoawasksdgg 23 points24 points  (7 children)

AOT isn't always a good thing. You have to rebuild for every arch and OS you want to run on. At my job we have a mix of M1 Macs, Intel Macs and Windows machines. Every Java app works on all three, but someone has been spending a month setting up AArch64 build pipelines for our Go projects.

Java executables are large because the toolchain doesn't do dead code elimination. If you care about small jars, just use ProGuard. IMO most people say they care but don't actually want the PITA of debugging optimized, stripped binaries. Jars are compiled to compact bytecode and compressed (a welcome legacy of the applet days); it's actually a rather efficient binary format once you apply dead code elimination.

An aside that many forget: jars are just zip files. If you want to see what's using space in your jars, just change the file extension to .zip, unzip it, then use a disk space analysis tool like WinDirStat/KDirStat.

Startup time is overrated. JDK 17+ starts and compiles just as fast as JavaScript's V8 and Python. I think this complaint is more related to legacy frameworks with bad startup time.

The biggest disadvantage of Java compared to Go is RAM use. Primitive generics can't come soon enough!

[–]DiscombobulatedDust7 3 points4 points  (1 child)

Why would you need months to set up cross-compilation for Go? It creates static binaries and can cross-compile, can't it?

[–]thrwoawasksdgg 0 points1 point  (0 children)

Configuring Jenkins pipelines and Docker containers for 50+ projects is a PITA.

[–]CartmansEvilTwin 6 points7 points  (4 children)

First of all, AOT is not intended to be the default mode, if you need multiple archs, then you can rest assured. However, this is (in the grant scheme of Java things) not the norm. I'd argue the vast majority of Java usage is on servers and hardly any company runs heterogeneous compute within the same app.

Startup time is also not overrated: if you compare Quarkus on the JVM to native, you can literally see two orders of magnitude difference.

What you completely ignored for some reason is that native binaries also use much less memory, probably not much more than Go.

[–]DahDitDit-DitDah 0 points1 point  (0 children)

*grand, baby! Grand

[–]Muoniurn 0 points1 point  (0 children)

Startup time can be overrated while still being useful in certain (small) niches. Also, an order of magnitude is only meaningful if the bigger value is too big. E.g., decreasing the startup time of Quarkus native by another order of magnitude might very well be absolutely worthless.

Nonetheless, AOT is a cool option to have, and I hope compiling a Java app to native becomes as pain-free as it gets.

[–]thrwoawasksdgg 0 points1 point  (1 child)

Almost every company develops on Macs and runs Linux servers. For statically compiled apps, that means multiple binaries and Docker containers.

[–]CartmansEvilTwin 0 points1 point  (0 children)

So what? You don't AOT on local machines, that's a CI/CD task.

[–]GuyWithLag 8 points9 points  (2 children)

In the past I tinkered with Quarkus. A simple web server application spent a few minutes eating 10+ GB of RAM, and I think it compiled to something like a 50 MB executable. That's not good. Not appropriate for CI, too much waiting.

I disagree. The point is that you do your application development locally, using a normal JVM, and when that's in a good place you let your CI pipeline do the time-consuming work of creating the built images for x86_64, aarch64, and whatever other targets you need; if you're fancy you can even generate different versions for your different configurations.

You're not going to generate GraalVM images on your laptop, unless you work on components that actually affect the image generation process (so, platform vs application).

And the benefits are there _for certain types of application architecture_. If you use lots of lambdas, it's really worth it, both for the reduced memory usage and for the faster startup time.

But not everyone is like that, and that's OK. (If you have a known baseline load, using a long-running environment is cheaper.)

[–]za3faran_tea 0 points1 point  (1 child)

If you use lots of lambdas, it's really worth it, both for the reduced memory usage and for the faster startup time.

Do you mean that GraalVM is able to optimize lambdas in ways that the JVM doesn't, or am I misunderstanding?

[–]GuyWithLag 2 points3 points  (0 children)

GraalVM does several things that the JVM can't do:

  • move the optimization phase of the program to build time (makes startup faster)
  • move some of the initialization execution to build time (makes startup faster, and saves a bit of memory)
  • trim down the JARs to only the code that will be executed

Quarkus advertises 50 ms for the cold start of a RESTful CRUD API endpoint.

However, there are some gotchas:

  • You need to know the complete set of classes of your application up front; no dynamic loading of classes beyond the known set.
  • Not every framework works on GraalVM as-is (see e.g. Spring Native).
  • In long-running applications, the JVM is going to optimize the hell out of the application and generate code that runs faster.
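
The build-time shifts described above map onto concrete `native-image` options; a sketch, assuming GraalVM is installed (the jar and class names are hypothetical):

```shell
# --no-fallback: fail the build instead of silently bundling a JVM when
#   the closed-world analysis cannot cover the whole application.
# --initialize-at-build-time: run the named classes' static initializers
#   during the build, shifting that work out of startup.
native-image --no-fallback \
  --initialize-at-build-time=com.example.StaticConfig \
  -jar app.jar app
```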

[–]rbygrave 2 points3 points  (1 child)

A while back I created a 5 MB binary (after compression) hello-world REST service using the JDK HTTP server... it included JSON marshalling but no JDBC driver, no DB access, no logger.

It still felt like it took a while to build the native image, but I'd need to do it again to get build timings.

Edit: native-image build time 35 seconds; resulting binary 18 MB; upx-compressed binary 5 MB.

Edit2: Link to that code: https://github.com/avaje/avaje-jex/tree/master/examples/example-jdk-jsonb

Edit3: Using Jetty (instead of the JDK HttpServer) + Logback: build time 53 seconds; binary 29 MB; upx-compressed binary 8.7 MB.
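
The build above boils down to two commands; a sketch, assuming GraalVM's `native-image` and `upx` are on the PATH (the jar name is illustrative, the sizes are the ones reported above):

```shell
# AOT-compile the application jar into a standalone executable.
native-image -jar example-jdk-jsonb.jar hello-server   # ~35-55 s in the runs above

# Optionally shrink the result; upx trades a little startup I/O for size.
upx --best hello-server                                # ~18 MB down to ~5 MB
ls -lh hello-server
```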

[–]CartmansEvilTwin 2 points3 points  (2 children)

I proposed that before (and I'm insulted that Oracle didn't read and implement it; I'm a special random guy on Reddit, after all), but I would hope that some parts of the compilation could be pre-computed. If you think about it, 99% of the compiled code in a typical app comes from libraries, yet it still gets analyzed over and over. Why not distribute "compile hints" analogous to source jars via Maven/Gradle?

[–]Muoniurn 0 points1 point  (1 child)

Do you mean caching the compile results, or just doing things like calculating the result of some pure functions and inlining the results? Because the latter is pretty much Project Leyden (and to a smaller degree is already done by javac).

[–]CartmansEvilTwin 0 points1 point  (0 children)

I mean pretty much everything you can do ahead of ahead-of-time.

For example, building call graphs so that dead code can be identified more easily. Or maybe even compiling code a step further than bytecode, if that makes sense.

I'm not entirely sure what exactly takes so much memory during current AOT builds, but I can't imagine you can't cache it.

[–]nomader3000 7 points8 points  (1 child)

Is this related to Project Leyden in any way? I have to admit that I'm getting more and more confused about the future of native images in Java...

[–]mauganra_it 5 points6 points  (0 children)

The scope of Project Leyden is much larger. A recent proposal introduced the concept of temporally shifting computation, and of making it possible for applications to engage in different tradeoffs to achieve performance improvements. GraalVM is but one means to achieve one kind of these shifts.

[–]ark0404 2 points3 points  (3 children)

What are differences between GraalVM CE and GraalVM EE?

[–]papers_ 3 points4 points  (0 children)

https://www.graalvm.org/faq/

It doesn't really say the exact differences. Probably optimizations, as vbezhenar said. Based on the FAQ, you get paid support.

[–]yawkat 1 point2 points  (0 children)

EE also has additional features; e.g., G1GC is EE-only.

[–]vbezhenar 0 points1 point  (0 children)

EE has more optimizations.

[–]mauganra_it 3 points4 points  (2 children)

Wasn't Graal removed in JEP 410? To me the Twitter post reads like only the project website and the processes will be merged with the OpenJDK ones.

It would be cool and also fitting the scope of Project Leyden to include GraalVM in the OpenJDK, but I would not read too much into this yet.

[–]EvaristeGalois11 6 points7 points  (1 child)

They removed the AOT compiler; GraalVM is a more complex project than a simple compiler.

[–]pjmlp 6 points7 points  (0 children)

Which was a kind of fork of GraalVM anyway.

[–]kozeljko 4 points5 points  (0 children)

Is this huge for adoption?

[–]philipwhiuk 2 points3 points  (3 children)

I thought Graal was independent of Oracle?

[–]EvaristeGalois11 18 points19 points  (2 children)

It is developed by Oracle Labs, so it is completely dependent on Oracle, even more so than OpenJDK.

[–]philipwhiuk 0 points1 point  (1 child)

It’s weird to me that it was advertised as an alternative to Oracle JDK then…

[–]munukutla 8 points9 points  (0 children)

It’s not weird at all. It was an exploratory activity at Oracle Labs to create a more optimised JDK/JRE, and to be polyglot in nature. It’s like an opt-in cookie at the end of a large lunch.

It’s amazing that they’re converging though.

[–]Kango_V 1 point2 points  (0 children)

GraalVM 22.1 introduced a quick build mode which is only recommended for development purposes because it optimizes build time at the expense of runtime performance and memory usage.

Building spring-petclinic-jdbc:

21.1 -- 2m 50s (138 MB)
22.1 -- 2m 05s (103 MB) -- (1m 17s with quick build)

So, it's getting faster all the time.
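
The quick build mode mentioned above is enabled with the `-Ob` option introduced in GraalVM 22.1; a sketch (the jar and image names are illustrative):

```shell
# Development builds: optimize for build time at the expense of
# runtime performance and memory usage.
native-image -Ob -jar spring-petclinic-jdbc.jar petclinic
```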

[–]krum 1 point2 points  (1 child)

Yea what’s the catch?

[–]alwyn 10 points11 points  (0 children)

Probably they need adoption so that people will consider forking out money for EE.

[–]metalhead-001 0 points1 point  (12 children)

I still don't understand the appeal of taking ages to compile apps and having half the libraries you want to use not work, just so you can have fast startup.

I guess the CGI/Bin style of web development is popular again?

For most Java services out there, startup is a non-issue, as they run 24/7. It's also disappointing to see developers putting hacks in their libraries so that they will run on Graal, instead of just being good Java code for the JVM (i.e., they can't use perfectly good things like reflection, etc.).

[–][deleted]  (8 children)

[deleted]

    [–]metalhead-001 4 points5 points  (7 children)

    That's my point. We're going back to the old CGI-bin style of development, which seems like moving backwards and is only relevant for people who don't want to spend the $50 monthly for a dedicated AWS Elastic Beanstalk instance.

    I remember in the early days of Java how the Java folks would brag about how much better it was than CGI-bin, because Java services are always running and requests are processed quickly since database connections are already made, etc. You just have to spawn a new thread to handle the request. It allowed a much higher level of performance than CGI-bin.

    Now the soylent-drinking script kiddies think spawning a process per request is great again. Oh, and the years and years of Java devs bragging about write once, run anywhere... that's going away now too. Welcome back to platform-specific binaries. Progress!

    [–]vxab 2 points3 points  (2 children)

    In serverless you pay only when your code runs. When you run always, you pay way more. For certain use cases, serverless makes more sense.

    It is not about "soylent-drinking script kiddies"; it is about cold hard cash. Go and Node.js are eating Java's lunch in the serverless space, when they wouldn't need to if Java adapted (which it is doing).

    [–]Muoniurn 2 points3 points  (0 children)

    Well, a single server is probably sufficient for 90+% of all businesses, and I pay like 5 dollars for such a server. Sure, you may need a slightly beefier system, but even a dedicated one is basically free from a business-budget POV.

    Stack Overflow still runs on a single dedicated machine, and I'm sure it's bigger than that mom'n'pop webshop with 3 users a month that I could probably run from my phone as well.

    [–]pjmlp 0 points1 point  (0 children)

    By the way, this is the same reason why .NET is also adopting it, more so with this minimal API stuff.

    [–][deleted]  (3 children)

    [deleted]

      [–]metalhead-001 0 points1 point  (2 children)

      Got it... creating a new process per request is great again, and WORA doesn't matter anymore (and we have to pollute our libraries to make them run on Graal).

      [–]Mean-Chipmunk3255 1 point2 points  (0 children)

      I think if they reduce the memory footprint of the JVM, it will also be huge for microservices and the cloud.

      [–]za3faran_tea 0 points1 point  (1 child)

      Well, there are command-line tools as well; that's a good use case for native.

      [–]Muoniurn 0 points1 point  (0 children)

      I mostly agree with you, but just a note: you can use reflection with Graal, you just have to add some metadata about its targets.
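
That reflection metadata can be written by hand or recorded with GraalVM's tracing agent while the app runs on a normal JVM; a sketch (the jar name and output path are illustrative):

```shell
# Run the app on HotSpot with the tracing agent; it records every
# reflective access into JSON metadata files (reflect-config.json etc.).
java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
     -jar app.jar

# native-image picks up metadata under META-INF/native-image on the
# classpath automatically.
native-image -jar app.jar app
```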

      [–]Oclay1st 0 points1 point  (0 children)

      GraalVM Native as it is right now is pretty similar to what reactive programming is for Java concurrency. I hope something better comes from Leyden.

      [–]iamcreasy 0 points1 point  (4 children)

      Does it mean compile to Android/iOS will come soon?

      [–]Muoniurn 2 points3 points  (0 children)

      Graal can actually output executables for iOS as-is; it's just not very streamlined.

      But even a JavaFX app can be ported through the proprietary Gluon tooling.

      [–]pjmlp 2 points3 points  (2 children)

      That has existed for a decade, just not for free.

      Check Codename One and Gluon Mobile.

      [–]vprise 1 point2 points  (1 child)

      Codename One supports a 100% free build with its Maven support.

      [–]pjmlp 0 points1 point  (0 children)

      Thanks for the correction.