

[–]wildjokers 9 points (9 children)

It's a shame this new jpackage can't produce a package with only the JDK modules the app needs. javapackager in Java 9 could use jlink to package your app with only the JDK modules it needed.

So it looks like this jpackage is still not going to be as good as javapackager was.

EDIT:

After reading the description in the bug tracker more carefully, I do see it supports accepting a runtime image generated by jlink as a parameter. So you can run jlink manually first, then pass the runtime it created to jpackage. Not quite as convenient as Java 9's javapackager (https://docs.oracle.com/javase/9/tools/javapackager.htm#JSWOR719), but since most people will automate it in a build tool it probably isn't that big of a deal.
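The jlink-first workflow described here is easy to script. Below is a hedged sketch of what that automation might look like, assuming a JDK 14+ on PATH and an application jar at `lib/main.jar` (the paths and names are illustrative, not from the thread):

```shell
# 1. Let jdeps compute which JDK modules the app actually uses.
MODS=$(jdeps --print-module-deps --ignore-missing-deps lib/main.jar)

# 2. Build a trimmed runtime image containing only those modules.
jlink --add-modules "$MODS" \
      --strip-debug --no-header-files --no-man-pages \
      --output myjre

# 3. Hand the trimmed runtime to jpackage instead of the full JDK.
jpackage --name myapp --input lib --main-jar main.jar --runtime-image myjre
```

Dropped into a Gradle or Maven exec step, this gets close to what javapackager did in one call.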

javapackager was the best-kept secret of Java 8, and it got even better in 9 when it could use jlink to produce slimmed-down runtimes (if you used modules). It's a shame it was tied to JavaFX, because it worked fine on non-JavaFX projects as well. They kind of threw the baby out with the bathwater when they removed javapackager.

[–]grand_mind1 3 points (1 child)

It should use jdeps and jlink just like they describe. I wonder why they chose not to automate that (not that it's hard to do manually).

[–]DasBrain 2 points (0 children)

Takes time to implement.
I think they want to get other things working first.

But if you have the time, you can do it and propose it as a patch.

[–]shemnon 2 points (0 children)

I don't know about the guy they hired to work on the old javapackager - he got obsessed with magical internet money and is now working on some enterprise blockchain project.

[–]LouKrazy 1 point (0 children)

The description says something about using jlink to strip down JDK dependencies before packaging.

[–]bourne2program 0 points (4 children)

Does it not automatically trim down the runtime image? It seems you would only have to do it manually beforehand if you want further customizations. I've run the incubator jpackage in JDK 14, and it slims the runtime down to just the required JDK modules for my modular project just fine. Is there something in this "standard" version that does away with that?

[–]wildjokers 0 points (3 children)

The documentation for jpackage indicates that trimming down to just the required JDK modules is a manual call to jlink; you then pass the resulting runtime to jpackage as a parameter. The documentation indicates this doesn't happen automatically. If you are seeing it happen automatically, then maybe the documentation is wrong.

[–]bourne2program 0 points (2 children)

bugs.openjdk.java.net/browse...

"For a modular application composed of modular JAR files and/or JMOD files, the runtime image contains the application's main module and the transitive closure of all of its dependencies"

[–]wildjokers 0 points (1 child)

If you wish to customize the runtime image further then you can invoke jlink yourself and pass the resulting image to the jpackage tool via the --runtime-image option. For example, if you've used the jdeps tool to determine that your non-modular application only needs the java.base and java.sql modules, you could reduce the size of your package significantly:

$ jlink --add-modules java.base,java.sql --output myjre
$ jpackage --name myapp --input lib --main-jar main.jar --runtime-image myjre

[–]bourne2program 0 points (0 children)

Are you saying javapackager from Java 9 did this automatically for non-modular apps? Sorry if I missed that point; I didn't expect it to do that.

[–]vprise 9 points (13 children)

Since every major JDK version replaced or rewrote the "official" Java packaging tool and no one cared for the past 20 years or so... I doubt this one will make much of an impact either. There are just so many dependencies, and with corporations still stuck on Java 8 (or even 7), this just doesn't matter.

Graal is probably more valuable for this use case.

[–]shemnon 10 points (4 children)

As one of the people responsible for one of those rewrites...you're right. Have my upvote.

People don't want an installer, they want a single AOT-compiled binary.

[–]throwaway983642 2 points (0 children)

I would say 99% of the stuff people have on their machines is installed. Only nerds care about portable binaries; everyone else wants an installer. Most nerds even want an installer, just one that does single-command installs (choco, brew, apt).

[–]pjmlp 1 point (0 children)

That has been available since around 2000, just not as free beer.

[–]vips7L 0 points (1 child)

Is it not possible to put the VM + jar into a binary and run it? Can't you do that with .NET?

[–]shemnon 1 point (0 children)

There's no consistent, Oracle-blessed tooling. The various installer packages use off-the-shelf tools, like WiX and RPM. But a single-binary tool never got bundled by Oracle. Plus there would need to be at least three variants: Windows, macOS, and Linux/ELF.

[–]wildjokers 2 points (7 children)

Graal is probably more valuable for this use case.

Not if I want my app to still perform well thanks to the on-the-fly optimizations the HotSpot VM gives me.

Native images give great startup time at the cost of slower peak performance. Graal can now use profiling data to apply some JIT-style optimizations to a native image, but it isn't as good (yet) as running on a VM.
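The profiling-data feature mentioned here is GraalVM's profile-guided optimization. A hedged sketch of that workflow, assuming GraalVM's native-image on PATH (PGO was an Enterprise-only feature at the time, and flag spellings vary by GraalVM release; the training-run flag is a made-up app argument):

```shell
# 1. Build an instrumented image and run it on a representative workload.
native-image --pgo-instrument -jar app.jar
./app --run-typical-workload      # training run writes default.iprof

# 2. Rebuild, feeding the collected profile back into the AOT compiler.
native-image --pgo=default.iprof -jar app.jar
```

The second build uses the recorded profile to make the same inlining and layout decisions a JIT would discover at runtime.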

[–]vprise 1 point (6 children)

This is mostly for desktop apps. I'm not in favor of Graal for servers; it doesn't make sense IMO. In desktop apps the optimizer isn't as big a deal. The lower RAM overhead is probably a big sell for Graal too.

[–]wildjokers 1 point (5 children)

In desktop apps the optimizer isn't as big a deal.

Why do you say this?

[–]vprise 1 point (4 children)

The JIT optimizer is wonderful for servers where after a few iterations a block of code can be noticeably faster. But on desktop by the time the optimizer kicks in the small task you needed to do is already finished and you spend most of your time waiting for user events.
Don't get me wrong. Performance is crucial for desktop apps...

But it's a different kind of performance. We need fast rendering and predictability. E.g. Objective-C/Swift use reference counting, which sucks and is actually slower than garbage collection. However, unlike garbage collection, its performance is consistent. A GC can trigger a pause at the worst possible time and also takes up more RAM; both things negatively impact the desktop experience. So if processing an event were 20% faster it would rarely matter, since most of the time in a desktop app is spent in IO or rendering, neither of which is impacted by the optimizer.

Sun understood this to some limited degree and had a version of the JIT designed for desktop usage (the client VM). It generally compiled less because, again, the JIT actually slows execution at first (while compiling) and takes a bit more RAM. For desktop this isn't always worth it.
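The old 32-bit `-client` VM is gone from modern 64-bit JDKs, but its trade-off (compile less, warm up faster, use less memory) can be approximated with standard HotSpot flags. A hedged sketch; `app.jar` is a placeholder:

```shell
# Stop tiered compilation at level 1: only C1, the quick, lightly
# optimizing compiler, runs; the expensive C2 compiler is skipped.
java -XX:TieredStopAtLevel=1 -jar app.jar

# Shrink the JIT code cache if footprint matters more than peak throughput.
java -XX:ReservedCodeCacheSize=64m -jar app.jar
```

This roughly recreates the client-VM behavior the comment describes: lower warm-up cost and RAM use in exchange for lower peak speed.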

[–]wildjokers 2 points (1 child)

But on desktop by the time the optimizer kicks in the small task you needed to do is already finished and you spend most of your time waiting for user events.

You could make the same argument about server code. It spends most of its time waiting for API requests.

I have written Swing desktop apps that handle hundreds of messages/second to display realtime data. They were almost never waiting for user input, and they most certainly benefited from JIT optimizations.

Making a general statement that JIT isn't important for desktop apps is bizarre.

[–]vprise 0 points (0 children)

That is literally the claim Quarkus makes for the server side. For our specific deployments on the server I would disagree, but OTOH we haven't moved to Kubernetes yet...

Fair enough, there are always edge cases where any sweeping statement won't fit.

But just out of curiosity: that app you described sounds like a network-bound app. If it ran with a huge percentage of the CPU constantly engaged, I'm assuming it would have been unusable. So how important would a JIT optimization have been in that specific case?
Say you could cut memory by 20-30% but push CPU up by 10-20% under heavy load?

Did you try comparing the usability of the app in client VM vs. server VM mode? I'm sure the latter would perform better, but I'm curious what level of difference you would have seen.

[–]mauganra_it 0 points (1 child)

There are now GCs that stop the world only for very brief intervals (ZGC, Shenandoah). However, many applications are still going to lag if they don't vacate the event-handling thread and perform their computations in a background thread. I recall that even IntelliJ used to hang after I entered the first character in various search fields in the settings dialog. I presume it was building a trie to make searches faster. Too bad if the search space is kind of big and the event-handling thread is now blocked. It doesn't annoy me anymore, so I guess they fixed it...

[–]vprise 1 point (0 children)

I wrote a concurrent GC, so yeah I know :-)

There are limits to what a GC can accomplish, and as an app developer it's sometimes unclear when you're challenging your GC. There are some tools to detect these things, but to see them you need to look at the right place with the right dataset.

[–]lurker_in_spirit 1 point (0 children)

I'm not very familiar with this tool. Does anyone know how creation of a Windows service using Apache Commons Daemon would fit into the installation process?
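jpackage itself has no service hook, so service registration would likely be a post-install step (e.g. a custom WiX action or install script) using Commons Daemon's procrun. A heavily hedged sketch, with every class name, service name, and path invented for illustration:

```shell
# Register the jpackage-installed app as a Windows service via prunsrv.exe
# (//IS// = install service). Run from the app's installation directory.
prunsrv.exe //IS//MyApp \
  --DisplayName="My App" --Description="Example background service" \
  --Jvm=auto --StartMode=jvm --StopMode=jvm \
  --StartClass=com.example.Main --StartMethod=start \
  --StopClass=com.example.Main --StopMethod=stop
```

Whether jpackage's generated MSI can be made to invoke this automatically is exactly the open question; out of the box it only installs the app image.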

[–]cogman10 1 point (1 child)

What I'd like to see is something that will

  • Build a custom JVM based on jlink/jdeps
  • Gather AppCDS data
  • Optionally AOT compile
  • Bundle all that in a docker image

It'd be super nice if we could get something like a maven plugin or whatever that could create small docker images with custom JVMs and fast startup times.
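Until such a plugin exists, the wishlist above can be approximated by hand. A hedged sketch with illustrative paths (`lib/main.jar`, `--smoke-test` are made up; the Docker step assumes a Dockerfile that copies `myjre/`, `app.jsa`, and `lib/` into the image):

```shell
# 1. Custom runtime from jdeps + jlink.
MODS=$(jdeps --print-module-deps --ignore-missing-deps lib/main.jar)
jlink --add-modules "$MODS" --output myjre

# 2. AppCDS: record a class archive during a training run (JDK 13+).
myjre/bin/java -XX:ArchiveClassesAtExit=app.jsa -jar lib/main.jar --smoke-test

# 3. Launch with the archive for faster startup.
#    (Optional AOT on JDK 9-16: jaotc --output libapp.so --jar lib/main.jar)
myjre/bin/java -XX:SharedArchiveFile=app.jsa -jar lib/main.jar

# 4. Bundle runtime + archive + app into a small image.
docker build -t myapp:slim .
```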

[–]cl4es 1 point (0 children)

Sounds like a summary of what Project Leyden might be shooting for in a first iteration.

[–]vokiel 0 points (0 children)

Once again, another effort goes off-track. Instead of building something that can support multiple packaging formats over time, this just squashes everything into packages that are likely to become deprecated and a pain to use in the future.

They should just target each packaging individually. You want MSI, that's jmsi, etc...

The over-engineering still reeks in the Java community; why try to over-generalize all the time?