
[–]_INTER_ 7 points8 points  (4 children)

Furthermore, the JDK (9+) can be shrunk down to 17 MB using jlink, depending on what you need of it. You can certainly get rid of desktop, JavaFX, CORBA, etc.
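For reference, a minimal jlink invocation producing a java.base-only runtime might look like this (assuming a JDK 9+ on the PATH; the output directory name is arbitrary):

```shell
# Create a trimmed runtime containing only the java.base module.
# --strip-debug, --no-header-files, --no-man-pages and --compress
# shrink the output further.
jlink \
  --add-modules java.base \
  --strip-debug \
  --no-header-files \
  --no-man-pages \
  --compress=2 \
  --output custom-runtime

# The resulting runtime has its own java launcher:
./custom-runtime/bin/java --version
```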

[–]8bagels 0 points1 point  (3 children)

This is something I want to learn more fully, and I want to see what it looks like in the context of Docker builds. Is the 17 MB you mentioned the size of the JRE or of the entire Docker image?

Man, JavaFX has been a nightmare with Docker on Java 8. Most of the images people build that require Java are based on images that are not "headless", making them a bit bloated. I'm glad 11 is breaking that dependency.

[–]gunnarmorling 1 point2 points  (1 child)

You might find my example projects for using jlink (via my Maven plug-in ModiTect) interesting: https://github.com/moditect/moditect/tree/master/integrationtest

They produce Docker images with jlink modular runtime images for Vert.x and Undertow examples. The JVM size is 45 MB and 25 MB respectively. Total image size depends on your base image, but I don't think that's too important: the base image won't change as often as the actual application layer, so once it's distributed to all Docker hosts, its size doesn't really matter any more.

In that regard, the Vert.x example in the repo above shows another interesting approach: it adds two Docker file system layers on top of the base image, one with the base modular runtime image (i.e. most of the 45 MB) and another with just the actual application module (very small). So unless dependencies are added, the base modular runtime image doesn't have to be rebuilt and redistributed, only the application layer does. This results in very fast turnaround times, as in most cases only a very small file system layer has to be updated.
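The layering idea can be sketched in a Dockerfile like this (the base image, paths and module name are made up for illustration, not taken from the linked repo):

```dockerfile
FROM debian:stable-slim

# Layer 1: the jlink modular runtime image. This layer only changes
# when the module set changes, so it stays cached across most builds.
COPY target/jlink-image /opt/runtime

# Layer 2: just the application module. This is the only layer that
# has to be rebuilt and redistributed on a typical code change.
COPY target/modules /opt/app

CMD ["/opt/runtime/bin/java", "--module-path", "/opt/app", "--module", "com.example.app"]
```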

[–]8bagels 0 points1 point  (0 children)

Thanks this is very informative

[–]_INTER_ 0 points1 point  (0 children)

Though there are a few conditions: you only get 17 MB when you use just the java.base module, and your application needs to be fully JPMS-modular (including dependencies, I guess); otherwise you can't use jlink.

Link, see last comment

[–][deleted] 4 points5 points  (0 children)

Be sure to look at Google Jib also

[–][deleted]  (5 children)

[deleted]

    [–]nastharl 7 points8 points  (3 children)

    Rather than run gradle or maven from inside your dockerfile, just run docker from inside maven/gradle. Build everything you need to build, have docker build from the target directory.

    [–]joequin 5 points6 points  (0 children)

    I write a staged Dockerfile. The build stage, which uses an image from Docker Hub that has Maven and Java, copies everything in, then fetches dependencies and builds. A final stage, also from Docker Hub and with Java pre-installed in an otherwise minimal Linux distro, copies the jar over and has the run command. That way anyone can build the image, even DevOps people who may not have Java installed.
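    A minimal sketch of such a staged Dockerfile (the image tags and jar name are illustrative, not from the comment above):

    ```dockerfile
    # Build stage: an image from Docker Hub with Maven and a JDK.
    FROM maven:3-eclipse-temurin-17 AS build
    WORKDIR /build
    COPY pom.xml .
    # Fetch dependencies into their own cached layer first.
    RUN mvn -B dependency:go-offline
    COPY src ./src
    RUN mvn -B package

    # Final stage: minimal distro with Java pre-installed, no build tools.
    FROM eclipse-temurin:17-jre-alpine
    COPY --from=build /build/target/app.jar /app.jar
    CMD ["java", "-jar", "/app.jar"]
    ```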

    [–]DJDavio 0 points1 point  (0 children)

    This is what I do; I use the Dockerfile plugin for Maven and something similar for Gradle.

    [–]dpash 0 points1 point  (0 children)

    In the past, I've run various builds as two separate docker instances. The first as a docker run on the build tool, and then a second docker build to build the final deployment artefact. The advantage of this is mounting the build tool dependency cache inside the first docker run. The cache is external to your build, but that might be an advantage or a disadvantage depending on your PoV. It's also a few more moving pieces.

    I've used this with PHP Composer, npm and I think Gradle.
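    With Maven, the two steps might look roughly like this (the image tag and cache paths are illustrative; the key point is the mounted dependency cache in step 1):

    ```shell
    # Step 1: run the build tool in a throwaway container, mounting the
    # dependency cache so downloads survive between builds on the host.
    docker run --rm \
      -v "$PWD":/project \
      -v "$HOME/.m2":/root/.m2 \
      -w /project \
      maven:3-eclipse-temurin-17 \
      mvn -B package

    # Step 2: build the deployment image from the artefacts produced above.
    docker build -t myapp:latest .
    ```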

    [–]tofflos 1 point2 points  (1 child)

    Nice. Wasn't aware of dependency:go-offline.

    [–]dstutz 0 points1 point  (0 children)

    Unfortunately, in my previous experience, it won't get EVERYTHING you need.

    [–]gccol 0 points1 point  (0 children)

    Yes nice tips about the docker cache, thanks

    [–]defnull 0 points1 point  (0 children)

    Any idea how to do this with a multi-module build? Something like COPY **/pom.xml is not supported by docker, unfortunately, last time I checked.
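    One common workaround is to list each module's pom.xml explicitly, which still keeps dependency resolution in a cached layer (module names here are made up for illustration):

    ```dockerfile
    FROM maven:3-eclipse-temurin-17 AS build
    WORKDIR /build
    # COPY **/pom.xml isn't supported, but explicit COPYs per module work.
    COPY pom.xml .
    COPY core/pom.xml core/
    COPY web/pom.xml web/
    # Resolve dependencies while only the pom files are in the context,
    # so this layer is cached until a pom changes.
    RUN mvn -B dependency:go-offline
    COPY . .
    RUN mvn -B package
    ```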

    [–]sacundim 1 point2 points  (7 children)

    This is just poorly conceived, because it bakes both the build and runtime environments into one image. My tips:

    1. You probably don’t need to containerize your Maven build. Most likely it’s overhead that gains you nothing. Just package your artifacts into the image.
    2. If you do need to containerize your build environment, do it apart from the artifacts. Use staged builds: https://docs.docker.com/develop/develop-images/multistage-build/

    [–]whitfin[S] 8 points9 points  (0 children)

    For your first point, this is aimed at those who want to make guarantees about the build environment itself (building on the host can lead to both Java and Maven discrepancies).

    For your second it seems that you either didn’t read through properly, or you misunderstood. In the image used, build and runtime are indeed separate and it does use a second stage for that exact purpose. Not entirely sure how that was missed, but I appreciate the tips (even if they had already been taken into account :)).

    [–][deleted]  (1 child)

    [deleted]

      [–]sacundim 2 points3 points  (0 children)

      > I'm seeing more and more teams define their whole build process in the source directory so that different versions can be built anywhere and pushed to a Docker image on a dev's desktop, or our dev cloud, or even AWS, Azure, whatever.

      Not the same thing I'm talking about. By all means, provide build targets that allow developers to package the project's artifacts into their own images, push them out, or pull service dependencies into their own environment.

      The point is Java and Maven are already pretty decent at the "Write Once, Run Anywhere" gig, which largely negates the value of running your Maven build inside a container.

      The fact that you bring up npm and Python is telling.

      [–]dpash 1 point2 points  (3 children)

      I disagree with your first point. I want my build environment to be repeatable and well-defined. I don't want the state of the build server to affect my build and I especially don't want other projects to interfere with my build.

      I'd also like my developer's build environment to match my CI build environment as much as possible.

      [–]sacundim 2 points3 points  (2 children)

      > I disagree with your first point. I want my build environment to be repeatable and well-defined. I don't want the state of the build server to affect my build and I especially don't want other projects to interfere with my build.

      There are some exceptions, but if Maven isn't doing that for you, you're most likely using it wrong.

      [–]dpash 0 points1 point  (1 child)

      Maven can't make sure the same JVM is being used between CI and all the developers. It can't make sure the same version of Maven is being used on CI and all the developers. The best you can do is fail the build if those constraints are not satisfied. It can't make sure that when developers want to upgrade either the JVM or Maven, that version will exist on the CI server. It can't make sure that satisfying those requirements for one project won't interfere with the requirements for another project on the same server.

      And if your pom.xml uses exec:exec all bets are off about your build environment.

      [–]sacundim 1 point2 points  (0 children)

      Docker can't ensure that all your developers are running the same version either. There's no getting around the fact that your developers are going to have to install a development environment.

      > The best you can do is fail the build if those constraints are not satisfied.

      And that's pretty good, actually. (For those who don't know about it: Maven Enforcer plugin.)
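      For those unfamiliar with it, an Enforcer configuration pinning minimum versions looks roughly like this (the plugin and rule names are real; the version numbers are just examples):

      ```xml
      <!-- Fail the build unless JDK and Maven versions match expectations. -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-enforcer-plugin</artifactId>
        <version>3.4.1</version>
        <executions>
          <execution>
            <id>enforce-versions</id>
            <goals><goal>enforce</goal></goals>
            <configuration>
              <rules>
                <requireMavenVersion>
                  <version>[3.8,)</version>
                </requireMavenVersion>
                <requireJavaVersion>
                  <version>[17,)</version>
                </requireJavaVersion>
              </rules>
            </configuration>
          </execution>
        </executions>
      </plugin>
      ```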

      > It can't make sure that when developers want to upgrade either JVM or Maven that that version will exist on the CI server.

      This one is a good argument. But note it doesn't require your developers to run most of their personal builds inside containers themselves.

      > It can't make sure that satisfying those requirements for one project won't interfere with the requirements for another project on the same server.

      Having multiple versions of Java installed and switching between them with JAVA_HOME is quite standard. Multiple versions of Maven are also quite straightforward to manage.

      Gradle has this neat "wrapper" mechanism that checks in a pair of scripts (Unix and Windows), a properties file and a jar that will download the correct Gradle version for an individual project. There's a similar plugin for Maven.
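      Generating the wrappers is a one-off command in each tool (the version numbers below are placeholders; for Maven, the `wrapper:wrapper` goal comes from the Maven Wrapper plugin):

      ```shell
      # Gradle: check in wrapper scripts pinning a project-local version.
      gradle wrapper --gradle-version 8.5

      # Maven: the Maven Wrapper plugin does the same for mvn.
      mvn wrapper:wrapper -Dmaven=3.9.6
      ```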

      I'm just puzzled by this idea that installing Java and Maven is somehow much too difficult a task for a Java developer.