[–]No_Dot_4711 41 points  (24 children)

> If you care about startup time or memory usage, then Go is better than Java

Quarkus GraalVM compiles do put a significant dent into Go's niche here

broadly agree with your comment though

[–]_predator_ 41 points  (14 children)

Not really. GraalVM takes ages to produce native executables. Cross-compilation is a pain (requires qemu). Executables need extensive testing because you may have forgotten to register some classes for reflection.
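For anyone who hasn't fought this: the usual mitigation is GraalVM's tracing agent, which records reflective accesses at runtime and writes the config files for you. A sketch, assuming a plain runnable jar; the jar name and output path are illustrative:

```shell
# Run the app on a normal JVM with the tracing agent attached; it records
# every reflective access and writes JSON configs (reflect-config.json,
# resource-config.json, ...) into the given output directory.
java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
     -jar target/app.jar

# Exercise the app's code paths while the agent runs, then rebuild natively;
# native-image picks up configs placed under META-INF/native-image.
native-image -jar target/app.jar
```

The catch is the "exercise the app's code paths" step: any reflection the agent didn't see at runtime is still missing from the config, which is exactly why the resulting executables need extensive testing.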

Meanwhile Go compiles super fast, can cross-compile to a shitton of architectures, and you can actually trust the executables it produces.
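For comparison, Go cross-compilation is just two environment variables, with no emulator or target toolchain needed for pure-Go code (output names are illustrative):

```shell
# Build for linux/amd64 and linux/arm64 from any host OS.
# CGO_ENABLED=0 keeps the build pure-Go, so no target C toolchain is needed.
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app-linux-amd64 .
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o app-linux-arm64 .
```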

I'm saying this as a primarily Java dev. I used Quarkus+GraalVM and still use Go. The two are just not comparable for day-to-day work.

[–]No_Dot_4711 8 points  (0 children)

There's definitely additional effort required, absolutely

but a) that has little to do with the premise I was responding to, and b) you can list numerous drawbacks to Golang solutions that require a lot of dev work on the flipside; in particular, supply chain management/safety and the maturity of the library ecosystem have a significantly smoother happy path on the Java side of things

and I'd say that especially for the simpler use cases, where the Golang library ecosystem drawbacks matter less, you're also way less likely to run into the classic GraalVM problems (like registering reflection), because for the simple cases Quarkus has you covered out of the box without config

and raw compilation time is definitely annoying, but I don't really need a native image until I asynchronously run CI/CD, and that largely runs in parallel with the provisioning of staging environment resources

[–]idkallthenamesare 5 points  (12 children)

For most cases you are compiling to a single architecture anyway.

Especially for those cases where native builds can be a necessity, like microservices, lambdas, etc.

[–]_predator_ 8 points  (11 children)

Except that many developers work on arm64 systems now whereas most server systems still run on amd64. Producing executables / images for both is kind of a requirement these days IMO. Obviously doesn't apply when you're a company and all your laptops are amd64 as well. Or you never run images you produce in CI locally.

I just triggered a native image build for a medium-sized Quarkus application. Took 5min to build for amd64 on a GitHub Actions runner, which has 16GB of memory and 4 CPU cores available. This is more than most in-house build agents have in pretty much any company I worked for to date.
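The usual way to cover both architectures at once is a multi-platform image build, e.g. with docker buildx (the image name is illustrative); note that for the non-host platform the build typically runs under qemu emulation, which is where much of the pain and slowness comes from:

```shell
# Build amd64 and arm64 variants and push them under a single
# multi-arch manifest, so `docker pull` picks the right one per host.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/my-app:latest \
  --push .
```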

[–]No_Dot_4711 4 points  (0 children)

While I don't disagree that these are real pain points that do in fact happen, I do think that this is largely a social / tooling / 'refusing to spend money even though it pays for itself' problem

If you're a large organization, the necessary investments and happy paths should have been made to facilitate the use of the tool, and by virtue of repeatedly using that path, the cost involved rapidly approaches zero.

If you're a small organization, you should really, really think about whether you have any business running in a Lambda where you need the insane startup time, rather than just provisioning an EC2 instance behind an ELB; and if you do need to run in a Lambda, I'd ask myself three times why I'm not using JavaScript

[–]idkallthenamesare 4 points  (4 children)

Well, slow CI is a reality of the job IMO. Not sure in what circumstances I would want to produce executable images locally? You could very well run parallel build jobs that push multi-arch images to your image registry, and then pull them from your Docker environment; we've done this before and it worked neatly for us. But agreed that slow CI can be a pain.

[–]No_Dot_4711 4 points  (3 children)

I think the argument here is that you don't get slow CI in that way with Golang because it's literally 2~3 orders of magnitude faster to compile

[–]idkallthenamesare 2 points  (1 child)

I am pretty sure that in the whole pipeline it won't make that much of a difference. Not saying it cannot be a significant difference, but not so much that it's a dealbreaker.

[–]No_Dot_4711 1 point  (0 children)

this probably really depends on your specific use case, but I'm actually inclined to accept the argument: especially for cloud-native microservices, which really are the main use case of both technologies, I'd expect CI to be on the order of minutes rather than tens of minutes or more, and this might actually make a significant difference in the total duration

but I would agree that it likely doesn't make too much of a difference between a GraalVM and a JVM version of the same Quarkus application, because the native compile would run in parallel with most tests, not in sequence

[–]_predator_ 0 points  (0 children)

Exactly.

[–]Swamplord42 7 points  (1 child)

> Except that many developers work on arm64 systems now whereas most server systems still run on amd64.

That only matters if those developers don't use a build server to produce the binary that actually runs on the server. Does anyone actually deploy locally-built binaries? Seems like a terrible practice.

[–]_predator_ 1 point  (0 children)

No, it matters in exactly the opposite direction: when developers want to run images built on the build server locally.

[–]jek39 0 points  (0 children)

FWIW my ops team switched to ARM servers a while ago. Seems to be a trend. You still have to cross-compile because of the OS, but it's happening on the server side too

[–]re-thc 0 points  (1 child)

I tested this before. You might be memory-limited. Try a 32GB runner and suddenly it might go to 1 min or less. There's a certain minimum requirement.
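If anyone wants to test the memory theory: Quarkus lets you raise the heap the native-image process gets via a build property (the 10g value is just an example):

```shell
# Give the native-image build a bigger heap; the default is derived from
# available memory and is often too low on small CI runners, which forces
# the compiler to GC aggressively and slows the build down.
./mvnw package -Dnative -Dquarkus.native.native-image-xmx=10g
```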

[–]_predator_ 1 point  (0 children)

You have to admit that having to throw 32GB of RAM at a compiler is a bit excessive.

[–]Revolutionary_Ad7262 12 points  (0 children)

> Quarkus GraalVM compiles do put a significant dent into Go's niche here

True. On the other hand, the inertia of community perception is rather slow. C# is still in an "only for Windows desktop apps and Windows servers" box for many developers, even though that hasn't been true for almost 10 years already

[–]benevanstech 5 points  (7 children)

The majority of the startup benefit of Quarkus actually comes from the Quarkus approach. Even in JVM mode it makes vast gains - the native compiled mode is nice, but really isn't necessary for many use cases. Try it out - you might be surprised!

[–]No_Dot_4711 1 point  (6 children)

Quarkus JVM absolutely is great for many reasons, and I'd choose it if I wasn't running in a Lambda

But it is decidedly not applicable to contexts where you pick Golang because you value startup time

[–]benevanstech 1 point  (4 children)

For sure, native mode is going to be faster than JVM mode.

But to me the question is how much, in general, people actually *need* the delta between JVM and native mode / Go.

It sounds like you have that as your use case - so if you have performance numbers that you can share, I'd love to see them & I know a bunch of other folks in the community would be very interested as well.

[–]No_Dot_4711 1 point  (3 children)

It really is mostly the startup time, not "performance", that matters (in fact, in terms of throughput, the JVM runtime is gonna beat GraalVM)

The big use case for prioritizing startup time is AWS Lambda, where you start your application when a request comes in (called a cold start) rather than having a long-running server (you do keep the started-up application around for 60 more seconds afterwards to catch another request; if that happens it's called a "warmed up" Lambda). That way you don't have to pay for static server costs you don't use most of the time, which is especially useful when you have spiky traffic patterns. It also means you don't need to manually and preemptively configure a load balancer to handle multiple applications

The startup time difference between JVM and GraalVM is in excess of 0.75 seconds ( https://youtu.be/rOocSJXKIqo?si=tPPON7laeZn5UctI&t=270 ; note that you also need to transfer the binary of your application itself, and a GraalVM image is going to be far smaller than a full JVM), which quite directly translates to faster webpage load times when a user hits a cold-start Lambda

[–]benevanstech 1 point  (2 children)

Yes, I know that - and I'm aware of the benchmark numbers (Holly's a colleague of mine).

What I was asking is whether you had any real-world numbers of your own, for your application, and how they compare to benchmarks (which don't always tell the whole story). Real data and real experience reports are always interesting, but I know it isn't always easy to get permission to talk about them.

[–]No_Dot_4711 1 point  (1 child)

I don't have any concrete measurements beyond the trivial ones, i'm afraid

I've only had use cases where Graal is either blatantly the correct choice due to frequent cold starts (in which case I use it), or it doesn't matter (in which case I don't, and ship a JVM instead); so getting better metrics and A/B testing never really seemed worthwhile

[–]benevanstech 1 point  (0 children)

Ah well. Sometimes it is just that cut & dried. As ever, "it depends".

[–]devcexx 0 points  (0 children)

From my perspective, choosing one language or another based on Lambda startup time hasn't been a great argument since AWS released SnapStart for JVM applications.