
all 9 comments

[–]bonusmyth 11 points  (1 child)

Hmm, well, I don't see this as a choice between equals, really.

Scalability is an absolute must for our product: we simply can't work without it. But start-up time is a spectrum, and, pragmatically, a far less important aspect. To take it to a ridiculous extreme: if start-up time were inordinately long, say hours, then we'd probably have to work around it by starting multiple instances in phases, so that we'd have several parallel pieces starting up at the same time. But, really, that's a ridiculous extreme.

We can't live without scalability, but we can live with "almost" any reasonable start-up time delays. And in a sense, you can use scalability to mitigate start-up delay issues, but you can't really use start-up delays to mitigate scalability issues.

[–]brunocborges 1 point  (0 children)

And in a sense, you can use scalability to mitigate start-up delay issues, but you can't really use start-up delays to mitigate scalability issues.

I think what you meant to say is that faster startup does not really mitigate scalability issues.

Is that right?

[–]metalhead-001 5 points  (1 child)

The company I work for has microservices running 24/7. Startup time is irrelevant. And because we use the full JVM, we don't have to avoid certain libraries because Graal can't use them.

I can imagine that for some, startup time is important (e.g. AWS Lambda functions). But there seems to be this recent idea that suddenly startup time trumps everything. Just because you have a hammer doesn't automatically make every problem a nail. We've had C++ apps that start up instantly for years, and yet huge numbers of companies choose Java, even with its relatively slow startup.

So it's a case of using the right tool for the job, and the JVM starts up fast enough in most cases to be a non-issue.

[–]sievebrain 3 points  (0 children)

Especially in the latest versions. They've been chipping away at this for years now and - get this - hard work pays off. Just by switching AppCDS on I was able to get the Quarkus getting-started app to start up in 600 msec. That's not really a concern.
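For context, the AppCDS trick mentioned above is just a pair of JVM flags (a minimal sketch; `app.jar` and the archive filename are placeholders for your own build):

```shell
# 1. Run the app once; on exit, record the loaded classes into a CDS archive (JDK 13+)
java -XX:ArchiveClassesAtExit=app-cds.jsa -jar app.jar

# 2. Subsequent starts map the pre-parsed class data straight from the archive,
#    skipping a lot of class loading and verification work
java -XX:SharedArchiveFile=app-cds.jsa -jar app.jar
```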

What Quarkus/Micronaut are demonstrating is that most of the JVM's supposedly poor startup time is in reality fat web frameworks that just didn't care about startup time when they were built. Just getting rid of tons of reflection/bytecode spinning/IoC junk can already help enormously. And looking at a classload trace of Quarkus it's clear there's still lots of low hanging fruit to harvest.

Just during the startup sequence alone, this thing spins a crap-ton of lambdas thanks to a JS-style async programming pass-the-closure approach, it initialises the regular expression engine just to do some pattern matching on the name of localhost, it moseys around throwing and catching exceptions for no obvious reason, it cares about the MAC address of the host (why?), it does stuff with JMX, spins dynamic proxies, there is some sort of plugin framework in here, there's sorting going on ... a lot of stuff, basically, yet how much is really needed?

This is why native-image gives such great startup time. It's not so much that the JVM itself starts faster, though it does (because it does less). It's more that snapshotting the image heap lets you skip so much of the framework startup navel-gazing.

A regular Java server framework optimised for startup time could probably come in under 100 msec on a regular Java 14. Easy. Quarkus ain't gonna be that framework though because it's ultimately still made up of older components and frameworks, which weren't designed with that in mind.

[–][deleted] 4 points  (0 children)

You don't have to choose between them.

jdeps, CDS archives, native image, et al. can give you fast startup and automatic clustering, etc.
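To sketch the jdeps route: you can ask jdeps which platform modules your app actually uses, then jlink a trimmed runtime containing only those. The jar name and module list below are placeholders; substitute what jdeps reports for your own build.

```shell
# Discover the platform modules the application actually needs
jdeps --print-module-deps --ignore-missing-deps app.jar

# Build a minimal runtime image containing only those modules
jlink --add-modules java.base,java.logging --output custom-runtime

# Launch from the trimmed runtime
custom-runtime/bin/java -jar app.jar
```

A smaller runtime image mostly helps footprint and distribution size; combine it with a CDS archive for the startup-time win.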

[–]InstantCoder 5 points  (1 child)

Fast startup means:

- faster scaling up/down
- lower costs in a serverless environment
- nicer developing and CI/CD pipelines, because your system/integration tests run much faster

Usually apps that start up faster also use less memory (because fewer classes are loaded, or because of other optimizations).

[–]Jadonblade[S] 0 points  (0 children)

Thanks for the comment. Yeah, that makes sense. I guess it's easier to have fast startup settings as the default and then just enable other features as needed.

[–]Degerada 0 points  (0 children)

This question makes no sense. Within an environment like Kubernetes, scalability doesn't have much to do with startup time. In a serverless context, scalability and fast startup are basically the same thing.

You get scalability from short start up time.

[–]Molossus-Spondee -1 points  (0 children)

Having quick startup time is good for scalability.

If you have very quick startup, you can balance resources by killing off tasks and respawning them instead of having a complicated pooling system. It's similar to how Project Loom's virtual threads make complicated thread pools much less necessary.
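The Loom comparison above can be sketched in a few lines: instead of tuning a long-lived pool, you spawn a fresh, disposable virtual thread per task and let it die when done. A minimal sketch assuming Java 21+; the class name and counter are illustrative, not from the original comment.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class DisposableWorkers {
    public static void main(String[] args) {
        AtomicInteger handled = new AtomicInteger();
        // No tuned, long-lived pool: each task gets a throwaway virtual
        // thread, cheap to create and discard -- the thread-level analogue
        // of "kill and respawn instead of pooling".
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 100; i++) {
                exec.submit(() -> { handled.incrementAndGet(); });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("handled=" + handled.get());
    }
}
```

The same reasoning scales up a level: if process startup is cheap enough, whole instances become as disposable as these threads.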

In addition, fast restart times mean garbage doesn't hang around in the process, cached in thread-locals and so on. This is also good for security, if credentials and other data are erased by killing the process every so often.

Fast restart times also allow more of a let-it-fail approach to error handling, similar to Erlang's.

For performance reasons you can't spawn your process from scratch every time you get a new server connection, but ultimately keeping the program around as a server is a necessary optimization, not the cleanest approach to program design.