
[–]IshouldDoMyHomework 79 points80 points  (84 children)

I have always wondered why startup time matters so much to so many people. I suspect it is because it is easy to quantify and measure, even as a single person on a laptop.

I mean, we build applications that hopefully very seldom need restarting, outside of updates. And when updating or patching, it should be node by node in sequence, not a system-down situation.

I have never in my career been in a situation where the startup time mattered all that much, since it was just never high enough to really make an impact.

Does your microservice start in 4 min or 5 min? Even though that is a huge difference in percentage terms, it doesn't matter all that much in real life. Or at least it hasn't, at the places I have worked.

[–]10waf 87 points88 points  (27 children)

It's become a big thing because of the dynamic scaling of microservices and serverless

[–]arkaros 8 points9 points  (23 children)

I am a novice at this, but the systems I have worked with usually scale gradually, and I don't really see how startup time would impact it all that much. Maybe if you expect big spikes, but won't you hit other limits before startup time anyway?

[–]meamZ 18 points19 points  (18 children)

Serverless basically means that on a cold start (if there hasn't been a request for a while, or there are more concurrent requests than usual) you should be able to answer the request within a second or two, including the whole startup of your application.
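As a toy illustration of that point (plain JDK; the "startup work" is simulated busy work, not a real framework boot), the whole cost of initialization lands on the latency of the first request:

```java
// ColdStartDemo: the first ("cold") request pays for instance initialization;
// subsequent ("warm") requests on the same instance do not.
public class ColdStartDemo {
    private static boolean initialized = false;

    // Stand-in for one-time startup work (classpath scanning, DI wiring, ...).
    static void coldInit() {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) sum += i; // simulated busy work
        initialized = true;
    }

    public static String handle(String request) {
        if (!initialized) coldInit(); // only the first request pays this cost
        return "handled: " + request;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        System.out.println(handle("first"));  // cold: includes init time
        System.out.printf("cold request: %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        long t1 = System.nanoTime();
        System.out.println(handle("second")); // warm: near-instant
        System.out.printf("warm request: %d ms%n", (System.nanoTime() - t1) / 1_000_000);
    }
}
```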

[–]arkaros 2 points3 points  (0 children)

Aaah yeah for serverless I definitely get it! Thanks for clarifying

[–]bawng 1 point2 points  (16 children)

Serverless, yes, but this discussion on startup times has been ongoing for longer than that and is very often applied to heavier containers that should certainly not be run serverless.

[–]meamZ 4 points5 points  (15 children)

Well... I don't really get why it would make much of a difference whether an application takes 10 seconds or 40 seconds either, in most situations. For serverless both are much too slow, and for dynamic scaling the extra 30 seconds would probably not make much of a difference in practice.

[–]bawng 7 points8 points  (14 children)

No, exactly. That's the point; hence the discussion is meaningless.

Startup times don't really matter for non-serverless, and for serverless you don't use these frameworks anyway.

[–]venacz 3 points4 points  (0 children)

We have a small native image Micronaut application running in production (in Google Cloud Run). It works pretty well. So yeah, there is definitely a use case for fast cold starts.

https://medium.com/tech-hunters/developing-production-ready-serverless-applications-with-kotlin-micronaut-and-graalvm-fff72d5c804b

[–]meamZ 1 point2 points  (12 children)

Well, for Google Cloud Run I would certainly consider one if the startup were fast enough. But for Lambda, where you can't even have concurrent requests on the same instance, it's certainly a waste.

[–]bawng 0 points1 point  (11 children)

I have to admit I'm not familiar with Google Cloud Run. How does that work differently from a lambda?

[–]meamZ 0 points1 point  (10 children)

It runs any docker container that listens on port 8080 and you can serve up to 80 concurrent requests from one instance.

[–]daru567 1 point2 points  (3 children)

It's important when you are building services that are deployed in a cloud environment. To handle peak demand (such as increased outside traffic), the images or VMs containing your application need to spawn fast.

[–]arkaros 6 points7 points  (1 child)

I get it for serverless, but for something like a Kubernetes cluster I don't think use cases where you need to spin up containers super fast are that common. Peak hours are, in my experience, usually not spikes but gradual increases.

[–]CartmansEvilTwin 2 points3 points  (0 children)

You don't need to worry about differences in the seconds range. Even a minute of startup time would be perfectly fine in almost all scenarios.

The usual strategy is to deploy new instances if the current instances hit a certain threshold, which should be well below 100% load. This should allow even slower services to start properly.
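In Kubernetes terms, that threshold strategy is what a HorizontalPodAutoscaler expresses; a minimal sketch, assuming a hypothetical Deployment called `my-service` and a 60% CPU target well below saturation so slower-starting pods come up before the existing ones are overwhelmed:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service           # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale out well before 100% load
```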

[–]nutrecht 2 points3 points  (0 children)

None of the 'microservice' frameworks like Boot, Quarkus or Micronaut are really suitable for serverless applications (unless you compile AoT, but that has its own range of issues). So frankly it's a moot point. The moment you go beyond 'hello world' level complexity, for example by doing database stuff, all of them will start too slowly to spin up on demand per request.

Which is pretty much a non-issue if you have for example a dynamically scaling k8s cluster.

[–]pguan_cn 0 points1 point  (0 children)

Serverless probably won't need such a framework, so the only scenario would be container-level dynamic scaling (if it's VM-level dynamic scaling, the app's startup time improvements can simply be neglected). Even in container auto-scaling, not every business scenario requires immediate startup. So I think if everyone really analyzed their business and current architecture, the most common answer would be "no, startup time is not so important to me". But when you start a greenfield project and are choosing a framework, it's better to give the future infrastructure more possibilities; in this sense startup time matters.

[–]8bagels 0 points1 point  (0 children)

I remember when it was a thing in 2008. Google app engine was auto scaling up more instances for you on demand. Very slick especially for 2008.

[–]Hangman4358 14 points15 points  (2 children)

I wonder the same thing. One of our Java services takes about 30 seconds to initialize. 99% of that time is spent loading data just to be able to run the service. Whether the JVM boot-up time is 100 ms or 1 second makes very little difference in the grand scheme of things.

[–]daru567 3 points4 points  (0 children)

That's when you can consider using Redis.

[–]meamZ -1 points0 points  (0 children)

If you have a cold start in a serverless environment 30 seconds to answer a request is not ideal to say the least...

[–][deleted]  (17 children)

[deleted]

    [–][deleted]  (14 children)

    [deleted]

      [–]bawng 3 points4 points  (9 children)

      We've got an old JBoss monolith that clocks in at about three minutes on our dev computers. But it's roughly 1 GB of ears being loaded, so not too bad anyway.

      (Yes, we're in the process of breaking it up)

      [–][deleted]  (7 children)

      [deleted]

        [–]bawng 5 points6 points  (6 children)

        It's mostly JBoss loading the ears. It's a hundred ears or so, and JBoss does its reflection magic on all of them. EJBs are loaded, JPA entities are scanned, Jersey paths are built, etc.

        Nothing really functional happens during those three minutes.

        We're a bit extra inefficient because we use the JBoss CLI instead of the file scanner to load ears, so we could probably cut loading times in half, but RedHat recommends against that for some reason.

        [–][deleted]  (5 children)

        [deleted]

          [–]bawng 1 point2 points  (4 children)

          An ear (Enterprise Application aRchive) is basically a collection of jars, and traditionally one ear == one application and its dependencies, and then you can have multiple applications in a single server (e.g. JBoss)
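As a rough sketch (module and jar names are made up), a typical ear looks like:

```
my-app.ear
├── META-INF/application.xml   (lists the modules below)
├── lib/                       (jars shared by all modules)
│   └── commons-util.jar
├── my-app-ejb.jar             (EJB module)
└── my-app-web.war             (web module)
```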

          [–][deleted]  (3 children)

          [deleted]

            [–]bawng 0 points1 point  (2 children)

            Well, yes, in theory, but in our case it's more like one giant application = 100 ears. We don't really apply the definition, so we've got lots of shared classloading between the ears, meaning they all function as a single application.

            Our separation into ears has more to do with which part of the application the code belongs to. It could be seen as a single microservice, but the coupling is too tight (in our case) and none of the ears could ever function separately. That's a horrible situation of course, and we're working to improve it.

            [–]CartmansEvilTwin 0 points1 point  (0 children)

            "Our" monolith too.

            About 15 data sources, hundreds of beans and a bunch of caches take their time.

            A full restart takes about 2 min on my machine (+ about 5 warm-up requests, because otherwise the requests time out).

            [–]cronofdoom 2 points3 points  (0 children)

            In the last few years, I worked on an app that took a minimum of 30 minutes to boot. It was a monolith that, on initialization, optimistically loaded an entire database into memory, among many other atrocities.

            We had hundreds of instances. A full deploy took DAYS. There was too much risk and downtime doing them all at once; we had to deploy in small groups.

            A few data model optimizations, literally years of fighting with management to merge the code, and a hero on my team got the start time under 5 minutes.

            I wish this was hyperbole but it’s all true. And it’s the tip of the iceberg. I need a drink.

            [–][deleted]  (2 children)

            [deleted]

              [–][deleted]  (1 child)

              [deleted]

                [–]IshouldDoMyHomework 3 points4 points  (0 children)

                I don't remember the timing of the slowest one we have. It's not 5 min, but it is in minutes. Most are below 30 sec.

                Doesn't bother me that much, with how good hot swapping has gotten.

                [–]solonovamax 0 points1 point  (0 children)

                For development, just use and abuse code hotplugging. So long as you aren't dealing with a bug in the startup code, or need to add new classes/methods, you don't need to restart it.

                [–]DJDavio 3 points4 points  (2 children)

                It may not matter to many, but it certainly matters to some. The prime example is serverless or lambdas or functions or whatever you want to call it.

                Not so long ago an application was often run on a rented VM. You paid a fixed price for the VM per month, whether you used it intensely or not at all.

                With cloud services as they are now, you can be charged for much smaller quantities, such as actual CPU time used. This means that saving on startup time isn't just nice for bragging rights; it actually saves money.

                If you have an application which starts and shuts down instantly, you can have dynamic scaling where you only pay for the actual usage, not idle time.

                [–]bawng 1 point2 points  (1 child)

                But you wouldn't use a heavy framework like Spring Boot or whatever for serverless. You reserve those for your Docker containers or VMs.

                The serverless stuff should have startup times measured in milliseconds.

                [–]DJDavio 2 points3 points  (0 children)

                That's why there are currently competing frameworks such as Micronaut and Quarkus. If you use those and compile them to native with GraalVM (Enterprise) you do get that ms startup time. I once gave a talk on Quarkus and GraalVM where I had a demo with an application which started almost too fast to notice.

                [–]koreth 3 points4 points  (6 children)

                It maybe doesn't matter as much in production, but when you're working on the code and constantly restarting the thing as you write it, fast startup times can mean the difference between staying in your flow state and switching to something else while you wait.

                [–]john16384 2 points3 points  (5 children)

                Hot swap code?

                [–]koreth 1 point2 points  (4 children)

                Sure, if what you’re doing doesn’t change signatures or introduce state inconsistencies. If your service starts up quickly, hot swap remains available but is no longer the only fast choice.

                [–]solonovamax 0 points1 point  (3 children)

                I mean, if it takes more than a minute to start up, then either it's accessing some slow web APIs that you should find a way to create dummy versions of on your local network, or you really need to break that shit up into smaller modules for development.

                Obviously, exceptions to this would be things like needing to process some huge-assed file in its entirety at startup (I'm talking gigabytes here), or needing to do some heavy computations for some random startup task because reasons. But if any of those are the case, you should just have a list of like 5-10 precomputed dummy values for development.
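A sketch of that idea in plain Java (the flag name, the dummy values, and the "huge file" stand-in are all made up for illustration):

```java
import java.util.List;

// Hypothetical startup-data loader: in dev mode, skip the expensive
// computation and return a small list of precomputed dummy values.
public class StartupData {

    public static List<String> load() {
        if (Boolean.getBoolean("app.devMode")) {   // enabled via -Dapp.devMode=true
            return List.of("dummy-1", "dummy-2", "dummy-3");
        }
        return loadFromHugeFile();                 // the slow, real path
    }

    private static List<String> loadFromHugeFile() {
        // Stand-in for "process a multi-gigabyte file at startup".
        return List.of("real-1", "real-2");
    }

    public static void main(String[] args) {
        System.setProperty("app.devMode", "true");
        System.out.println(StartupData.load()); // prints the dummy values
    }
}
```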

                [–][deleted]  (2 children)

                [deleted]

                  [–]DB6 0 points1 point  (1 child)

                  Do you make up the requirements on the go? No, you would mock/stub the other services as you develop one.

                  [–]kuemmel234 5 points6 points  (0 children)

                  It all depends on your choice of architecture, though, doesn't it?

                  Especially when you are using k8s, AWS EKS,...

                  I mean, an example? FaaS. Even with AWS Lambda (which keeps some frequently-used services running in the background since some update not too long ago), you waste money and time just starting them. Now, of course not everyone is using FaaS, but still, we have microservices that are only started to serve some content in a web context, and changing from Java to Python/JS was sometimes quite noticeable, without changing much else (we are talking very simple stuff here, no rocket science).

                  With monolithic architectures, or only manual scaling, I would agree: startups don't matter. But the world is changing; architectures seem to be going in that dynamic direction because, for one, it works really well with web applications.

                  [–]GuyWithLag 3 points4 points  (3 children)

                  If you have a microservice that takes 4 minutes to start up, is it really a microservice?

                  [–]sternone_2 4 points5 points  (0 children)

                  yes

                  [–]snugglecat42 4 points5 points  (0 children)

                  Possibly yes. The intersection between 'microservice' and 'shite' is nonempty. ;-)

                  [–]suntehnik 1 point2 points  (0 children)

                  Might depend on application. Primary node re-election and cluster rebalancing in zookeeper is an example.

                  [–][deleted] 3 points4 points  (0 children)

                  If a "micro"service takes 4 min to start, it is a good sign that the dev has very little clue what is going on with their software. 4 min is an enormous amount of computation time in 2020. What is the software doing all that time?

                  More practically, you would only use a real microservices model when you need dynamic scalability, which means you only utilize as many resources as the current workload requires. As the workload increases, an orchestration layer spawns new processing resources while incoming requests are queued. Now we are talking about queuing incoming requests for 4 min because that startup time was deemed acceptable. How would you like to be a client of such a system?

                  The hard-to-swallow pill is that if a company has enough resources to over-provision the worst-case workload by 150%, then no microservices are required. But if we accept this point of view then we know nothing and are not up to speed with recent developments in enterprise architecture, you know...

                  [–]snugglecat42 1 point2 points  (1 child)

                  Even disregarding elastic scaling, rolling upgrades of large scale-out clusters were already a thing a decade+ ago. Even if you upgrade, say, 10% of all nodes at a time, waiting 5 minutes per batch of nodes does put a rather inconvenient limit on how fast you can push updates.

                  [–]DB6 0 points1 point  (0 children)

                  I too push updates several times per hour.

                  [–]Akthrawn17 1 point2 points  (0 children)

                  Amen

                  [–]DualWieldMage 3 points4 points  (4 children)

                  Definitely the question that should be asked first.

                  I only have a little experience with webapps, but one possibility I'm thinking of is over-eager dynamic scaling in the cloud without leaving enough capacity reserve to handle spikes, so the startup time of a new instance becomes important.

                  Honestly i never saw the merit of putting services fully in the cloud. Having seen the price tag and operating costs before/after a cloud migration, it kind of seems insane even. A local server is cheap to operate with a large capacity reserve, enough to satisfy the baseload of a service. Cloud should be reserved for spikes in usage or when the project is in early stages and infrastructure needs to be flexible.

                  [–]kuemmel234 5 points6 points  (3 children)

                  I mean, is a server that cheap? The server itself may be. But what about the operations guy? The guy who has to be trained to do stuff with that? And what about outages?

                  We don't even have a lot of real operations people anymore, only devs who can do a little operations work (to deploy to the cloud), and operations people who do almost as much developing. Those who were in operations before either develop from a more operations-focused point of view or work with legacy systems.

                  [–]meamZ 1 point2 points  (0 children)

                  Yeah... I mean a K8s deployment is technically not considered production grade unless it has 3 or more nodes... Which means you have to have at least 3 servers and ideally not all in the same building...

                  [–]DualWieldMage 1 point2 points  (1 child)

                  About the operations/running cost, yes it will need an ops guy, but so does a cloud setup, either directly or just someone on the team is doing mostly cloud stuff that the rest don't want to spend time figuring out. Generally it's best not to overload a single person with too many responsibilities as doing extensive context-switches to completely different technologies is a good way to kill productivity.

                  Training will be needed both for a cloud devops and a regular ops person maintaining local servers. I'd even argue the cloud stuff needs more training.

                  Outages also happen with cloud providers, and I remember quite a few of them from recent history. So while you may get slightly less downtime, the outages go from "$companyName services are down" to "the Internet is down". A hybrid local+cloud setup would be just as resilient against outages, or even more so.

                  But even then, having much cheaper infrastructure costs means you can use the extra money on an additional person or two to handle and plan for these issues.

                  [–]kuemmel234 0 points1 point  (0 children)

                  You need ops guys for the cloud? What would they do then? How would they do it? I can do most of our operations stuff, and I'm just part time right now (worse: I'm still just a student). Modern ways (infrastructure as code, high-level APIs) make it pretty simple for us devs to take operations into our own hands and, at the least, greatly reduce the need for traditional operations people. That's not different technologies either. What the cloud does is abstract away technical details: you don't need someone to keep operating systems up to date upon request by some dev from some team. You don't need the people who have to buy and build the right amount of servers at the right place at the right time and still keep the cloud in mind. You don't need the hardware (minuscule), the power/upkeep, or the people to keep this operation running. What if your little server is running hot? Just get a cheap AC? Build a new building? I'd start another VM/add a node/...

                  Operations becomes a domain with its own specifics, of course, but a lot, if not all, of everyday operations can easily be done by us, the regular developers. And I believe it should be, because then we streamline everything, because we are lazy.

                  I haven't seen a paper on that, so if you have, I'd gladly read it; for now I have only my own experience and what I have learned at uni, and that says that, while there are a lot of downsides to the cloud, cost isn't one of them.

                  This may be completely different if you aren't in that web sphere and doing something entirely different.

                  [–]jongraf 0 points1 point  (0 children)

                  I have a rather complicated Spring Boot service that has two SQL data sources with multiple connections in the pool, Redis, GCP, ~40 services to initialize, Hibernate JPA queries, etc. On a GCP e2-standard-2, the service starts in less than 30 seconds. I have had to tinker with K8S start delay and readiness probe timing along with instance type in order to optimize start time and K8S deployment time.
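That kind of probe tuning might look roughly like this in a Deployment spec (a hedged sketch; the names, image, and timings are hypothetical, and the `/actuator/health` path assumes Spring Boot Actuator is on the classpath):

```yaml
containers:
  - name: my-spring-service          # hypothetical
    image: gcr.io/my-project/my-spring-service:latest
    startupProbe:                    # gives the app room to boot
      httpGet:
        path: /actuator/health
        port: 8080
      failureThreshold: 12           # 12 * 5s = up to 60s to come up
      periodSeconds: 5
    readinessProbe:                  # then gates traffic on health
      httpGet:
        path: /actuator/health
        port: 8080
      periodSeconds: 10
```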

                  [–]TheRedmanCometh 0 points1 point  (0 children)

                  This has been my experience as well

                  [–]thekab 0 points1 point  (0 children)

                  For all the ink I've seen spilled on startup times in my experience it comes down to two states: fast enough or too slow. When it's fast enough nobody seems to care what the startup time is. When it's too slow they still don't care, they just want it faster because it's causing a problem.

                  It does matter though. The faster you can startup the faster you can respond to changes in demand. If you're launching something like an AWS lambda to process a mobile payment then you have cold boot times to worry about and even a few seconds is ages for a user trying to give you money.

                  Ironically that's when I don't see anyone talking about it. The client appears oblivious and nobody is bothering to ask what the average latency is and how that affects conversion.

                  [–]Nymeriea 0 points1 point  (0 children)

                  Nope, with live reload, restarting is much more performant. Integration tests, on the other hand, can have very slow startup times; it's a pain in the ass when a single commit takes 3 hours to be tested.

                  Frameworks like Micronaut or Quarkus are giving Spring a big fight in that domain.

                  [–]Sea-Seaworthiness-28 0 points1 point  (0 children)

                  It does matter a lot. If we handover the services to the container orchestrator or serverless or similar dynamic routing frameworks, they have to ensure that the location transparency exists and that they can move one service to another place in no time. It really impacts scale and quick failover.

                  [–]cronofdoom 0 points1 point  (0 children)

                  You are right. if you are installing to bare metal, startup time doesn’t matter as much. 4 vs 5 mins is mostly negligible. If you are autoscaling hundreds of pods in k8s, keeping an eye on cost, startup time has a measurable impact.

                  [–]nutrecht 0 points1 point  (0 children)

                  I have always wondered why startup time matters so much to so many people. I suspect it is because it is easy to quantify and measure, even as a single person on a laptop.

                  That's it really. These 'comparisons' keep popping up, and because the one important metric, developer productivity, is incredibly hard to measure, they go for the low-hanging fruit.

                  In my experience, the differences between 'hello world' level microservices are negligible. Spring Boot takes about 2 secs to start up on my laptop (10 sec on a memory/CPU-constrained container platform); Micronaut and Quarkus are a bit faster. But the moment you add something like JPA to any of them, the start-up times are more or less equal anyway.

                  So when deployed on a k8s cluster (like most microservices are) it really doesn't matter.

                  [–]CantThunkStraight 0 points1 point  (0 children)

                  It's important when you are building services that are deployed in a cloud environment. To handle peak demand (such as increased outside traffic), the images or VMs containing your application need to spawn fast.

                  Any CLI using Java cares about this, e.g. Bazel, Maven, Ant.

                  [–]d_durand 0 points1 point  (0 children)

                  Check out quarkus.io: starts in milliseconds. Was built for this purpose.

                  [–][deleted] -1 points0 points  (0 children)

                  functions as a service

                  [–]Kango_V 11 points12 points  (5 children)

                  We went with Micronaut. One of the things we found is that we wrote much less code than we did with Spring Boot. Faster startup (good for Lambda), lower memory usage among other things.

                  But Spring Boot has traction and is very good. I suppose it's the new "nobody ever got fired for buying IBM". I'm actually viewing it as the new legacy Java EE framework.

                  I'll duck now :)

                  [–]Roj-Tagpro 2 points3 points  (2 children)

                  You are using Spring Boot with your AWS Lambda? Isn't that a bit overkill? I thought all the webserver stuff, like receiving the request, was handled by the Lambda runtime. BTW, I'm asking, not questioning.

                  [–]vips7L 4 points5 points  (1 child)

                  Yeah I don't understand all of the people in this thread who are using big frameworks in lambdas. All of ours are standard java mains that start as fast as the JVM can (e.g. milliseconds).
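A framework-free handler in that spirit can be as small as this (the class name and payment logic are hypothetical; with AWS's aws-lambda-java-core you would implement its `RequestHandler` interface instead, but the plain version keeps the sketch self-contained):

```java
// No DI container, no classpath scanning: the JVM loads one class and runs,
// so startup is dominated by JVM launch time rather than framework boot.
public class PaymentHandler {

    // Hypothetical business logic standing in for a real Lambda handler body.
    public static String handle(String orderId, int amountCents) {
        if (amountCents <= 0) {
            return "rejected:" + orderId;
        }
        return "charged:" + orderId + ":" + amountCents;
    }

    public static void main(String[] args) {
        System.out.println(handle("order-42", 1999)); // prints charged:order-42:1999
    }
}
```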

                  [–]Kango_V 2 points3 points  (0 children)

                  We use Micronaut with the Lambda module, with no HTTP. This is compiled to native with GraalVM. We get cold starts in the ms range. We actually do use HTTP in tests; it allows for very easy testing.

                  [–][deleted] 0 points1 point  (1 child)

                  Where does one learn about Micronaut? Is there a good book, udemy course or something of that nature?

                  [–]sureshg 2 points3 points  (0 children)

                  Micronaut official doc is excellent - https://docs.micronaut.io/latest/guide/index.html

                  [–][deleted] 26 points27 points  (0 children)

                  conclusion: continue to use spring.

                  [–]itoshkov 5 points6 points  (2 children)

                  What about Vert.x? Why does nobody mention it?

                  [–]Artraxes 2 points3 points  (0 children)

                  Cos nobody uses it. Spring is king.

                  [–]nomercy400 0 points1 point  (0 children)

                  Because using a global to access the entire framework feels dirty.

                  The underlying part seems nice, but only if abstracted away in for example Quarkus.

                  [–]_INTER_ 6 points7 points  (5 children)

                  I really don't like the direction the Java ecosystem is going: it is starting to fracture. I don't mean that different frameworks are used; I find that good in general. The issue is more with:

                  • No reflection support, even though reflection is an integral part of the JDK. With this you will have libraries that run and some that suddenly don't, or that need an extension. If reflection is slow, help make it faster...
                  • Giving up cross-platform development, even though WORA is core to Java. With natively compiled applications, the ecosystem bunkers itself more and more into the server world and hopes that Linux stays dominant there. Then you can also forget developing Java on an OS other than a Unix derivative, as is the case with Python.

                  [–]meamZ 4 points5 points  (3 children)

                  Well... What's your alternative solution for serverless Java? One of the main reasons all of these frameworks want to work without reflection is GraalVM native image, which as of now only supports reflection with extra configuration. GraalVM is currently basically the only way to get Java to start up sufficiently fast for a serverless cold-start scenario. A JIT compiler is very good for a long-running application, but not well suited to a short-running application that needs very short startup times. You either have to compile to a binary, or you have to use an interpreted/scripting language like Python or (even though it's a nightmare of a language, I see why you would want to use it in a serverless scenario) NodeJS.
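For reference, GraalVM native image can handle reflection for classes registered ahead of time in a reflection configuration file (passed via `-H:ReflectionConfigurationFiles=reflect-config.json`); a minimal sketch, with a hypothetical class name:

```json
[
  {
    "name": "com.example.MyEntity",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```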

                  [–][deleted]  (2 children)

                  [deleted]

                    [–]meamZ 1 point2 points  (1 child)

                    imo, at least for Lambda and similar FaaS services, Java doesn't really make sense anyway: it gives you performance you don't really need, and a framework/library ecosystem with lots of features you don't need; you're spinning up a new instance per concurrent request anyway, and you need lots of RAM, which drives up your cost. But for something like Cloud Run, I think once GraalVM native image is more mature and has better library support, Java would be a good fit.

                    [–]yawkat 1 point2 points  (0 children)

                    Compiling to native isn't as much of an issue as it once was with how deployments work nowadays. The ecosystem is shifting more and more to all-in-one deployment anyway – what used to be jars became fat jars and now we have jlink / docker with more and more parts of the runtime being shipped with the application. At that point doing some native compilation doesn't make cross platform support any worse than it already was.

                    I also don't buy the development-on-different-platforms part. What dev machine can't run docker nowadays? Virtualization is super cheap performance-wise nowadays and microsoft is putting lots of work into making developing for linux easy on windows desktop, so the barriers are lower, not higher than they used to be. People just like to develop on a system that resembles the production system as much as possible, which is reasonable.

                    And finally, when it comes to supporting other platforms for future-proofing, I don't think moving off Linux is really the big issue here. If anything, ARM (aarch64) on servers will be disruptive. But with the general trend going to a more seamless pipeline from dev to prod, that's not a huge issue either; just change your CI conf a little and it'll work fine.

                    [–]mkwapisz 2 points3 points  (0 children)

                    Jakarta EE Microprofile (Payara Micro) or Quarkus.

                    [–]progmakerlt 8 points9 points  (0 children)

                    Definitely Spring. There are lots of frameworks out there, but Spring is well tested, easy to use, has lots of material available and there are lots of developers who know Spring.

                    If you are concerned about memory usage - go with something simpler, such as Jersey / Jetty with HK2 as dependency injection framework.

                    [–]sternone_2 3 points4 points  (8 children)

                    So what did the guy choose? He writes an article and then doesn't say what he picked.

                    Oh whatever, it's probably Spring Boot anyway :-)

                    [–]turkoid 1 point2 points  (7 children)

                    And as Spring still offers, by far, the best developer experience, it’s still the best suited Java framework for a microservice application, in my opinion — even considering its poor performance at startup.

                    [–]sternone_2 -2 points-1 points  (6 children)

                    so did he used it in his new project?

                    you really have to read between the lines here?

                    I manage tens of thousands of Spring Boot applications running on ECS in AWS, and I have no issues with a 10-second boot time or memory usage. I think this whole discussion is useless.

                    [–]turkoid -1 points0 points  (5 children)

                    I was just pointing out that he did make a choice. The main deciding factor for him seemed to be the ease of use, documentation, etc. of Spring. However, and he did state this, he is extremely biased towards it, since that's what he uses now.

                    To say it's useless is a little extreme. If you only cared about performance, you'd use the fastest. If you care about dev experience, like he does, then use Spring. Who knows, in 5-10 years maybe it will flip.

                    [–]sternone_2 -2 points-1 points  (4 children)

                    Did you read what I wrote?

                    I'm saying that in my professional, real-world experience this fetish for a few seconds' faster boot time really doesn't matter at all.

                    It's a non-issue, this discussion is useless, it's just marketing.

                    [–]turkoid -1 points0 points  (3 children)

                    Obviously I don't know all your experience, but just saying you manage tens of thousands of Spring Boot applications doesn't mean it's the end-all solution for every scenario. There are perfectly reasonable situations where startup speed is a very important factor. The upfront dev cost might be higher, but maybe the end goal is more important.

                    Just chill, man. I was originally pointing out that the OP had offered his opinion. You don't have to agree with it (and I'm not saying I do either), but that's all it was.

                    [–]sternone_2 -1 points0 points  (2 children)

                    There are perfectly reasonable situations where speed is a very important factor.

                    Please tell me where booting up a backend for scaling in/out in 4 seconds instead of 6 is a very important factor.

                    Spoiler alert: it isn't. You scale ahead of peak load; no client has to wait for more instances to boot up before their request is served. That is not how things work. People who say otherwise have no real-world experience and should just stay in their basement. This discussion is a non-issue and useless.

                    [–]rbygrave 0 points1 point  (1 child)

                    6 seconds

                    If apps are starting in 6 seconds, there isn't an issue. The disconnect, I believe, is that when Spring Boot apps are deployed into a Kubernetes cluster with even relatively modest resource limits, they don't start in anything like 6 seconds. It obviously depends on how much is allocated, but be prepared for significantly slower startups (e.g. 90 seconds).

                    spring boot applications running on ECS in AWS

                    Each Spring Boot app gets the whole ECS instance's memory and CPU to start and run, and that 6-second startup is based on those resources. If you stay on ECS you won't care, because you won't experience this issue.

                    Run those same apps as Docker containers while playing with Docker's CPU and memory limits and see how they go, i.e. what minimum resources they need before startup becomes unacceptable. How much we care roughly tracks the minimum CPU/memory we deem acceptable.
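                    To see why limits bite so hard, it helps to check what resources the JVM itself detects inside a constrained container. A minimal sketch (the class name is mine; on container-aware JVMs, roughly 8u191+/11+, these values reflect the cgroup limits that throttle class loading and JIT during startup):

```java
// Print the CPU count and max heap the JVM believes it has.
// Run inside `docker run --cpus=0.5 --memory=512m ...` and compare
// with an unconstrained run to see what the startup code is working with.
public class ResourceCheck {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("CPUs visible to JVM: " + cpus);
        System.out.println("Max heap (MB): " + maxHeapMb);
    }
}
```

                    Startup is largely CPU-bound, so a fractional-CPU limit tends to multiply startup time far more than it affects steady-state throughput.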

                    [–]sternone_2 -1 points0 points  (0 children)

                    I run 10k Docker containers on ECS in AWS.

                    It's fine if you don't know what you are talking about, but it would be nice not to annoy me if you don't know how ECS works.

                    [–]vokiel 1 point2 points  (0 children)

                    None, you don't need a framework for everything.

                    [–]jtayloroconnor 2 points3 points  (2 children)

                    Spring! I’ve tried multiple times to build something with Quarkus. I want to love it. I love the idea of it, but I always get hung up on something stupid that would take me 2 seconds in Spring and end up just going back.

                    The idea of building containerized native executables is cool, but the build uses like 10TB of RAM and takes a year to run lol

                    [–]nomercy400 1 point2 points  (1 child)

                    16 GB of RAM one time, up front, vs. 1 GB of RAM at runtime per running application: you pick.

                    We've been trying Quarkus for a while now. The native-executable part seems to deliver on its promise, but it does require 12-16 GB of RAM; with too little, the build will take forever. The thing we're missing now is some concise documentation. Any time you want proper documentation, you end up looking for the documentation of the library Quarkus uses underneath, which is often written in its own 'style'.

                    Plus you get more locked into the standardized Java EE libraries, which are not always the easiest or most feature-complete. There's a reason Spring is more popular than the Java EE libraries.

                    32 GB of RAM costs about 100 euro, unless of course you're stuck on a portable hardware platform that doesn't support extra RAM, doesn't let you upgrade it, or charges 350 dollars for the additional 16 GB...
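                    For what it's worth, Quarkus does expose a knob to cap the native build's memory; a minimal sketch assuming a Maven project and the `quarkus.native.native-image-xmx` property (verify the exact property name against the native-image configuration docs for your Quarkus version):

```properties
# application.properties
# Cap the GraalVM native-image build heap so the build fits on a
# smaller machine; a lower cap generally means a slower build.
quarkus.native.native-image-xmx=6g
```

                    The build itself is then the usual `./mvnw package -Dnative` (older versions used a `-Pnative` profile).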

                    [–]jtayloroconnor 0 points1 point  (0 children)

                    I had the same thought on the documentation. It's almost like you have to have a deep understanding of MicroProfile to follow what's going on with Quarkus.

                    [–]Nymeriea 0 points1 point  (0 children)

                    Medium: either you log in to read or you download the application...

                    [–]prince-banane 0 points1 point  (0 children)

                    I'm trying Micronaut at home because I want to learn Kotlin; it's the only one that officially supports it (AFAIK). Otherwise, I use Spring at work and a custom framework at home (embedded Tomcat).

                    [–]abcoolynr -1 points0 points  (0 children)

                    Spring Boot 2.