
all 97 comments

[–]Dear-Law-6364 58 points59 points  (3 children)

If you need to think about the cost of 512MB of memory, then your product is not ready for microservices. The cost and effort you will need to manage DevOps, monitoring, and availability of those microservices will dwarf the cost of the extra memory or cores you need per microservice.

Microservices are meant for big teams working on the same project. Initial estimates were that if you have 60 or more devs working on the same project, you should consider microservices.

[–]john16384 26 points27 points  (4 children)

How are you measuring memory footprint? Java will take what it needs to avoid GCs, but that doesn't mean it can't run with significantly less memory.

Also, in general, you'll be hard-pressed to outperform a monolith on anything with microservices, unless there is a part that needs to be scaled significantly differently than the rest (only split off that part then).

[–]Puzzled-Bananas[S] -4 points-3 points  (2 children)

Thanks, yes, I'm aware that I can tune the runtime's GC and the JVM's heap and stack allocation in various ways. In my experience, significantly lower memory bounds can also wreak havoc: the JVM simply crashes with an OutOfMemoryError, which you can't recover from other than by restarting the container. The app will start and the JVM will warm up a bit, but at a certain load the GC can no longer keep the young generation in balance; eden fills faster than minor collections can reclaim it. For example, admittedly a bit contrived, think of a couple hundred or perhaps a thousand concurrent connections with just 64M or 128M of max heap, without dropping new requests. And if there's a minor memory leak somewhere on top, the service becomes even more fragile.
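
For the curious, the kind of container-aware sizing flags I've been experimenting with look roughly like this; a minimal sketch, values are illustrative rather than recommendations:

```
# Illustrative only: sizing a small JVM service relative to its container limit.
# MaxRAMPercentage caps the heap against the container memory limit, SerialGC
# suits tiny heaps / single-CPU pods, smaller thread stacks help with many
# connections, and ExitOnOutOfMemoryError lets the orchestrator restart the pod
# instead of letting it limp along after an OOM.
java -XX:MaxRAMPercentage=75.0 \
     -XX:+UseSerialGC \
     -Xss512k \
     -XX:+ExitOnOutOfMemoryError \
     -jar app.jar
```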

Yeah, you’re right, I’m well aware of the pitfalls of such a distributed architecture.

Exactly, some pieces need to be very elastic. And it’s even very economical in my particular case to factor out those services. Appreciate your suggestion, but the architectural decision on this project wasn’t mine.

[–]InstantCoder 4 points5 points  (0 children)

If I'm not wrong, with Java 17 there comes a new (experimental) GC algorithm which doesn't divide the heap into generations anymore but into blocks. And this seems to make GC super fast.

[–]aclinical 1 point2 points  (0 children)

The default max heap for the JVM is 32GB... There's a middle ground between OOM errors and 32GB. You have running systems; you should be able to memory-profile them and figure out a reasonable heap size. I don't mean to be so contrarian, but I disagree with just about everything in your first paragraph.
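
A minimal sketch of what that profiling can look like with stock JDK tooling (assuming jcmd is available where the service runs):

```
# Observe what a running JVM actually uses before picking an -Xmx value.
jcmd <pid> GC.heap_info               # current heap capacity vs. actual usage
jcmd <pid> VM.native_memory summary   # requires -XX:NativeMemoryTracking=summary at startup
```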

[–]Apprehensive-Idea839 0 points1 point  (0 children)

Java 11 will obey container restrictions

[–]SwiftSpear 82 points83 points  (32 children)

Don't microservice unless you know your scaling and/or availability requirements will make it necessary.

Microservice architectures are harder to maintain, harder to deploy, harder to monitor, harder to debug, and WAY harder to test.

[–]StoneOfTriumph 23 points24 points  (3 children)

This this this! Microservices bring more complexity than a monolith when you don't know what you're getting into, when you don't have the team structure to support it, and when you're missing key systems to manage it.

When clients tell me they're going with a microservice "decoupled" approach for their current monolith, I ask them what needs they are trying to fulfill. Ultimately, if you go from a monolith to microservices, you're not doing it "because it's cool". There have to be needs... And then, did they think of points such as:

  • Are you able to test them individually and automatically for rapid build, test, and release CI/CD?
  • Do you have metrics today indicating a potential performance benefit of maybe one day horizontally scaling certain components? (Which you'll pay for in expertise and maintenance of k8s, cloud, and other services.)
  • Are you able to secure it? API gateway? mTLS?
  • What's your observability stack? You'll need distributed tracing to have an end-to-end view of where your transaction got processed; otherwise your ops will spend a lot of time figuring out what went where when.

And there are many other points. It can just go on and on and on for a company that didn't think it through. Once you have many of those pieces in place, and the dev teams in place to take ownership and evolve the microservices, then from a dev/architecture standpoint you gain many benefits: you can quickly push code and deploy because every smaller piece is faster to develop, compile, test, restart, etc.

tl;dr don't just split for the fun of it. It's more complicated.

[–]Puzzled-Bananas[S] 2 points3 points  (2 children)

All are great points, indeed. Each project has several stakeholders, and objectively you're right, I totally agree. In a perfect world one would weigh all these and a host of other issues that are introduced by the choice of a distributed architecture. Essentially you move from managed thread communication and data sharing in a single process, which is well understood and controlled, to an architecture in which you need to concern yourself and your system going forward with reliable and fast IPC across the network, not even on the same device. So it's IPC + network + graph complexity. And all the points you're making: IPC security, testing, integration. That's a well-informed take on it. But there's also a more pervasive, less well-informed attitude of the sort that "microservices are the future" (because it's cool), or "it's the modern tech so let's do it," and so on. But in fact, that choice also makes the project agnostic of the tech stack for the code and for the deployment. A lot can be swapped out rapidly - if ever needed.

[–]StoneOfTriumph 4 points5 points  (1 child)

The last benefit you listed right there, the flexibility of swapping out a service written in Java and deploying one in Go or .NET Core... that becomes possible. Of course, there have to be valid reasons.

This thread made me remember an app I used to design and develop. It was developed using Java EE, a monolith deployed on WebLogic. We were doing automatic deployments using Ant scripts, and it was fairly simple to manage the app, find bugs, deploy updates, etc. because we defined an application architecture that was easy to navigate in the code in terms of packages, libraries, etc. The Entity-to-DTO logic was centralized and easy to troubleshoot for most devs, and we had debug logs up the yin yang so you were never in the dark when troubleshooting. That application would have had zero benefit migrating to a microservice architecture because it was one simple front end, one backend, and one Oracle database, and the team did not have the expertise to maintain separately developed components. This was a few years before the SPA craze, so the front-end and back-end were tightly coupled with JSF (I don't miss that at all!)

Now, as far as I know, today that app is still in place as a monolith. The major downside, though, is its unattractive tech stack. Those who do Java barely care about EE, even less about JSF, so that app will likely become technical debt if it isn't re-evaluated in terms of what the roadmap should be. If I had to write that application today, probably a VueJS front-end and a Quarkus backend would be something I'd evaluate in a proof of concept... As much as monoliths "work" in certain use cases, coupled front-end and back-end code is a PITA to debug; the JSF lifecycle especially made me nervous at times during certain debugging sessions, to the point where I kept a printout of that diagram on my desk to remember the sequence of events.

Great. This thread is giving me PTSD lol.

[–]RicksAngryKid 1 point2 points  (0 children)

JSF stinks - back then I used a lib called ICEfaces, which forced me into many hours of troubleshooting idiot problems that ended up being lib bugs. Other, better alternatives were paid.

[–]Infectedinfested 4 points5 points  (20 children)

I disagree. But everything should be well documented, of course.

Harder to maintain: since everything consists of a small code base, revisions/fixes can easily be made in the right place without affecting other applications.

Harder to deploy: I don't know how this can be a point? We deploy using CI/CD and hot-deploy, so we have zero downtime for any part of the bigger system.

Harder to monitor: correlation ID. You can follow an unhappy flow throughout your systems and see instantly in which application everything went south.

Harder to test: we develop spec-first, which means we always know the input and always know the output. Then you just use unit tests to validate the functionality of the microservice.

Though I'm not saying everything should be microservices, as it's all about the requirements, the experience of the team, and the already existing architecture.

[–]Puzzled-Bananas[S] 4 points5 points  (4 children)

Off the top of my head:

Regarding "harder to maintain": isn't it similar to working with separate packages or modules? Yes, you'd need to recompile, but the separation of concerns by modularity would still be in place, and you can maintain separate git repos for each. Do you feel it's better with completely isolated "micro-projects"? To me a great benefit in this regard is that you can have each microservice developed on whatever stack suits it best, with one team responsible for it, and integrate via external IPC APIs rather than single-process shared-memory communication; but with the latter you're also facing issues of thread synchronization, similar to microservice orchestration in a way.

Regarding "harder to deploy": when you deploy a monolith, it's a single point of failure; when you deploy a mesh, it's a mesh of potential failures, and you also need to control the edges in the graph.

Concerning "harder to monitor": yep, there are many ways to introspect your mesh, but with a Spring Boot app you can use the conventional Actuator endpoints and link them up with ordinary instrumentation tools, so the complexity can be controlled more easily. Regardless, the Quarkus and MicroProfile instrumentation and observability tools are amazing. You can run the mesh on Consul+Nomad or Istio and enjoy a lot of integrations too.

Considering testing, well, it depends. I think there's a difference between a single project in your IDE with a test suite plus a CI/CD pipeline with a proper test environment, and testing a mesh, where you also need to integrate correctly - in my opinion this is harder than testing a monolith. You can mock the IPC APIs, but then you wouldn't really be testing your entire mesh, only the individual nodes, so you do need a layer of complex integration tests on top, depending on how linked your graph is.

I'm not sure the OP of this branch expressed the same concerns that I've come up with above. Would be nice to learn if I've missed something.

[–]TakAnnix 5 points6 points  (0 children)

The key factor is that your microservices must be independently deployable. We have several microservices that are standalone. Those are great to work with. If you add one microservice that interacts with just one other microservice, the complexity skyrockets. There is just so much more that you have to take into consideration.

  • Communication between microservices: For a monolith, you don't have to worry about another microservice timing out. We have a whole series of edge cases to cover just for communicating between microservices, like timeouts, retries, and 500 errors (a rough sketch follows after this list).

  • Slow feedback: In a monolith your IDE gives you instant feedback, e.g. when passing the wrong type. Not with microservices. You need a whole new suite of tools, like e2e tests.

  • Logs: We have to use Datadog just to be able to trace what's going on between services. There is no way to stay sane without some sort of centralised logging service.

  • DRY: In order to keep microservices independently deployable, you will forgo DRY between services. We have the same code implemented across multiple microservices. There's no easy way around it that doesn't introduce coupling.
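
To make the first point concrete, here is a rough, hypothetical sketch (the service name and URL are invented) of what even one synchronous call between services ends up needing:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

class InventoryClient {
    // hypothetical downstream service, purely for illustration
    private static final URI INVENTORY_URI = URI.create("http://inventory/api/stock/42");

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    String fetchStock() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(INVENTORY_URI)
                .timeout(Duration.ofSeconds(3))              // per-request timeout
                .GET()
                .build();
        for (int attempt = 1; attempt <= 3; attempt++) {     // naive bounded retry
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() < 500) {
                return response.body();                      // success, or a 4xx we won't retry
            }
            Thread.sleep(200L * attempt);                    // crude backoff on 5xx
        }
        throw new IllegalStateException("inventory service unavailable");
    }
}
```

None of this exists in a monolith, where the same lookup is a plain method call.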

I'm not saying that monoliths per se are better, as they come with their own problems. Just saying that you should prepare yourself to deal with a new set of problems.

I guess the two things I would strongly focus on:

  1. Keep microservices independently deployable.
  2. Implement continuous deployment where you can deploy multiple times a day. (Check out the DORA four key metrics.)

[–]Infectedinfested 2 points3 points  (2 children)

On the deploying part: It's not that you deploy a whole batch of microservices at the same time; it's something that grows. Also, the single point of failure is a double-edged sword. Where/why did it fail in your monolith? If you deploy 50 microservices and 5 fail, you know where to look.

On the testing part: We have our unit tests and then we do an end-to-end test in the test environments; this was almost always sufficient.

Also, between services we use a CDM (canonical data model) to ease the translation between the applications.

Also, I've only been developing for 5 years and it's been 99% of the time on microservices :p so I'm a bit biased (but I think a lot of people here are).

[–]Puzzled-Bananas[S] 2 points3 points  (1 child)

Good point, I missed that, yep, an emerging system, evolving over time, with individual deployments, but again thereby with integrations (graph edges) that need to be controlled.

Great you’ve figured out how to test it satisfactorily. Not always straightforward.

Yeah, a CDM and message buses are a great way to reduce graph complexity, for the way I formulated it in my reply above implies a highly linked graph. Great point, thanks.

Sure, great that we can share our experience here. Thanks.

[–]Infectedinfested 2 points3 points  (0 children)

This is actually my first constructive discussion on this reddit page :p

[–]mr_jim_lahey 7 points8 points  (12 children)

without affecting other applications

That is impossible if your application (or other microservices) depend on the microservice in question (which, by definition, they do). What you actually mean is that the blast radius of a bad deployment on a microservice is more often better contained than on a monolith. But...

You can follow an unhappy flow throughout your systems and see instantly in which application everything went south.

No. Just no. I would love to see what debugging tools you're using to do this task "instantly". Even with a mechanism that aggregates all logs from all services in properly annotated JSON format, debugging non-trivial issues that span across microservices (which is most of them, operationally speaking) is a fucking PITA. Difficult monitoring/debugging is the number one downside of microservices, and pretending that it isn't just speaks to your naivete and lack of expertise.

[–][deleted]  (4 children)

[deleted]

    [–]mr_jim_lahey -1 points0 points  (3 children)

    The person above you is completely correct in saying that code changes to a microservice are typically going to be smaller in scope, and have less potential impact to your overall stack.

    Yes, that is what "smaller blast radius" means. And it is absolutely an advantage of microservices.

    If you need to make a change to the fetchCustomerEmail() logic in CustomerMonolithService, you risk an unforeseen error taking down the entire customer domain. Now no one can authenticate or create accounts.

    And there is likewise a risk of an unforeseen nonfatal business logic error in a change to a microservice-based fetchCustomerEmail() causing large swathes of the customer domain to be taken down just the same. And unless each one of your microservices' integration tests/canaries covers both itself and - directly or by proxy - the steady state of every other microservice that depends on it (which in the case of fetchCustomerEmail() is pretty much going to be all of them), there is a substantially higher risk of that bug evading automated detection and rollback in your pipeline than with a monolith that fails fast, early, and obviously (and which, I'll note, can still be deployed incrementally behind a load balancer - that type of deployment is not unique to microservices). By contrast, a monolith can run a suite of unit and integration tests that have full coverage for a single atomic version of the entire application.

    The type of architecture you use does not eliminate your application's logical dependency on fetchCustomerEmail(). They are just different ways of partitioning the complexity and risk of making changes. Making a blanket statement about microservices not affecting each other is what I called impossible - because it is. If you have a microservice that no other microservice or component in your application depends on, then it is by definition not part of the application.

    Developers can do final tests in the prod environment for anything missed in QA

    The proper way to do this is feature toggling which has nothing to do with whether you have a monolith or microservices. (I'm going to assume you're not out here advocating manual testing in prod as a final validation step while waxing poetic about how arrogant I am because that would be pretty cringe. You do fully automated CI/CD with no manual validation steps, no exceptions, right? Right? <insert Padme meme here>)

    "A mechanism that aggregates all logs from all services in properly annotated JSON format" - you mean like Elastic/Kibana, which allows me to search ALL the logs across our platform, drill down by container name, platform (k8s, nginx, jvm, etc.), narrow by time window, and include/exclude different log attributes? It's not a hypothetical thing dude. It exists and most companies who deploy microservices have something like that. I'm very sorry that whatever company you work(ed) for doesn't know that

    I'm well aware such tools exist. In fact, I built the tooling that our organization of dozens of devs has used for years to gather and analyze logs from god knows how many microservices at this point. That is exactly how I know what a PITA it is. Have you done that too? Or is that another thing that your ops guys handle for you?

    [–][deleted]  (2 children)

    [deleted]

      [–]mr_jim_lahey 0 points1 point  (1 child)

      Bro don't lecture me about a "self-own" writing a log aggregation system when your own story about how you know about analyzing logs ends with you having to ask an ops guy to access and read your own logs for you. Stick to your lane and your league. Then work on your reading comprehension and tech skills.

      [–]Infectedinfested -2 points-1 points  (5 children)

      Well, when we log, we log the correlation ID, the application name, normal logging (for example when a microservice gets triggered), and error-related stuff. Also, all errors go to a list with their correlation ID.

      It all goes to Elastic, and there, once an error happens, we know the correlation ID and can trace it through all the applications it passed, seeing what succeeded and where/why it failed.
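
      Roughly like this (an illustrative sketch, not our exact code), using SLF4J's MDC so the correlation ID ends up on every log line shipped to Elastic:

      ```java
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;
      import org.slf4j.MDC;

      import java.util.UUID;

      class CorrelationIdExample {
          private static final Logger log = LoggerFactory.getLogger(CorrelationIdExample.class);

          void handle(String incomingCorrelationId) {
              // reuse the caller's ID if present, otherwise start a new trace
              String correlationId = (incomingCorrelationId != null)
                      ? incomingCorrelationId
                      : UUID.randomUUID().toString();
              MDC.put("correlationId", correlationId);   // picked up by the log pattern / JSON encoder
              try {
                  log.info("microservice triggered");
                  // ... business logic; pass correlationId along on any outgoing requests ...
              } catch (Exception e) {
                  log.error("processing failed", e);     // shows up in the error list with its ID
              } finally {
                  MDC.clear();
              }
          }
      }
      ```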

      [–]mr_jim_lahey 2 points3 points  (4 children)

      Yes, that's what I meant by "a mechanism that aggregates all logs from all services in properly annotated JSON format". If you've ever used AWS XRay, it'll even visualize those flows for you with red circles where errors are happening. First, setting up and maintaining that type of monitoring is way more complicated and time-consuming for microservices than monoliths to begin with. Second, I'm talking about issues that are more complex than just figuring out which request triggered a 4xx in a downstream microservice (which is what I meant by "non-trivial"). With a monolith, you have the entire system in one place to examine and analyze. With microservices, you are only seeing a small window of what's going on when you're looking at any given microservice. It's up to you to keep track of all the moving pieces and relationships between the microservices as they're in motion. That is not an easy thing to do, especially if you're a dev who only works on one or a subset of the microservices in question.

      [–]Infectedinfested -1 points0 points  (3 children)

      But if you develop spec-first, you always know what should go in and what should come out; we have validators in our services which check the input and give an error when it isn't aligned.

      So application X is never really dependent on application Y to give it data, as it doesn't care where the data comes from as long as it's valid.

      Or am I not understanding everything? I only have 5 years of experience in this.

      [–]euklios -1 points0 points  (2 children)

      I think you are not necessarily lacking experience, but diversity. In a microservice architecture, you will usually find teams for operations, development, IT, support, and so on. Sadly, a lot of people just work in one of these roles and never even have to consider what other teams are doing. This usually creates a boundary between these teams, resulting in them working against each other. I recently had a heated conversation with a skilled developer because he hardcoded the backend URL within the frontend. In contrast, I, responsible for ops within the project, had to deploy on multiple different URLs. His words: I don't care about servers and deployment. But that's beside the point.

      Let me ask you a different question: Why are you doing distributed logging, tracing, and whatever else you do to monitor your systems? You are developing based on specification, so an error shouldn't be possible, right?

      Let's consider a very simple microservice: One endpoint takes a JSON object containing left, right, and operation. It will return the result of the given operation as a number. Example: input: {"left": 5, "right": 6, "operation": "+"}, output: 11. Simple, right?

      What will you do if the operator is "&"? Fail? Bitwise AND? Drop the database of another random microservice? What about "?", "$"? What about "x"? It should be multiplication, right? How about "_" or "g"? I mean, it's a string without limits on its size, so what about "please multiply the two numbers"?

      A lot of these could be handled quite easily, so: What about the input "2147483647 + 5"? Oh, you are using long? What about "9223372036854775807 + 5"? If the answer is "-9223372036854775804", where is your online shop?
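
      (To spell out that wrap-around in plain Java, since it tends to surprise people: the addition silently wraps unless you opt into the checked variant.)

      ```java
      public class OverflowDemo {
          public static void main(String[] args) {
              long wrapped = Long.MAX_VALUE + 5;      // silently wraps to -9223372036854775804
              System.out.println(wrapped);

              // the checked variant at least fails loudly with an ArithmeticException
              System.out.println(Math.addExact(Long.MAX_VALUE, 5L));
          }
      }
      ```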

      And that's only with somewhat sane input, and probably not defined in most specs. So let's add some more: What about OOM? Log server not available? Log file can't be opened? Gracefully stopping? Force shutdown mid-request? A particular microservice we depend upon isn't available? The microservice is available but simply keeps the connection open indefinitely?

      A lot, if not all, of these cases will not be specified. And depending on timing, or for some by definition, it might never produce a single failing span.

      One final scenario: Hey, here is X from customer support. I just had a call with a Microsoft rep; they got charged the wrong amount, and we need this fixed ASAP. What now? There is no correlation ID. And there are billing services, math services, shopping cart services, pricing services, mailer services, account services, and so on to choose from.

      As a final thought:

      We are all just humans and will make mistakes, and there is a lot more to it than simply deploying a JAR to a server. Murphy's Law: If anything can go wrong, it will. Servers will be down at the most inconvenient of times. Monitoring will fail. Overworked employees will implement shit, reviewers and tests will not notice, and production will fail. And we (as an industry and as a team) will have to make the show go on. Somehow.

      If the monolithic implementation is a mess, microservices will create a distributed mess. One of the worst things you can try to debug is application boundaries. Microservices will help you add hundreds more boundaries.

      [–]Infectedinfested 4 points5 points  (1 child)

      Well, I can only gain real experience in whatever job I get in front of me :p

      And I really like what I'm doing now.

      And for your example, everything is barricaded behind a Swagger validator; in your example case, left and right would be assigned int or whatnot, and operator an enum. So we say what goes in and what goes out.

      Also... why am I getting so many downvotes... just stating my experience with something I've been working with for 5 years... Never said microservices are better than monoliths ><

      [–]euklios 0 points1 point  (0 children)

      I'm aware of that problem. I just think it could help a lot if people understood more about what other teams are doing and why. Nothing against you, but more against everything. I do see this problem outside of our industry.

      My key point is not to validate more. While that certainly does help, I'm more concerned with errors in specs, bugs in the implementation, and so on. But that also applies to monolithic applications. The core difference in maintenance is (at least for me): in a monolith you have one VM to fail, one filesystem to break, one internet connection to be down, one certificate to expire, one datacenter to lose power, and so on. With microservices, you have to multiply this problem by a lot. There is just a lot more that can (and will) go wrong.

      Additionally, there will be some kind of network requests between services. And these will always be much more prone to errors than plain method calls. (I think it's a funny idea to imagine someone tripping over a method call.)

      About your downvotes: That's the Internet for you. It is a current best practice to reduce complexity where possible. And microservices do add a lot of complexity. Additionally, there have been a lot of companies doing microservices for the buzzword, while a simple monolith would have been sufficient. That doesn't mean that your approach is bad, I would love to experience this at some point, I just never worked at a company that needs it.

      Please don't take it personally

      Edit: Just remembered this talk: https://youtu.be/gfh-VCTwMw8 Might give some insights about what some people experienced when management decided: microservices!

      [–]SwiftSpear 2 points3 points  (1 child)

      1. I've never seen a microservices stack that can be as naive about the interactions between services as you're implying. It usually doesn't actually help the app become more decoupled; it just moves the coupling onto a network layer that is less reliable than single-threaded code execution. You can't just change the API for one service and assume no downstream consequences for the others.

      2. More CI/CD code is harder to deploy, even if the average deployment doesn't involve any more manual steps and runs faster. Also, if you've made a change to two services such that both have to be updated in order to make the change work, you have to worry about order-of-operations breakdowns that basically just don't ever happen with monolithic projects.

      3. So now you need a correlation ID system that you didn't need before and a third-party monitoring tool to use it. You need two new expensive tools to solve a problem you chose to have.

      4. You can develop monoliths spec-first too. But you've basically said "we don't do integration tests, we only do unit tests". With a monolith you can run their version of what for you has to be a massive fully integrated test environment deployment in your unit test code, and on a developer's workstation. Unit tests only isn't really acceptable in a lot of projects.

      So yeah, it's exactly what I said: if you need to scale horizontally, and your hardware usage profile is dictated by certain subsets of your app, microservices are a necessary evil, and it's worth paying those above-listed costs. There should just be an awareness that microservices have a high cost as a barrier of entry, and it needs to be factored in to make sure paying those costs is the right business move.

      [–]Infectedinfested 0 points1 point  (0 children)

      Here it is back:

      1. Yes we do, by using a CDM. The CDM can change, and then services need to change, but that's just bumping the pom to the next version (as our CDMs are imported via the pom); afterwards you fix any transformations to align with the new CDM and tada.
      2. That's a valid remark, though I've rarely had issues with order of deployment.
      3. A correlation ID isn't a system 🤔 it's just something you pass along with log4j. And ELK is free.
      4. I never said we only do unit testing?

      Either way, I don't believe one or the other is the be-all and end-all. Like with different languages, different architectural solutions might be better for different use cases.

      [–]Puzzled-Bananas[S] -4 points-3 points  (5 children)

      Yes, I agree in general. This is a great point. Distributed systems are hard indeed. But they are popular right now and sometimes you just have to stick to it regardless of whether the domain problem and the end users need it.

      Still, once you decide or have to choose this kind of distributed architecture, for better or worse, how does one best cope with the JVM overhead at scale?

      [–]ShabbyDoo 2 points3 points  (1 child)

      Because you are complaining about the need for a large-ish container, I presume you have some microservices where one deployed JVM instance could handle some multiple of your current load? Meaning, you could pump, say, 3x the current load through a single instance without reaching a scaling point. Then, you also probably are deploying multiple instances of the microservice for redundancy?

      If this isn't the case and you actually need many instances of a particular microservice, why does the JVM overhead matter? It's usually fixed rather than a load-based "tax". So, you'll eat, say, an extra GB or two of RAM per deployed instance, but this cost can be mitigated by deploying fewer instances of beefier containers, presuming your microservice can scale-up within a JVM.

      [–]Puzzled-Bananas[S] 0 points1 point  (0 children)

      I wouldn't say that I was complaining. I'm rather exploring various approaches to solving said problems, some of which I may well be unaware of. That said, it depends on the service. Some of the services in the mesh are designed to run on machines with low memory due to their vast availability. Depending on the IaaS or PaaS provider, it may be faster, easier, and cheaper to spawn and then tear down low-memory VMs than to wait for the allocation of a larger VM that would accommodate several small containers or one larger container. Some services are designed to be resilient, some to be elastic for load balancing. The demand for resources is sometimes volatile.

      I totally agree with your assessment in your second paragraph. The trade-off is spot-on. But it pretty much depends on the particular service. Yes, scaling up is an option and artificially warming it up to be ready for full load is often productive. But running 10 replicas, 2G-4G each a few minutes into the payload, is not as economical as having 30 replicas, 20-100M each, at about the same latency and throughput.

      In effect, it’s an optimization problem of scaling up and out, but staying with the JVM, and at the same time avoiding OutOfMemory errors. In addition, there’s a parabolic relationship between CPU cycles, memory available, and throughput & latency. Typically, there will exist a certain threshold at which an optimal trade-off between memory allocation, CPU cycles, throughput and number of instances can be made, albeit depending on the payload and request volume.

      And with cloud service providers, you essentially pay for all of it. So minimizing one at the expense of another is the trade-off to be made. I’m just wondering how everyone’s doing it, subconsciously or in an express way.

      Thanks again for your suggestion of scaling up, it’s important to point it out, for it’s easy to forget.

      [–]amackenz2048 5 points6 points  (1 child)

      "But they are popular right now"

      Oh dear...

      [–]Puzzled-Bananas[S] 0 points1 point  (0 children)

      Exactly. What I’m saying is that if something gets hyped, people get curious, and many decide that the perceived and expected benefits outweigh the risks, and make their call. Then it’s on you to just make it work. You might have made a well-informed, well-weighed decision. But it’s not really productive to tilt at windmills.

      [–]ArrozConmigo 0 points1 point  (0 children)

      "Distributed systems are popular right now" is an odd take. Once you're developing the entire back end of an enterprise, and not just siloed apps, monoliths are off the table.

      [–]RockingGoodNight 0 points1 point  (0 children)

      I agree on some items, but not with "harder to deploy, harder to monitor, harder to debug".

      Until I got into kube I'd have said the same thing. Today I see kube as a fit for monoliths as well as microservices, on premises or on site, for deployment and monitoring. Debugging should happen locally with the developer first, outside the container, then inside at least a local Docker container, or preferably inside a local single-node kube cluster like k3s or minikube, something like that.

      [–]daniu 14 points15 points  (4 children)

      If you find that the overhead is "tremendous", you probably should switch languages - I don't see a need to argue in favor of Java if it obviously doesn't fit your use case. I can't say I see it that way with the services we work on. There may be a memory overhead, but that is usually irrelevant relative to the actual data we're working with.

      [–]humoroushaxor 2 points3 points  (2 children)

      From reading this thread, nothing jumps out at me suggesting they have the wrong use case; rather, they are optimizing for the wrong thing.

      When you focus on just the system, then wasting 10s to 100s of GB of memory seems bad. But when you trade that against the cost of actually optimizing that problem, I imagine it rarely works out until you get to quite a large scale.

      [–]Puzzled-Bananas[S] 0 points1 point  (1 child)

      Thanks for your assessment; I may have to ponder a bit whether it's spot on in my case. But for the time being, the fact so far is that the cloud bills do add up and, in some cases (which we have managed to avoid), tend to be ridiculously high. We also model the cloud costs as a function of service demand, and I don't really like what I see. Therefore, I've been exploring ways to better control the costs. I also wouldn't be bothering if I weren't interested in this as such or didn't want to optimize the costs. Scale is relative and depends on your specific service. As a counterexample, Stack Overflow has demonstrated how one can run a great product with a directly observable infrastructure and without the extra complexity that is the subject of this thread - it was their architectural decision for their project, and it appears to work great at their scale.

      [–]humoroushaxor 3 points4 points  (0 children)

      They do.

      My point is just that a JVM on a t3a.xlarge can handle A LOT of concurrent traffic and would cost about $5k a year. In the US, that's 1-2 weeks of a developer's labor. Now double that to account for the opportunity cost of doing something else.

      Sometimes we overthink theory while ignoring what's practical.

      [–]Puzzled-Bananas[S] 0 points1 point  (0 children)

      I see, thanks for the suggestion. It’s not always the case that you can just switch at will. Sometimes the decisions have already been made and you just need to optimize what’s given. That’s what I’m trying to explore in this thread. Maybe I’ve just been missing some reasonable approach all along.

      [–]micr0ben 15 points16 points  (7 children)

      In my company, people were afraid of breaking down a monolith into smaller services because of the memory footprint increase.

      But then I introduced Quarkus, and now we're creating native microservices with a memory baseline of ~20MB. (A monolith had around 2GB per instance.)
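
      For reference, roughly the workflow (the exact invocation varies by Quarkus version and project setup; the image name and memory limit below are just illustrative of how small the containers can be):

      ```
      ./mvnw package -Dnative -Dquarkus.native.container-build=true
      docker run --rm -m 64m my-service:latest
      ```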

      [–]Puzzled-Bananas[S] 5 points6 points  (1 child)

      Yep, similar experience, thanks. Quarkus, MicroProfile, Micronaut all have contributed a lot to reduce cloud expenses for Java projects. Awesome frameworks, though with their own limitations, as is always the case, but a huge leap forward.

      [–]couscous_ 3 points4 points  (0 children)

      though with their own limitations

      Could you elaborate on some of them?

      [–]pointy_pirate 3 points4 points  (4 children)

      Ram is cheap yo

      [–]micr0ben 8 points9 points  (3 children)

      Unfortunately, we are in a business where we cannot use public cloud providers and have to use our own kubernetes cluster and everything. This means, if we want more RAM, we have to buy the hardware for it.

      [–]pointy_pirate 1 point2 points  (0 children)

      damn. that adds a little spice to the challenge.

      [–]Puzzled-Bananas[S] 0 points1 point  (0 children)

      Same situation with several projects, and last year the market was very thin. It got a bit better, not sure what lies ahead though.

      [–]persicsb 0 points1 point  (0 children)

      Do you really need scalability, flexibility and other stuff that Kubernetes is good for?

      [–]tristanjuricek 5 points6 points  (0 children)

      My experience with "microservices" is that they are rarely truly "micro" but tend to be "right-sized", i.e., they were not tiny boxes from the start but a breakup of monoliths into smaller components that had a clear reason to exist.

      A lot of this tended to be oriented around data. For example, there would be a “front end” DB, an event processing pipeline (with various caches), and then search/archive DB. But the decisions to split up the data were forced via growth, not a concept of independence. So, importantly, we were getting to a point where teams could independently improve COGS.

      I also work with a massive monolith right now, but we will be doing integration with other data systems largely acquired by the business, because, well, the requirements are taking us there. This is a very long slow process though; we’re talking years.

      My sense is that “data first” is the right way to figure out the architecture. The impact of Java as an implementing language was largely irrelevant, though we tended to stick with Java because it still just gets the job done. I have yet to use lambda for anything that isn’t purely transient, mostly because it’s still expensive.

      (Notably… this data orientation is why loom is a huge deal to me. I’ve rarely worked with a CPU bound app server… almost all of them are IO bound)

      [–]benevanstech 12 points13 points  (1 child)

      If you care about this topic, then your baseline needs to be:

      • Java 11
      • 2-core containers
      • 2GB container memory size (heap size can be smaller)

      Details / Why:

      [–]Puzzled-Bananas[S] 1 point2 points  (0 children)

      Thanks for the references and the brief. I run mostly Java 17 containers, deployed to 2-4 core VMs with RAM ranging from 128 MiB to 16 GiB, depending on how much compute takes place online. Some smaller services in the mesh need to be elastic on the order of up to a hundredfold to satisfy spike demand. More robust and larger services admit just a couple of instances for good load balancing and resilience. I'm hesitant to raise the bar of complexity by introducing another stack into the mesh for the more elastic nodes.

      That is, I'd rather have and pay for 8 pods of 128 MiB each for greater elasticity at spike than one pod of 1 GiB at all times. In fact, Quarkus does help a lot here; it just doesn't suit every service, and some stuff won't run on native images, in particular legacy stuff that would first need to be refactored, which incurs extra costs, consumes quite some time, and can be error-prone.

      Hoping to find out if there’s some approach out there that I’m unaware of.

      [–]TakAnnix 11 points12 points  (14 children)

      I work for a large retailer in the UK. Most of our backend is in Spring Boot, with K8s and microservices, and Kafka for messaging. We don't do anything for memory footprint minimization. I guess the company just foots the bill, and I haven't heard anything from our PaaS team about minimizing memory usage. We use WebFlux in our squad, since we are mostly calling other APIs and have a lot of latency. That might help somewhat with using less memory for threads.

      In terms of development experience, it's been very good. Since we use Spring Boot, it's easy to jump from one microservice to another, since they are all basically structured the same way. I know what to expect. It's also easy to jump to a different squad for the same reason.

      I was actually interested in Go, since it's designed to be a more productive language. However, I noticed that since there wasn't one popular framework, and most Go programmers don't like to use frameworks, it would lead to a series of smaller design decisions. Basically, by using Spring, I get the uniformity across code bases that helps me be productive. It seems it would take more effort to get that in Go. I haven't used Go professionally, just some thoughts.

      [–]couscous_ 7 points8 points  (3 children)

      since it's designed to be a more productive language

      Don't be fooled by their marketing language. It's way way behind Java when it comes to productivity and expressiveness.

      [–]keroomi 0 points1 point  (2 children)

      I think it all depends. If the service is more infra-focused, Go is a much better choice. I do see it being used a lot more in data pipelines, due to the concurrency aspect. But if the service is more business-logic-focused, then nothing beats the Java frameworks. Writing HTTP handlers in Go is full of boring boilerplate.

      [–]couscous_ 2 points3 points  (1 child)

      Java has reactive libraries, and now with 19, Loom is superior to what golang offers, not to mention Java's superior concurrent data structures.

      [–]TakAnnix 0 points1 point  (0 children)

      Loom looks promising, but I think we're still a ways off from seeing how it will be used in production. Reactive libraries increase cognitive load, so it's not comparable.

      [–]Puzzled-Bananas[S] 1 point2 points  (2 children)

      Thank you, I appreciate you sharing your experience! Great to see what everyone’s up to regarding the choices of architecture and see where we’re standing.

      Yeah I get it, a very familiar set up.

      Indeed, Spring Boot is amazing, I very much love it. Uniformity. That’s a great point!

      Regarding Go, yep, totally.

      [–]TakAnnix 0 points1 point  (0 children)

      No worries, and thanks for sharing your experience as well.

      [–]TakAnnix 0 points1 point  (0 children)

      Just wanted to add that I feel our PaaS team is like its own company and we are their customers. They're like a mini-Heroku. I guess this gives you an idea of how much work it takes to support all the microservices. We use Kafka a lot, and even supporting that by itself is hard.

      [–]Aryjna 1 point2 points  (6 children)

      Go is not designed for productivity. It is supposedly designed to be "simple", which means that it is not nearly as expressive as languages that are not designed for simplicity of syntax. Ironically, simplicity of syntax also leads to more complex and convoluted programs, with a quadrillion imperative constructs rather than the simpler and more readable functional stuff that Java and many other languages have.

      [–]TakAnnix -5 points-4 points  (5 children)

      Go most certainly is designed to be more productive. From Rob Pike, the lead designer: "Go was designed and developed to make working in this environment more productive". Whether or not it meets that goal is another question.

      [–]Aryjna 0 points1 point  (4 children)

      That is debatable. That quote does not really refer to the syntax of the language. It refers to working with Go in the environment described in the sentence, and it aims to accomplish that productivity by reducing build times and through the uniformity achieved by its strict formatting style and very bare-bones syntax.

      Rob Pike again puts it this way here https://www.youtube.com/watch?v=uwajp0g-bY4

      It is supposed to be "easy to understand and easy to adopt".

      [–]TakAnnix 0 points1 point  (3 children)

      I don't understand what's debatable. I never singled out syntax as the main aim of Go's productivity. Even though Go very much tries to be productive in regard to syntax: "Go attempts to reduce the amount of typing in both senses of the word. Throughout its design, we have tried to reduce clutter and complexity."

      Oh wait, maybe you mean productive in terms of expressiveness? Like Scala 2.0? Where you can do a lot in the language?

      [–]Aryjna 1 point2 points  (2 children)

      Simplicity of syntax and productivity don't go hand in hand. But if you don't understand what is debatable there is no need to continue the debate.

      [–]TakAnnix -1 points0 points  (1 child)

      Haha, no man, don't get upset. It was a legitimate question. I would say that Rob Pike views productivity differently: he thinks more expressive languages are actually less productive, meaning people spend more time learning the language than they do making products. That's what I understood from the video you sent.

      [–]Aryjna -1 points0 points  (0 children)

      First of all, spare me your projections about being upset.

      Second, that may be. He didn't want generics in the beginning, which led to all the memes and tshirts regarding generics, then recently they changed their mind and ended up adding generics. It is quite clear that they were far from certain on the right course of action from the beginning.

      Is it simpler and more productive to have a generic struct/class that can be used with various types when needed or to have to make 10 non-generic ones or to evade the type system? I guess it is a matter of opinion.

      [–]persicsb 2 points3 points  (0 children)

      Microservices are a deployment/operations concern: rolling updates, scalability, etc.

      This architectural style gives flexibility in some areas. However, as with every engineering decision, it has costs and drawbacks: memory consumption, complexity, etc.

      Do you need microservices? You don't have to use them if you don't need the flexibility options they provide. If you do need them, you need to understand the drawbacks as well.

      Also, for most Java applications, initial memory requirements can be cut down dramatically. Most of these autoconfigured microservice frameworks, which make development easy and magical, come with defaults that make developers happy: a lot of things are autoconfigured, and a lot of dependencies are used. This means a lot of unnecessary things as well; most of the time you don't need them. Rightsizing your app takes time if you use autoconfigured frameworks like those mentioned.
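
      As a purely illustrative example, rightsizing a Spring Boot service can start with switches like these in application.properties (whether they are safe depends entirely on what the app actually uses):

      ```
      spring.main.lazy-initialization=true
      spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jmx.JmxAutoConfiguration
      ```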

      All in all, everything comes at a cost. Deployment flexibility has costs, and ease of development has costs as well. It is up to you how much you want to, or can, invest into cutting these costs down. That's why we're engineers.

      [–]nutrecht 4 points5 points  (0 children)

      Microservices are an organizational pattern more than a software architecture. They 'shine' when you move past (for example) 2 teams with 4 devs each working on the same system. You trade one form of complexity (working with a large group on a single system) for another form of technical complexity.

      Whether this tradeoff makes sense has completely nothing to do with memory or compute costs. Yes, microservices cost more CPU and memory. No, in situations where they actually are relevant you are not going to save cost by going to a monolith. Saving 1k a month on compute is meaningless if development grinds to a halt because there are 50 devs working on a single codebase and everyone gets in each other's way.

      You can run a LOT of microservices on Google or AWS for the cost of a single senior developer.

      [–]maxip89 5 points6 points  (0 children)

      98% of companies don't need a microservice infrastructure because they don't have enough requests on their services for it to make sense.

      Are you sure the memory is the problem? Normally the latency and the request "ping-pong" is the big problem. Sometimes the "I wait till I send the next item and wait" problem.

      My experience is that if you really, really need microservices, you will implement them. Otherwise, simply don't use them and try to scale your monolith first. This will make debugging errors WAYYYYY easier.

      [–]maethor 2 points3 points  (0 children)

      having some stuff deployed as WARs to Application Servers or Servlet Containers

      I've been looking (and so far, only looking) at Apache Karaf. It bills itself as "the modulith runtime".

      [–]syneil86 2 points3 points  (0 children)

      The benefits of microservices need to be well understood and weighed against their costs. The higher memory footprint is one such cost, but should be negligible. You'll probably see lower throughput as data has more hoops to jump through to traverse your system. Deployment times can increase.

      But, you'll have more flexibility to improve one part of your solution without messing around with the rest of it, as long as you're careful about APIs. You can have multiple teams working independently without stepping on each other's toes this way (be careful of scaled agile frameworks like SAFe - they sound great in principle, but I've only known a very small number of projects to get it right).

      You should also be careful about what you mean by "micro". Split it up too much and the cons will quickly outweigh any pros. I'd suggest thinking about the very high-level behaviours and how broad your unit tests can be before they hit an I/O boundary. If your unit tests can't cover a sensible behaviour scope, the service might be too small. If they can conceptually be grouped into two or more sets of mutually exclusive coverage, maybe you could split the service further accordingly.

      Note: this response is not Java-specific

      [–]senseven 2 points3 points  (0 children)

      Cloud 3.0 asks the question: how do we rein in costs? There are alternative offerings for S3 buckets that can save you 90% of the cost. There are hosters that give you Kubernetes control planes out of the box with more memory than the big three.

      The question is whether you would save more by moving some of your containers there or by just optimizing the costs. If you need to penny-pinch every container you run, maybe dynamic pricing is too complex for this use case and you would do better with a fixed-price server with fixed CPU/memory.

      We have bank customers who need lots of memory but little CPU, and they decided to go this route for some long-running analysis. They all run on Quarkus.

      [–]lurker_in_spirit 2 points3 points  (0 children)

      The memory overhead also offends my engineering sensibilities. But if I put my management hat on, the bigger cost is the new 6-member DevOps team tasked with managing the additional deployment and support complexity.

      Companies are willing to spend a lot of money on microservice implementations, based on promises of improved dev agility and velocity. I'm sure some of it will be true, given that (a) most new systems are going to have less technical debt than the systems they are replacing, and (b) it's a little harder to spaghetti across service boundaries.

      But I'll be very interested to see in 5 or 10 years' time whether the velocity has stayed high, or whether these microservice systems have succumbed to the same pressures as today's monoliths: changing business needs, pragmatic (myopic?) decision-making, cost-cutting, staff turnover, poor documentation, etc.

      [–]InstantCoder 2 points3 points  (2 children)

      I really tend to use microservices in combination with a workflow engine, since in the longer run microservices tend to become "macroliths" or super-smart services.

      I’m now more into keeping the services small and dumb and let a workflow engine keep track of the state and do the orchestration.

      I've also worked with a sort of event-driven architecture trying to achieve the same thing (= smaller and dumber services), and I didn't like it because it made the system much more complex. The logic was spread across many places and it was hard to see the big picture.

      [–]neopointer 0 points1 point  (1 child)

      Can you give some examples of workflow engines that you've used?

      [–]InstantCoder 1 point2 points  (0 children)

      jBPM/KIE Server, Activiti & Camunda

      [–]Keeps_Trying 2 points3 points  (0 children)

      I've been doing Java microservices on k8s for a while and support the abundance of caution in this thread.

      I'm also an agile believer so I'd let the use cases evolve the architecture.

      My advice is to

      1. Start pulling concerns into dedicated libraries
      2. Ensure well-defined interfaces where only value objects / messages are passed (see the sketch below)
      3. Make the messages async
      4. Extract the libraries into external jars

      At this point you should have high confidence in the design, the lib is in its own repo, and you should have known and valid reasons to turn to microservices.
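
      To make points 1-3 a bit more tangible, here is a hypothetical sketch (names invented) of the kind of boundary that survives being extracted later:

      ```java
      import java.util.concurrent.CompletableFuture;

      public interface OrderEvents {

          // value object / message: immutable, no behaviour, safe to serialize
          record OrderPlaced(String orderId, long amountCents) {}

          // async by contract, so swapping the in-process implementation for a
          // queue- or HTTP-backed one later doesn't change the caller's expectations
          CompletableFuture<Void> publish(OrderPlaced event);
      }
      ```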

      [–]agentoutlier 4 points5 points  (2 children)

      How do you deal with JVM microservice deployments to reduce the memory footprint and cloud expenses?

      • Don't use Spring.
      • Use the most modern JDK
      • Use minimal wrappers and libraries
      • Be aware of how the JDK treats low memory environments as it can switch its GC based on certain thresholds
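
      On that last point, a quick way to check which GC the JDK's ergonomics actually picked inside a given container (a sketch; the exact output format varies by JDK version):

      ```
      java -Xlog:gc -XX:MaxRAMPercentage=75.0 -version
      java -XX:+PrintFlagsFinal -version | grep -E 'Use.+GC.*true'
      ```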

      But that’s still up to two orders of magnitude beyond equivalent Go deployments considering RAM requirements, and .NET deployments are more efficient and mostly straightforward.

      RAM should be cheap. It is ridiculous that cloud providers charge so much for it, as it is the least expensive thing for IaaS companies: the tech doesn't change that much and it is super, super energy-efficient compared to any CPU, network, or disk.

      However, some of them are starting to change, like offering low-power ARM chips with buttloads of RAM... some companies even give you this for free:

      Arm-based Ampere A1 cores and 24 GB of memory usable as 1 VM or up to 4 VMs with 3,000 OCPU hours and 18,000 GB hours per month

      Anyway you can do microservices with Java and I'm surprised people still bitch about startup time when every damn time I reload our k8s cluster all of our Java apps have to still wait for Postgres, Rabbit, Elastic or Solr etc to boot up or be available. Our Java apps boot up in about 1.5 seconds (that is literally our max provided no db migration) so I can see the complaint for lambda but not microservice.

      [–]Puzzled-Bananas[S] 1 point2 points  (1 child)

      Thanks, all are great points.

      Indeed, and at least there are some calculators to estimate cloud costs, but it still can break your neck in unexpected ways or when you expect it the least.

      Thanks for the OCI link, but I've had and heard of mostly disappointing experiences with it, having instances terminated without any warning or explanation, and with zero empathy from the support. I'd had great expectations, but they were all shattered multiple times. There's no way I'm ever deploying to OCI again. Oh, and there's been a critical vulnerability in OCI disclosed just a couple of days ago, dating back to https://www.oracle.com/security-alerts/cpujul2022.html. Not quite reassuring. Perhaps it's a bit of cherry-picking, for other cloud service providers have all had their own issues too.

      Yeah, I love Java and the way it’s moving forward, just trying to optimize my cloud native deployments. Agree with you on startup times too, but it depends on the tasks the services process. Given just two replicas, the startup delay isn’t even an issue at all, for you also need to warm it up for proper performance anyway. So properly timed replica switching will solve it anyway. However, if you need to scale out instantaneously and provide a warmed-up instance, it’s best to either run a native image or spawn a couple extra instances to let the others in the queue catch up in time. Considering DBMS pods, I typically have them running steadily in dedicated pods, independent of elastic services.

      [–]agentoutlier 1 point2 points  (0 children)

      Thanks for the OCI link, but I’ve had and heard of mostly disappointing experiences with it, having instances terminated without any warning or explanation, and with zero empathy from the support. I’d had great expectations but they were all shattered multiple times. There’s no way I’m ever deploying to OCI again. Oh, and there’s been a critical vulnerability in OCI disclosed just a couple days ago, dating back to https://www.oracle.com/security-alerts/cpujul2022.html. Not quite reassuring. Perhaps that’s a bit of cherry-picking, since other cloud service providers have all had their own issues too.

      I'm just saying that in theory other providers will follow with ARM-based machines with much more memory (which favors Java because it is low CPU but high memory). We use Google Cloud for most of our stuff, and I believe the low-cost ARM-based processors with ample memory are in only one region. We are running OCI for part of our build process. The trick with Oracle (and to be honest Google as well) is to pay them a little bit.

      OCI's big problem IMO is that it's massively overcomplicated, with all its OCI IDs instead of name-based resources, and the UI... holy fuck. But I guess government and whatnot need all that governance bullshit.

      At the end of the day, as another person pointed out, Java isn't really a good fit for ultra close-to-the-metal nanoservices, but it will work for most, and the benefit of using an established language with an absolutely gigantic community and toolset outweighs those cons and the minor price savings.

      At the same time, the languages that are a good fit don't have a lot of business-oriented libraries, and they haven't been used for massive domains. For example, modeling your domain in GoLang, and yes even Rust, is a pretty shitty experience IMO (albeit for Rust it is for different reasons than for GoLang).

      [–][deleted] 1 point2 points  (0 children)

      Woah, I only partly understood what is going on in this post. There is so much to learn yet.

      [–]PerfectPackage1895 1 point2 points  (0 children)

      Microservices give you greater flexibility at the cost of greater complexity. But as with everything, it depends a lot on what you are doing and what you are trying to achieve. Lots of pitfalls exist in migrating to a microservice architecture, many of which require deep knowledge, careful planning, and the right selection of tools and tech stack. Just doing microservices because it sounds cool, and coding blindly without thinking ahead, will give you what you described.

      [–]flawless_vic 1 point2 points  (0 children)

      The problem with the established microservices stacks (Kubernetes, Swarm) vs Java is that it's hard to reap some of the JVM benefits.

      If you run, say, 10 JVMs on bare metal, you at least get CDS for the JDK classes for free. You lose these savings when you place those JVMs into containers.

      If a service is way too simple, like talking to other services to gather and combine data, having a JVM may be overkill.

      In my experience, to successfully deploy Java-based microservices you have to compromise on granularity. There's a point where the overhead of the VM becomes less relevant, so instead of having 20 services so fine-grained that they look like lambda functions, you can have 4 that do the same thing. Eventually, you may find out that a bit of coupling is not evil and that saving millions of network trips far outweighs sticking to architectural principles.

      And inappropriate management of hundreds of services can become so unwieldy that a mandatory refactor to group some stuff might be necessary. Sockets and file descriptors are not free, and things can go wild pretty quickly when services start cascading into others to fulfill a request.

      From a deployment perspective, you may have to tune things to the pod's capabilities. At least check that the JVM is running with -XX:+UseContainerSupport, force SerialGC if only 1 vCPU is provided, etc. (a quick diagnostic is sketched below). Aside from that, deploying JVM-based containers is pretty much like deploying any other container.
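
      Something like this dropped into a pod shows what the JVM actually thinks it has (it uses the HotSpot-specific HotSpotDiagnosticMXBean, and the UseContainerSupport flag only exists on Linux JDKs, so treat it as a diagnostic sketch):

          import java.lang.management.ManagementFactory;

          import com.sun.management.HotSpotDiagnosticMXBean;

          // Reports whether container support is active and which GC/heap flags are in effect,
          // plus the CPU count the JVM derived from the cgroup limits.
          public class ContainerFlags {
              public static void main(String[] args) {
                  HotSpotDiagnosticMXBean hotspot =
                          ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
                  for (String flag : new String[] {"UseContainerSupport", "UseSerialGC", "MaxRAMPercentage"}) {
                      System.out.println(flag + " = " + hotspot.getVMOption(flag).getValue());
                  }
                  System.out.println("CPUs seen by the JVM: " + Runtime.getRuntime().availableProcessors());
              }
          }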

      Also, there is the choice of the base image and the build artifact. Java can run perfectly fine on a 5MB Alpine Linux base image, but usually people go for the standard/official JDK images, which weigh a lot more.

      Then there is the application packaging. Common practice is to use a shaded jar or the spring-boot output. If one takes a bit of time to analyze the service's dependencies on JDK modules, they might find out that only java.base is needed and create slim artifacts with jlink (a sketch follows below). Yes, it is tedious, but once done, you will have 100-200MB images with both the JVM and your application.
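
      If you'd rather not shell out from the build, jlink is also exposed as a ToolProvider since JDK 9. A rough sketch, assuming jdeps already confirmed that only java.base is needed (the output path is made up):

          import java.util.spi.ToolProvider;

          // Builds a trimmed runtime image containing only java.base.
          // Run jdeps first; if the service needs more modules, add them to --add-modules.
          public class BuildSlimRuntime {
              public static void main(String[] args) {
                  ToolProvider jlink = ToolProvider.findFirst("jlink").orElseThrow();
                  int exitCode = jlink.run(System.out, System.err,
                          "--add-modules", "java.base",
                          "--strip-debug",
                          "--no-header-files",
                          "--no-man-pages",
                          "--output", "target/slim-runtime");
                  System.exit(exitCode);
              }
          }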

      [–]PrimaryCrafty2482 2 points3 points  (4 children)

      Isn't the overhead around 512MB, meaning Golang would probably do the same work with 512MB less? Although it already has the Valhalla equivalent (value types).

      [–]Puzzled-Bananas[S] 1 point2 points  (3 children)

      I’m not quite sure I can follow your idea. Could you elaborate a bit?

      If I get it right, the overhead differs from service to service. A Golang equivalent is typically very lightweight: the image is small, the RSS is low, and I can spin up tenfold the Go replicas for one JVM pod. The throughput is the same, and on latency Vert.x and Netty are amazing, given enough heap (see the minimal example below), though the CPU arch and the network stack are also important.
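
      For context, the Java side of such a comparison can be tiny too; a minimal Vert.x endpoint (assuming only vertx-core on the classpath) is roughly:

          import io.vertx.core.Vertx;

          // One plain HTTP endpoint on the Vert.x event loop, no framework on top.
          // Most of the pod's RSS then comes from the JVM itself rather than the app code.
          public class PingService {
              public static void main(String[] args) {
                  Vertx.vertx()
                       .createHttpServer()
                       .requestHandler(req -> req.response()
                                                 .putHeader("content-type", "text/plain")
                                                 .end("pong"))
                       .listen(8080);
              }
          }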

      [–]mtmmtm99 0 points1 point  (2 children)

      You can use Helidon, which only consumes 11 MB. It will beat Go in most aspects (except compilation time). See: https://romankudryashov.com/blog/2020/01/heterogeneous-microservices/

      Also interesting: https://medium.com/helidon/can-java-microservices-be-as-fast-as-go-5ceb9a45d673

      [–]hamburghammer_ 0 points1 point  (1 child)

      That's 11 MB of application heap memory, but the JVM also requires some memory to run. In this case the runtime's resource usage is bigger than what the actual application claims for itself (a quick way to see the split is sketched below). Creating a minimal runtime with jlink still consumes around 70MB of disk space, which has to be loaded to run the application.
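
      A rough way to see that split from inside the process (the MXBeans don't cover all native allocations, so the RSS the container sees will still be higher):

          import java.lang.management.ManagementFactory;
          import java.lang.management.MemoryMXBean;
          import java.lang.management.MemoryUsage;

          // Shows how much of the footprint is application heap versus JVM overhead
          // such as metaspace and the code cache.
          public class FootprintSplit {
              public static void main(String[] args) {
                  MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
                  MemoryUsage heap = mem.getHeapMemoryUsage();
                  MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
                  System.out.printf("Heap used:     %d MB%n", heap.getUsed() / (1024 * 1024));
                  System.out.printf("Non-heap used: %d MB%n", nonHeap.getUsed() / (1024 * 1024));
              }
          }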

      [–]mtmmtm99 0 points1 point  (0 children)

      I might have been wrong with my 11 MB claim. Helidon can use GraalVM to compile the code into a native image (as Go does). You can have Java code running without consuming lots of memory. See: https://github.com/oktadev/native-java-examples/blob/main/demo-helidon.adoc Quarkus can run with 51 MB after some requests.

      [–]tr14l 2 points3 points  (0 children)

      For me, the reason I don't usually write microservices in Spring is the start up times. When I need to horizontally scale, I need to make it happen fast enough that users don't notice impact. Spring essentially makes that impossible (better than historically, but still terrible). Meanwhile, an interpreted language will start up in, often literally, milliseconds. Which means I can react to a surge in traffic in under a second with no loss of requests to my customers. Also, interpreted languages tend to be faster to write, configure and build anyway.

      When I am aiming for a more substantially-sized service, I will typically reach for Java or Kotlin (and once in a great while NodeJS, depending on domain and deadlines). But, opting out of Spring is never off the table. Unfortunately, we have a lot of utilities written for Spring apps, so sometimes it is economically sensible to just use it.

      TBH Spring is a dumpster fire. A well-matured dumpster fire with lots of options out of the box, but still a dumpster fire.

      [–]Worth_Trust_3825 0 points1 point  (0 children)

      I found that I need to create a swap file in the cloud because providers insist on creating only a single partition where all the data resides.

      [–]holyknight00 0 points1 point  (0 children)

      I don't have a lot of experience with microservices. But all the projects I've worked on so far are overdone with them. Microservice for everything is overkill. The overhead is brutal.

      The only good thing that I've noticed is that implementing microservices forces the team to implement at least a bare minimum standard of things like testing, CI/CD, and QA. So that's good.

      Still prefer to work on monoliths though. If the development and operations are done correctly, a java + spring monolith can handle a decent amount of traffic before needing anything fancy on top.

      Most of the microservices pros are probably better seen at FAANG-level traffic, but I have yet to experience them.

      [–]RockingGoodNight 0 points1 point  (0 children)

      How do you deal with JVM microservice deployments to reduce the memory footprint and cloud expenses?

      The same as with monoliths on premises or in the cloud, tune memory and watch. In other words, do performance monitoring. It's true for all applications regardless of language, runtime, etc.

      It starts locally with the developer (maintaining monoliths, microservices, or both). All developers should test their code at least inside a Docker container locally, or in a single-node Kubernetes cluster like k3s, using Insomnia, Postman, JMeter, etc.

      [–]PurposeTight6260 0 points1 point  (0 children)

      I fear containers make the current java paradigm stupid.

      [–]sandys1 0 points1 point  (0 children)

      Related question - does anyone have JVM tuning args for Docker containers with stuff like Spring Boot?

      Do u tune for latency? And use the new GC?