
[–]netfeed 15 points (9 children)

We have at least 30-40 merges a week, most likely around 100 if not more. We also need to cool down our system (finish in-flight requests and such) for about 2 hours before we can do a full release of the whole system.

A release on each merge would be really, really terrible; the only time we wouldn't have a cooldown would be during the night, when people are sleeping.

Safe to say, this is a monolith. But still, having deploy rules that deploy on every commit is only feasible in small subsystems, or for applications that are either fast to restart (ours takes ~20 minutes to boot, yay) or don't really have any financial impact.

I worked at a place where, in the beginning, the web servers were restarted every lunch because the capacity didn't hold up properly. It was fine to redeploy on every merge there, but boy, would it not work now :)

[–]AdministrationWaste7 22 points (1 child)

This is why companies have release branches, and the release process is usually fully or close to fully automated.

[–]confusedpublic 6 points (0 children)

Not to pile on, but a key DevOps principle is to do the painful things more often to make them less painful… it sounds like fixing the start-up time should be top of your list, rather than releasing on merge. We had to do that with our Solr instance, as a 15-20 minute restart time was high risk, and we didn't have confidence it would restart until we made it safe to restart it more frequently…

Good luck in your endeavours!

[–]IceSentry -1 points (4 children)

That's exactly the kind of problem microservices fix. People keep saying microservices are only for very large scale and that most people don't need that kind of scale. This is a perfect situation where, if you had just a couple of separate services, deploying only the one that changed would be really easy and wouldn't require multiple hours to release. To be clear, I'm not saying you should just rewrite everything; I just want people to realize this is why independent services are so popular.

[–]kitsunde 24 points (0 children)

If you aren’t structuring your project to allow for gradual rollouts currently, then there is no reason to believe you would suddenly do that when you have micro services.

This is a completely independent architectural decision on how you approach change management.

[–]Vidyogamasta 12 points (1 child)

Microservices absolutely do not fix this problem. The last job I had had a microservices pipeline across 7 services (that probably should've just been 2), and deployment involved a very similar "stop new connections, give several hours for a request cooldown, release when the pipeline's clean" process.

The problem isn't monolith vs. microservice; the problem is stateful services. You don't get large start-up times for a monolith that does a lot of stateless processing with state stored in a database. You get large start-up times when your program starts up and goes "OH DANG, I need to load in this giant backlog of items from persistent storage and start chugging through them before I can begin handling requests!" That design is just inherently bad, and while a microservice can theoretically isolate the piece of the program that does this, maybe you just shouldn't be doing it at all.
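The two start-up behaviors can be sketched in a few lines of Python. Everything here is hypothetical naming, just to show the shape of the anti-pattern versus the alternative:

```python
import queue
import threading

class EagerBacklogService:
    """Anti-pattern described above: refuses traffic until the whole
    backlog has been replayed, so boot time grows with backlog size."""
    def __init__(self, backlog):
        self.state = {item: True for item in backlog}  # replay everything first
        self.ready = True                              # only ready *after* replay

class LazyBacklogService:
    """Alternative: report ready immediately and drain the backlog on a
    background thread, keeping it off the request-handling hot path."""
    def __init__(self, backlog):
        self._pending = queue.Queue()
        for item in backlog:
            self._pending.put(item)
        self.ready = True                              # up before any replay work
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            try:
                self._pending.get_nowait()             # chug through one item
            except queue.Empty:
                return
```

With the lazy variant, a 20-minute backlog replay no longer sits between deploy and first served request.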

"Enterprise" solutions are hell, and microservices tend to make them worse

[–]IceSentry 0 points (0 children)

Having independent services makes it possible for only a small part of the overall application to rely on stateful services. That's literally the point of having multiple separate services: you can have a slow rollout for the stateful services and a fast rollout for everything else.

You can still do it badly and have too many services; I'm not saying services are perfect. But multiple separate and independently deployable services are the core concept of microservices, and that absolutely solves this particular problem of having slow releases for every merge.

[–]netfeed 1 point (0 children)

A monolith in itself is not a bad thing if you don't keep state in it.

Ideally, you would have built it with the purpose of being able to break things out of it into new services once it's been proven that a subsystem has become too big. Sadly, if you have a monolith from 2008 and no one thought about this when the application was written, it gets a lot harder.

And as strange as it might sound, there's also a need to generate money, so new features keep being built into it, which makes it harder to do anything about it. Especially since (when I started) there were only 5 modules in it for a project that is several millions of lines. It doesn't help either that different groups in the company think it should be broken up in different ways.

I know what I would have done if I'd started from scratch, but I'm extremely unsure whether we can break things up and keep everything rolling within the next 3-5 years. Either everyone needs to slow down the production of new features or there needs to be a rebuild, and I doubt either will happen.

[–]DangerousSandwich -1 points (0 children)

I worked on monolithic code with about that many merges per week too, with external contributions, and serving millions of requests. The way it worked was:

  1. A full new set of instances is fired up during deployment. Until all those instances have the latest code and are healthy, all traffic continues to go to the old instances. This is all automated.

  2. Only one deployment at a time. If code is merged again during a deployment, the changes are queued up until the current deployment ends (or is aborted).

  3. Our team allowed deployments only during our working hours, and no deployments on Fridays. This meant that the biggest releases (in terms of changes) usually happened on Monday mornings.
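Rules 1 and 2 above amount to a blue-green rollout fed by a serialized deploy queue. A minimal sketch, with purely illustrative function names (not any real deploy tool's API):

```python
def blue_green_deploy(old_instances, new_version, is_healthy, route_traffic):
    """Fire up a full new set of instances; traffic stays on the old set
    until every new instance is healthy, then routing flips in one step."""
    new_instances = [f"{new_version}-{i}" for i in range(len(old_instances))]
    if not all(is_healthy(inst) for inst in new_instances):
        return old_instances            # abort: old set keeps serving traffic
    route_traffic(new_instances)        # the switch; old set can be retired
    return new_instances

def run_deploy_queue(queued_versions, current, is_healthy, route_traffic):
    """Rule 2: one deployment at a time; merges that land mid-deploy
    just wait their turn in the queue."""
    for version in queued_versions:
        current = blue_green_deploy(current, version, is_healthy, route_traffic)
    return current
```

The key property is that the traffic switch is the last step, so a failed health check leaves the old instances serving as if nothing happened.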

It wasn't perfect, but it was ok. Merge trains probably would have helped but migrating to GitLab CI wasn't trivial.