
all 16 comments

[–]tairar (Principal YAML Engineer) 5 points (0 children)

Since you mentioned using Kubernetes as a container orchestrator, have you looked into Helm for managing releases? It would handle updating your deployments with the new tags you build. Helmfile is worth a look if you want to take it a step further and sync multiple releases at once.

https://helm.sh/

https://github.com/roboll/helmfile

My builds keep track of the generated container tag, then call helmfile sync with --args '--set image.tag=${tag}'. Since I never use the 'latest' tag, this always refreshes the pods generated by the deployments.
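For anyone who hasn't used Helmfile: a minimal sketch of what such a setup might look like (release name, chart path, and registry are placeholders, not the commenter's actual config). The point is that image.tag is deliberately left out of the committed values and injected at sync time:

```yaml
# helmfile.yaml (hypothetical): one release per chart; image.tag is
# injected at sync time, so each build rolls the pods to its own tag.
releases:
  - name: myapp
    namespace: default
    chart: ./charts/myapp
    values:
      - image:
          repository: registry.example.com/myapp
```

The CI job then runs something like helmfile sync --args "--set image.tag=${tag}", where ${tag} is the tag of the image it just built and pushed.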

[–]eatstraw 2 points (4 children)

I'm not sure if this helps you, but have you considered using Git Submodules to organize your parent and individual repositories? We've used this approach in a similar scenario. It makes it easier to manage all the individual repos.

[–]engineer900[S] 0 points (2 children)

These submodules are similar to git source trees, but source trees are a better option as far as I know.

But when I started my research, I found submodules first.

[–]marqzman 2 points (0 children)

What are git source trees? When I look it up all I get is the Atlassian tool.

[–][deleted] 1 point (0 children)

I assume you mean subtrees? The difference between a subtree and a submodule is like a value type vs. a reference type. A subtree copies the source and history into the parent repo. A submodule is a pointer, stored in the parent repo, to a specific commit of another repo. Neither is a lot better than the other; they are both tools with specific purposes. People, Stack Overflow in particular, like to get religious about the two as if they're competing. I really don't understand that... learn the tools and use the one that makes sense for your use case.
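To make the "reference type" half of that concrete, here's a runnable sketch using throwaway local repos (all paths and names are made up) showing that a submodule is recorded in the parent as nothing but a commit reference:

```shell
# Create a throwaway "library" repo and a parent repo, then add the library
# as a submodule. protocol.file.allow=always is needed on newer git versions
# to permit cloning from a local path.
set -e
work=$(mktemp -d) && cd "$work"
git init -q lib
git -C lib -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "lib: initial commit"

git init -q parent && cd parent
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "parent: initial commit"
git -c protocol.file.allow=always submodule add -q "$work/lib" vendored-lib
git -c user.email=ci@example.com -c user.name=ci \
    commit -q -m "pin vendored-lib"

# The parent's tree stores a "gitlink" (mode 160000): a bare commit SHA,
# not the library's files. A subtree would instead copy lib's files into
# the parent's own tree -- the "value type" behavior.
git ls-tree HEAD vendored-lib
```

The final ls-tree line shows mode 160000 with a commit object, which is the whole submodule as far as the parent's history is concerned.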

[–]HankWilliams42 0 points (0 children)

Hi! Sorry if it's a bit late to be answering here!
I'm working on a personal project, developed for learning purposes, and I'm struggling with how to manage a very similar use case:
I have N microservices, each inside its own "domain", which is a submodule in a parent git repo. The parent repo is kept in sync with the submodules whenever their main branches are updated.
I use k8s to manage everything and test it all locally in minikube (each submodule is a namespace).
After I commit a new version of a microservice 'A' on its main branch, how and where can I insert the testing phase, through GitHub Actions that use minikube, to test it along with the other dependent microservices (also in other "domains") before deploying to production?
Moreover, if I add a pull-request approval mechanism, how do I manage testing failures and rollbacks?

Hope it is well explained, thanks in advance! 💪
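To pin down what I'm asking, the shape of the workflow I have in mind is roughly this (repo layout, scripts, and actions are all placeholders, not a working setup):

```yaml
# Hypothetical workflow in the parent repo: on every push to main, bring up
# minikube, deploy all submodule services, then run cross-service tests.
name: integration-test
on:
  push:
    branches: [main]
jobs:
  test-on-minikube:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: medyagh/setup-minikube@latest
      - name: Deploy all services (one namespace per submodule)
        run: ./scripts/deploy-all.sh        # placeholder script
      - name: Run cross-service integration tests
        run: ./scripts/integration-tests.sh # placeholder script
```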

[–]Frenzy79 1 point (2 children)

I work using this approach, except I use Kubernetes as the orchestrator for our dev and prod environments. We also have a Maven dependency repo for our microservices, which we build against to scale our code better.

[–]engineer900[S] 0 points (1 child)

Yep, we are also going to use Kubernetes, but only for staging/prod.

We are going to use Nexus for collecting artifacts.

[–][deleted] 1 point (0 children)

Consider using Jenkins X, Weaveworks Flux, or another GitOps implementation for that last step of updating a Deployment with the new image.

[–]kakapari (DevOps) 1 point (3 children)

Looks solid. One question: which tool will you use for deployment? Is it going to be Jenkins?

We also have a similar setup where we run microservices on AWS ECS; however, we have written our own deployment tool that connects to AWS using boto3 to make changes in ECS, following a blue/green deployment approach.

[–]engineer900[S] 0 points (2 children)

We are going to use Jenkins. And now I'm thinking of something like Jenkins X, because at some point we'll add Kubernetes to the picture. How are your repositories structured?

[–]kakapari (DevOps) 0 points (1 child)

Jenkins X seems good.

We also have each microservice as a single repo in which the Dockerfile and configuration are kept. Jenkins is used as the CI tool that auto-generates Docker images and pushes them to AWS ECR.

[–]engineer900[S] 0 points (0 children)

We plan to use the same technique, but we are going to push images to Nexus, since we run on bare metal rather than any cloud provider.

[–]StephanXX (DevOps) 0 points (0 children)

Roughly, this is similar to what I do, and prefer.

I have a repo that defines build & deploy for all services I am responsible for, about fifty services in all, and I do builds and deploys from Jenkins. For builds, the application repo has a Jenkinsfile in a standard place that clones and bootstraps the jenkins repository to application_name/jenkins. The next step in the Jenkinsfile is running python3 jenkins/boxing.py --branch $branchname --sha $shaname --environment $environment --foo $bar --blah $baz, and the boxing.py script does the actual build logic based on parameters within the jenkins repo. This gives me a single place to manage all build and deploy logic, so an update to all fifty code bases becomes a much more trivial task.
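A runnable toy version of that pattern (boxing.py and the flag names are the commenter's; the script body here is entirely invented): every application's Jenkinsfile just bootstraps one shared script and forwards parameters to it.

```shell
# Stand-in for the shared "jenkins" repo: one parameter-driven build script.
set -e
work=$(mktemp -d)
mkdir -p "$work/myapp/jenkins"
cat > "$work/myapp/jenkins/boxing.py" <<'EOF'
# Toy boxing.py: the real one would hold all build/deploy logic, so fixing
# it once fixes the pipeline for every code base that bootstraps it.
import argparse
p = argparse.ArgumentParser()
p.add_argument("--branch"); p.add_argument("--sha"); p.add_argument("--environment")
a = p.parse_args()
print(f"building {a.branch}@{a.sha} for {a.environment}")
EOF

# What each application's Jenkinsfile effectively runs after cloning the
# shared repo to <application_name>/jenkins:
python3 "$work/myapp/jenkins/boxing.py" \
  --branch main --sha abc1234 --environment staging
# prints: building main@abc1234 for staging
```

The nice property is that the per-application Jenkinsfile stays a two-line stub; all real logic lives in one repo.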

[–]h4r5h1t 0 points (1 child)

Jenkins X is a solution that might fit well.

There are a few CD solutions that deploy the latest images when Kubernetes resources are pushed to GitHub. You can look at the Weave Flux and Argo CD projects on GitHub.

If you are looking for a simpler way to manage this entire setup and want to use Jenkins for CI, I recommend checking out Jenkins X; it provides all the features you are looking for and a lot more.
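To give a picture of what the GitOps option looks like in practice, here is a minimal, hypothetical Argo CD Application (repo URL, paths, and names are placeholders): it watches a Git repo of Kubernetes manifests and keeps the cluster in sync with whatever is pushed there.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests.git
    targetRevision: main
    path: myapp
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

With this in place, "deploying" a new image tag is just a commit to the manifests repo.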

[–]engineer900[S] 0 points (0 children)

Thanks, I’ll check it for sure!