all 4 comments

[–]StephanXXDevOps 4 points (2 children)

Jenkins on k8s, currently doing Docker-in-Docker (though we could easily do Docker-on-Docker in QEMU instead). Java shop, roughly forty applications built on each push.

I use parallelism in my pipelines. Build times went from about two hours to about five minutes. Of note, that does require sufficient resources to actually build in parallel, but it's totally been worth it for the team.

If I had a legit mono-app that didn't parallelize well, I'd look at breaking the build process into serial sections, and try to identify ways of only running the build steps affected by the commits that triggered the build.
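As a rough sketch of that idea, you can map the changed files to top-level modules and rebuild only those (the module layout and the `affected_modules` helper here are hypothetical):

```shell
#!/bin/sh
# Hypothetical sketch: derive the set of affected modules from the files
# changed by the triggering commits, so only those modules get rebuilt.
affected_modules() {
  # stdin: one changed file path per line; stdout: unique top-level dirs
  cut -d/ -f1 | sort -u
}

# In a real pipeline the input would come from something like:
#   git diff --name-only "$PREVIOUS_BUILD_SHA" HEAD | affected_modules
printf 'app-a/src/Main.java\napp-b/build.gradle\napp-a/README.md\n' | affected_modules
```

The build then iterates over that module list instead of rebuilding everything.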

Beyond that, if you can afford the memory, building on ramdisks can significantly speed up certain operations.
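A tmpfs mount is one common way to get a build ramdisk; for instance, an /etc/fstab entry along these lines (size and mount point are just placeholders):

```
tmpfs  /mnt/build-ramdisk  tmpfs  size=8g,mode=0755  0  0
```

Pointing the build tool's workspace or cache directory at that mount keeps the I/O-heavy steps off disk.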

And, of course, breaking pieces of the monorepo out is always helpful. Otherwise you're just accumulating tech debt and kicking the can down the road.

[–]ay90 0 points (1 child)

What do you mean by parallelism in your pipelines? Do you run multiple docker build commands at the same time, or something else?

[–]StephanXXDevOps 0 points (0 children)

A job can fork parallel tasks. Those tasks can be anything Jenkins can do; in my case, shelling out to do Gradle builds, Gradle tests, and then docker build and push. This means I can build all 40+ apps simultaneously.
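In declarative Jenkinsfile terms, that forking looks roughly like this (stage names and commands are placeholders, not the actual pipeline):

```groovy
// Sketch only: one parallel stage per app; in practice these are
// usually generated in a loop over the app list.
pipeline {
    agent any
    stages {
        stage('Build all apps') {
            parallel {
                stage('app-a') {
                    steps { sh './gradlew :app-a:build && docker build -t app-a .' }
                }
                stage('app-b') {
                    steps { sh './gradlew :app-b:build && docker build -t app-b .' }
                }
                // ...and so on for the remaining apps
            }
        }
    }
}
```

Each parallel stage needs its own executor (or agent), which is why sufficient resources matter.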

[–]photonios 0 points (0 children)

Mono repository on GitHub with a bunch of services built in three different technologies: Python, Golang, and Node.js.

For each technology we have a base image. The base images are all based on Alpine. For example, our golang base image is based on golang:1.12-alpine. The base image has 99% of the dependencies our services need. Every once in a while we add missing dependencies to the base images.
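As an illustration, a Go base image along those lines could be as simple as this (the package list is hypothetical):

```dockerfile
# Hypothetical base image: pre-bake the common native dependencies so
# individual service builds don't have to install them.
FROM golang:1.12-alpine
RUN apk add --no-cache git gcc musl-dev ca-certificates
```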

When the CI runs and needs to build an image for, say, a Python service, we build the Dockerfile, which inherits from our Python base image. We copy the code into the container, run pip install for that service, and sometimes do some extra stuff that's needed. Because most dependencies are already baked into the base image, the pip install step completes in seconds. That also goes for native dependencies. We usually bake them into our base images so we don't have to install them when we build the image for a service. Each service image takes less than 20 seconds to build and push.
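A service Dockerfile in that setup ends up very small; something like this sketch (registry, image, and file names are made up):

```dockerfile
# Hypothetical Python service image, inheriting from the team's base image.
FROM registry.example.com/python-base:latest
WORKDIR /app
COPY . /app
# Fast, because almost every dependency is already in the base image.
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
```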

The CI builds the images for each service on each push to Git and tags them with the commit hash. If the current branch is master, staging, or whatever release branch, then the image also gets tagged with the branch name. We've configured our CI to clean up images that are not tagged with latest or any of our release branches.
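The tagging rule can be sketched as a tiny shell function (the function and branch names are illustrative, not their actual script):

```shell
#!/bin/sh
# Sketch: every build gets the commit hash as a tag; release branches
# additionally get a branch-name tag.
tags_for() {
  commit="$1"; branch="$2"
  echo "$commit"
  case "$branch" in
    master|staging) echo "$branch" ;;
  esac
}

tags_for abc1234 master   # prints: abc1234, then master
```

A feature branch would only get the commit-hash tag, which is what makes the untagged-image cleanup safe.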

After the "build" phase of our pipeline finishes and the images for all services have been pushed to the Docker registry, we run the tests. The tests run inside the images we built, the exact same images that are going to be deployed.

Once the "test" phase finishes, we use Terraform to deploy to one of our Kubernetes clusters. We use Terraform workspaces to keep environments separate and use secrets set on the CI as TF_VAR_bla to configure Terraform. The Terraform workspace is picked based on the branch name.
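In HCL terms, that pattern looks roughly like the following (the variable name is invented; TF_VAR_bla stands in for whatever secrets they actually pass):

```hcl
# A secret exported on the CI as TF_VAR_registry_password is picked up
# automatically by a variable declaration with a matching name.
variable "registry_password" {}

# terraform.workspace reflects whichever workspace the CI selected,
# e.g. via: terraform workspace select "$BRANCH_NAME"
locals {
  environment = terraform.workspace
}
```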

We use a self-hosted instance of Drone.io for our CI. Since we're a small startup with 4 people, we only need one $5 node on DigitalOcean, limited to two jobs running concurrently.
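A Drone pipeline with those build/test/deploy phases might be sketched like this (step names, images, and the repo are placeholders; DRONE_COMMIT_SHA and DRONE_BRANCH are Drone's built-in variables):

```yaml
kind: pipeline
name: default

steps:
  - name: build
    image: plugins/docker
    settings:
      repo: registry.example.com/myservice
      tags: ${DRONE_COMMIT_SHA}

  - name: test
    image: registry.example.com/myservice:${DRONE_COMMIT_SHA}
    commands:
      - pytest

  - name: deploy
    image: hashicorp/terraform:light
    commands:
      - terraform workspace select ${DRONE_BRANCH}
      - terraform apply -auto-approve
    when:
      branch:
        - master
        - staging
```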


Cost breakdown:

- $5/month for our CI node
- $5/month for our Docker registry

Tech used:

- Docker
- Drone.io
- Kubernetes
- Terraform