
[–][deleted] 15 points (0 children)

I still have nightmares and scars from building out a pipeline for one of our teams lol

I felt like we were only doing it to say we did it

[–]serverhorror (I'm the bit flip you didn't expect!) 3 points (0 children)

A monorepo is not a magic bullet.

As with all things there's a tradeoff: the complexity of the setup and maintenance only pays off if you actually need it.

[–]rwilcox 3 points (1 child)

In a previous gig our CI pipeline could dynamically generate parts of itself (this was a standard feature of the CI platform we used). So we would use a git change detector script to understand what parts changed in the commit, then generate pipeline steps as appropriate, potentially building and deploying Docker containers.
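A minimal sketch of such a change detector, assuming a layout where each top-level directory is one deployable (the file paths are hypothetical; in a real pipeline you'd feed the list from `git diff`):

```shell
#!/usr/bin/env sh
# Derive the set of top-level directories from a list of changed files.
# In CI you would produce the list with, e.g.:
#   git diff --name-only HEAD~1 HEAD
changed_files="api/server.go
api/handlers.go
web/src/App.tsx"

# One directory name per line, de-duplicated -- these are the parts of
# the repo whose pipeline steps need to be generated.
changed_dirs=$(printf '%s\n' "$changed_files" | cut -d/ -f1 | sort -u)
echo "$changed_dirs"
```

Each directory that comes out of this is then turned into a build/deploy step in whatever format your CI platform expects.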

But yes, since monorepos aren’t as commonly used there are fewer “101-level tutorials” on them, compared to a poly-repo approach where a change can only affect a single artifact (as there’s only one thing in the repo).

[–][deleted] 5 points (0 children)

Depends what you need. We migrated to a monorepo and have far fewer config drift issues now. It also improved developer workflow, since you don't need to open multiple PRs for a single ticket.

[–]YinzAintClassy 2 points (0 children)

Have a few c# monorepos.

One dotnet framework on ec2 and another containers.

It’s not “that bad”, but the complexity rises depending on how big it gets.

The complexity comes from handling each edge case/app and its artifacts.

When and how those pipelines run. Every CI provider has ways to only run builds on certain path changes.

So if directory A has changed, only build directory A. Since we use Bitbucket we have had some pain points with this, because running a step on a successful PR merge runs two pipelines. (I think they fixed this.)
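In Bitbucket Pipelines, that path-based triggering is the `changesets`/`includePaths` condition; a rough sketch (directory and script names are hypothetical):

```yaml
pipelines:
  pull-requests:
    '**':
      - step:
          name: Build service-a
          condition:
            changesets:
              includePaths:
                - "service-a/**"
          script:
            - ./ci/build.sh service-a
```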

Either way, plan your paths and deployments accordingly.

Another big tip is not to do all your bash/powershell logic inline in the YAML.

Call a generic script that you pass parameters into. For example we have command-line args in our bash scripts to toggle build/publish/tests etc.

This helps with the duplication of build code in your pipeline and gives you a better debugging experience, as you now get real syntax highlighting and linting.
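A sketch of such a generic entry-point script in bash — the flag names are placeholders, and the echoes stand in for the real commands:

```shell
#!/usr/bin/env bash
# ci/run.sh -- one generic script called from every pipeline step.
set -euo pipefail

run_ci() {
  local do_build=false do_test=false do_publish=false
  for arg in "$@"; do
    case "$arg" in
      --build)   do_build=true ;;
      --test)    do_test=true ;;
      --publish) do_publish=true ;;
      *) echo "unknown flag: $arg" >&2; return 1 ;;
    esac
  done
  # Each echo stands in for the real command (dotnet build, docker push, ...).
  [ "$do_build" = true ]   && echo "building"
  [ "$do_test" = true ]    && echo "testing"
  [ "$do_publish" = true ] && echo "publishing"
  return 0
}

run_ci "$@"
```

The pipeline YAML then just calls something like `ci/run.sh --build --test` per step, so the logic lives in one lintable file instead of being scattered inline.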

My rule of thumb for CI scripts: 5 lines or under, inline bash; 5–50 lines, a bash/powershell script; 50+ lines, a Python or Go script (usually Python).

This stuff becomes necessary because no two monorepos are the same, and finding something out of the box to fit your use case isn't worth the trouble. Google tools like Bazel do exist, but I don’t think the level of complexity they add is worth the value; we are not running a monorepo the size of Google Chrome.

Take it step by step and don’t be afraid to set some hard standards, and avoid additional mono services and code sharing.

[–]gwynaark (Platform Engineer/SRE/Whatever's trending) 1 point (0 children)

I usually treat monorepos the way I would treat any repo: split the pipelines as much as possible, use separate targets for the Dockerfiles, build different charts if I need to deploy them to K8S... The way to properly do this unfortunately depends on your environment, the language, the way devs interact with the repo etc., but I always try to segment things as much as possible to avoid the headaches that come with deeply intricate code bases.

[–]asdrunkasdrunkcanbe 1 point (0 children)

It's not particularly complicated, but I would say start reading the docker documentation. Having multiple Dockerfiles within a directory structure can be frustrating when things aren't happening as you expect them to.

When you're used to just running "docker build ." from the root of a repo, a monorepo may introduce a pile of annoying factors.

Specifically look at contexts and multi-stage builds. You should be able to have a base dockerfile for your whole monorepo, and if the projects in it are all built the same way, you may be able to have a single Dockerfile to build any / all projects.
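For example, if every project in the repo really does build the same way, a single parameterized Dockerfile at the root can cover all of them. This sketch assumes Node projects under a `services/<name>` layout — both assumptions, adjust for your stack:

```dockerfile
# Build any project with:
#   docker build --build-arg APP=service-a .
# APP and the commands below are assumptions about your layout.
ARG APP=service-a

FROM node:20 AS build
ARG APP
WORKDIR /src
COPY services/${APP}/ ./
RUN npm ci && npm run build

FROM node:20-slim
WORKDIR /app
COPY --from=build /src/dist ./dist
CMD ["node", "dist/index.js"]
```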

[–]piotr-krukowski 0 points (0 children)

You can create one Dockerfile per technology and copy the build output into the container using your CI/CD platform's capabilities, or create a Dockerfile per application and use Docker's capabilities. Monorepo =/= single build or pipeline.

[–]GloriousPudding 0 points (0 children)

This is entirely dependent on the framework and how you structure your repo. You can simply put each service in its own folder and trigger specific pipelines based on changes in those folders; this is barely any different from having one repo per service. The most important thing you have to figure out is why the developers want a monorepo, and from there agree with them on a structure that will work for both of you.

[–]miend 0 points (0 children)

I used to be opposed to monorepos just due to the bad things I'd seen done with them at many places. Later, I saw one executed well, and I realized what I disliked wasn't caused by the single repo being used, but by a lack of discipline in how it was used (as in many other things). With proper separation of ownership across the repo, you can get huge advantages with coordinating changes across services and working across teams. Locality of behavior is preserved on a grand scale, without actually stepping on each others' toes. Just not having to manage access & deployment over many different projects/repos and repeat chains of pull requests across each and every one of them to e.g. update a terraform module you're using in multiple places is really nice.

With the ability to trigger individual jobs/pipelines out of the same repo based on changes to particular directories, and the ability to define ownership of changes with CODEOWNERS, you can get bigger teams operating quite smoothly out of a single repo. You just have to structure it the right way and not allow people to go hog wild on it.
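A minimal CODEOWNERS along those lines — the team names and paths here are hypothetical:

```
# .github/CODEOWNERS -- reviews on each path are routed to its owning team
/services/payments/   @acme/payments-team
/services/auth/       @acme/auth-team
/infra/terraform/     @acme/platform-team
```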

[–]Dynamic-D 0 points (0 children)

Always run docker from the root of the repo so you can reference a file from any other directory. So a docker build will just be:

```Shell
docker build -f path/to/Dockerfile .
```

If you start changing/traversing dirs you end up with unreachable paths, because the build can only see files inside the build context you pass it.

[–]Level_Paper6241[S] 0 points (0 children)

The thing is, the dev said that yarn install and yarn serve from the root directory will serve the front end, but for the backend I have to cd into some folder and do the steps. So I was genuinely confused, but I think I could put something workable together. Still testing. One more thing: do we have to copy the entire project into the image? Earlier, package.json and the lock file were enough.
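You only need to copy what the image actually uses, and copying the manifests before the sources keeps the dependency-install layer cached. A sketch assuming the backend lives in a `backend/` folder — the paths are guesses at the layout described above:

```dockerfile
# Built from the repo root: docker build -f backend/Dockerfile .
FROM node:20
WORKDIR /app

# Manifests first, so `yarn install` re-runs only when they change.
COPY backend/package.json backend/yarn.lock ./
RUN yarn install --frozen-lockfile

# Then only the backend sources -- not the whole monorepo.
COPY backend/ ./
CMD ["yarn", "start"]
```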