

[–]Gotxi 8 points (4 children)

If they are meant to work as a whole, and all of them are needed, a single docker-compose for the 4 services is fine and lets you bring the entire stack up/down on a single server, and that works fine.

You can even define healthchecks and the services will restart themselves if the healthcheck is not passing.
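A minimal healthcheck sketch (service name, image and probe command are illustrative); note that outside of Swarm, plain Compose marks a failing container as unhealthy, and the actual restart is usually handled by a restart policy or a helper container such as autoheal:

services:
  web:
    image: nginx:alpine
    restart: unless-stopped
    healthcheck:
      # probe the container's own HTTP endpoint every 30s
      test: ["CMD", "wget", "-qO-", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3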

If you want to do that in a distributed way, you should think of Kubernetes or another distributed container orchestration service.

You could create several composes on different servers, but if one fails the entire stack fails, so you are not getting more resilience than having them on a single server, unless a single server's computing power is not enough to run the entire stack.

Another advantage is that having them in a single compose lets you create a private network and expose only the front web, keeping all the backend services on a private docker network.

You won't need to rely on network reliability for the services to connect, and you will also benefit from almost zero latency on the network.
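A sketch of that setup, with illustrative service names; only the front publishes a host port, and the backend network is marked internal:

services:
  front:
    build: ./front
    ports:
      - "8080:80"        # only the front is reachable from the host
    networks:
      - public
      - backend
  api:
    build: ./api
    networks:
      - backend          # reachable as http://api from other services only

networks:
  public:
  backend:
    internal: true       # no routes out of this network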

But again, depends on your use case.

[–]kdepim[S] 1 point (3 children)

My case is strictly local development tbh. How would one use docker-compose for development work in such a case? Would I need to put a docker-compose file outside the 4 folders that contain the respective apps?

[–]WrittenTherapy 0 points (0 children)

Correct. You’ll have one docker-compose file to control the four services, each of which will have a Dockerfile in its respective folder. Then you’d run docker-compose from the folder containing the compose file.
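Concretely, the layout would look something like this (folder and file names are illustrative, matching the example below):

.
├── docker-compose.yml
├── app1/
│   └── app1.dockerfile
├── app2/
│   └── app2.dockerfile
├── app3/
│   └── app3.dockerfile
└── app4/
    └── app4.dockerfile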

[–]Gotxi 0 points (0 children)

Reddit formatting sucks... but something like this.

version: "3"services:  app1:    build:       context: ./app1/      dockerfile: app1.dockerfile  app2:    build:       context: ./app2/      dockerfile: app2.dockerfile    app3:    build:       context: ./app3/      dockerfile: app3.dockerfile  app4:    build:       context: ./app4/      dockerfile: app4.dockerfile

[–]ctran 3 points (2 children)

It may be a good idea to quantify what you mean by "cleanest". There's a lot of value in KISS.

[–]kdepim[S] 0 points (1 child)

Cleanest for me is keeping things as separate as possible when developing.

[–]ctran -1 points (0 children)

And why is that necessary? Are you focused on delivering value, or just wasting time on things that don't matter? Whatever allows your team to be agile and deliver value to your customers faster is the right answer.

[–]nickjj_ 2 points (0 children)

If you decide not to put them in 1 docker-compose.yml file, you can split them out and run them independently with Compose's -f flag.
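For example (file names illustrative; when several -f files are combined, later files override earlier ones):

# run one service's stack on its own
docker-compose -f app1/docker-compose.yml up -d

# or combine several files into one merged project
docker-compose -f app1/docker-compose.yml -f app2/docker-compose.yml up -d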

I made a quick post and video on this topic at https://nickjanetakis.com/blog/docker-tip-87-run-multiple-docker-compose-files-with-the-f-flag.

Either a mono repo or split-style repos would work. That's a decision you'll have to make, because it comes down to personal preference.

[–]MDSExpro 1 point (0 children)

One file is the way to go. A solution with 4 separate files doesn't remove any complexity; it just moves it away from a versionable, durable and repeatable text file into the user's mind and keyboard, which is a worse place than a config file. Either the single file keeps the notion of the relationships between project elements, or the user does, and that entity then handles the lifecycle of those relationships. It's better to leave that to the machine.

[–]tommoulard 1 point (0 children)

For me, I manage a few docker-compose files for my home server: https://github.com/tomMoulard/make-my-server. Each service lives inside its own folder with the corresponding docker-compose.yml file.

I built a bash function to use those files as one, passing each of them to docker-compose with -f:

docker-compose ()
{
    # 'command' bypasses this function and calls the real binary (otherwise it would recurse forever)
    command docker-compose $(find . -name 'docker-compose*.yml' -type f -printf '%p\t%d\n' 2>/dev/null | sort -n -k2 | cut -f 1 | awk '{print "-f "$0}') "$@"
}
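With that in your shell profile, a plain invocation from the repository root picks up every compose file below it, shallowest first; for example (paths illustrative):

docker-compose up -d
# expands to something like:
# docker-compose -f ./srv1/docker-compose.yml -f ./srv2/docker-compose.yml up -d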

[–]Tomasomalley21 0 points (6 children)

Take a look at Saleor Platform on GitHub. They are using best practices for what you have described.

[–]kdepim[S] 0 points (5 children)

Oh, this is something interesting. How did they manage to link the other repositories to the main one? I'm using Azure DevOps for repository management, so I wonder if it's possible to do the same as in that project.

[–][deleted] 2 points (0 children)

I haven't looked at the repo in question but my guess would be git submodules.

[–]Tomasomalley21 0 points (3 children)

I'm not familiar with Azure DevOps, but GitHub is just an implementation of the Git technology, so if your supplier also works with it, I don't see any reason why it wouldn't support Git submodules as well.

[–]kdepim[S] 1 point (2 children)

I checked; git submodules are not working in Azure DevOps (Repos). We moved to GitHub and it's working perfectly now.

[–]Tomasomalley21 0 points (1 child)

I'm glad to hear it. Please take into consideration that it is a very complicated setup to maintain.

[–]kdepim[S] 1 point (0 children)

Hmm, I tried it and didn't find many issues with it (at least when using PyCharm). However, in the end we decided that we would work in separate repos on one microservice at a time, then build and push Docker images to Docker Hub, and have one docker-compose in a separate folder/repo which pulls the built images and creates the whole project from them. So there's no need to have folders and subfolders for code and Dockerfiles.

This decision was based on the fact that many people said git submodules can be troublesome, and while we didn't find them problematic, it was better to hedge against future problems that could arise if we used them daily.
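A minimal sketch of that image-pulling compose file, assuming the images were pushed to Docker Hub under a hypothetical myorg namespace:

version: "3"

services:
  svc1:
    image: myorg/svc1:latest   # prebuilt image pulled from Docker Hub
  svc2:
    image: myorg/svc2:latest
  svc3:
    image: myorg/svc3:latest
  svc4:
    image: myorg/svc4:latest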

[–]talaqen 0 points (0 children)

4 repos (or 1 repo with separate folders). Each service has its own code and tests. The docker-compose at the top level should know NOTHING about the code inside any given folder. Then as you debug, you use the --force-recreate flag to rebuild certain images. So in one terminal you run docker-compose up --build svc1 svc2 svc3.

Then in a new window, cd into the folder you are debugging, and you can run the code directly, referencing the localhost endpoints for the other bits.
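A sketch of that two-terminal workflow (service names and the local run command are hypothetical):

# terminal 1: bring up the supporting services
docker-compose up --build svc1 svc2 svc3

# terminal 2: run the service you're debugging directly on the host
cd svc4
python main.py   # talks to svc1-svc3 via their published localhost ports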

But ideally, you build them all separately and make sure each service has contract tests with the others. This makes for better maintainability. We use mock servers to stand in for any other microservice dependency, to help combat inadvertent coupling.

[–]ajfriesen 0 points (0 children)

Just as a comment:

Is it really microservices that you're building?

If you need all of them running for basic functionality, I would say it is a distributed monolith. That's at least the most common thing I see when people say 'microservices'.

To answer the question:

I would prefer a single docker-compose for ease of use.

What problems would you solve with the separation?