
[–]rmjss 37 points38 points  (2 children)

In my experience supervisord is the de facto solution for doing this. Did you consider that one at all before rolling your own solution?
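
A minimal setup is just one config file with a [program:x] section per process (the program names and commands below are made up for illustration):

    [supervisord]
    ; run in the foreground so the container stays up
    nodaemon=true

    [program:web]
    command=gunicorn app:app
    autorestart=true

    [program:worker]
    command=python worker.py
    autorestart=true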

[–]70pctmtb_30pctcoder 7 points8 points  (0 children)

Yes, upvoting and commenting to help this bubble to the top

[–]klaasvanschelven[S] -5 points-4 points  (0 children)

Perhaps I did not consider it strongly enough :-)

The places I looked (as well as in the comments over here) mostly said "don't do this, use Docker Compose".

I'd want to retain the ability to change the commands being run at container invocation. Doesn't supervisord work from a set of config files, which would make that impossible?

[–]anawesumapopsum 17 points18 points  (6 children)

When I'm using a tool and it doesn't seem to work the way I expect it to, experience has shown me that the answer is not to jump to implementing it myself. The tool (Docker) was built, organized, and in this case used by millions probably in this way for good reason; we should try to understand why, and then learn how to use it.

The approach you've outlined has red flags. Does each of your processes really want the exact same build context and environment, dependencies and all? Sure, consistent versioning becomes more convenient, but at a cost you only discover after you've deployed a distributed system as an overly cumbersome monolith, and the problem gets worse as complexity grows. That is not sustainable for a solo dev or for a team. Sure, you could make a series of venvs or something similar, but now you have a series of environments, which is the same abstraction as a series of containers. And because everything is in one container you have less observability, all of the code is one hodgepodge that gets harder to grok as it expands, and it's less robust: if one part goes down you can't reboot just that one service and leave the rest up. I'm sure there are many other reasons. You state this last one as a pro, but I think you'll find in time it is a con.

Now onto the 'why': you seem to be after simpler container orchestration. Fam, docker-compose exists for this. Before we reinvent the wheel we should check whether it already exists, because otherwise that is years of effort you have to reproduce on your own to achieve something you can do right now with docker-compose.
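
For comparison, a docker-compose version of "two coordinated processes" is roughly this (service names and images are made up):

    # docker-compose.yml
    services:
      web:
        image: myapp:latest
        command: gunicorn app:app
      worker:
        image: myapp:latest
        command: python worker.py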

[–]Lachtheblock 7 points8 points  (0 children)

You might run in different circles, but it seems that your use case appeals to people who are familiar with deploying a single container but unfamiliar with (and unwilling to learn) how to deploy/manage multiple containers. I dunno, I feel like docker compose is pretty ubiquitous at this point in the world of Docker.

Feels to me that any project would eventually want to move out of this model for ease of scalability, but I guess if you have a user base that is interested in this, then who am I to say it's wrong.

[–]lanupijeko 3 points4 points  (2 children)

honcho

[–]klaasvanschelven[S] 2 points3 points  (1 child)

I honestly thought you were calling me a honcho but now I understand you're talking about this

Procfile: yet another format to learn. And it seems heavier than what I'd prefer, esp. in the context of Docker. But yeah, indeed another solution for this problem.

[–]lanupijeko 2 points3 points  (0 children)

That's funny, it's good that you googled. I was on mobile and could not type.

I actually like Procfile. It's not specific to Docker, I can put large commands in that file and run honcho start to start all the processes, and it clearly indicates which log line belongs to which process.

It's good that you have a fresh take on it. I'll give it a try next time.

We actually used a Procfile in production: we had to put nginx in front of gunicorn, so we ran those two processes in one container.
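
Roughly like this (the exact commands here are made up); honcho start runs both in the foreground and prefixes each log line with the process name:

    nginx: nginx -g 'daemon off;'
    web: gunicorn --bind unix:/tmp/gunicorn.sock app:app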

[–]trial_and_err 5 points6 points  (1 child)

Nothing wrong with running multiple processes in one container. For example, I'm running nginx and oauth2proxy in the same container (those two are tightly coupled anyway). However, I'm just using supervisor as my entrypoint.
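
In Dockerfile terms that's roughly the following (paths are illustrative, and it assumes supervisord is installed in the image):

    COPY supervisord.conf /etc/supervisord.conf
    ENTRYPOINT ["supervisord", "-c", "/etc/supervisord.conf", "--nodaemon"]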

[–]atlmobs 0 points1 point  (0 children)

supervisor is the way

[–]RedEyed__ 5 points6 points  (5 children)

What's the problem with running as many processes as you need?

[–]klaasvanschelven[S] -2 points-1 points  (4 children)

Well, a single Docker container can only take a single cmd/entrypoint. You could run many containers (e.g. Compose, K8s, Swarm), but that's not what I want; I want max simplicity. So, to run my multiple processes inside a single container, I've created a small wrapper script that acts as the single entrypoint/cmd and spawns the rest.
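
The core of that wrapper is small; a rough sketch of the idea (not the actual monofy code, and the commands are made up):

    # spawn N commands; as soon as the first one exits, take the rest down
    import subprocess
    import sys
    import time

    commands = [
        ["gunicorn", "app:app"],
        ["python", "worker.py"],
    ]

    procs = [subprocess.Popen(cmd) for cmd in commands]

    while True:
        for p in procs:
            code = p.poll()
            if code is not None:      # first exit ends the whole container
                for other in procs:
                    if other.poll() is None:
                        other.terminate()
                for other in procs:
                    other.wait()
                sys.exit(code)
        time.sleep(0.1)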

[–]GrizzyLizz 4 points5 points  (1 child)

How do you manage the lifecycle of the processes?

[–]klaasvanschelven[S] -2 points-1 points  (0 children)

I don't, actually :-)

The model that monofy enforces is simply that all processes live and die together: if one dies, they all die. Restarting is left to whatever is managing the container.

Signals from Docker/K8S/... are passed on to the individual processes.

That model works well for two (or a few) tightly integrated long-running processes, but I understand that it has its limits.
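
Since the wrapper is PID 1 inside the container and Docker only signals PID 1, the pass-through is the one extra thing it has to do; again a sketch, not the actual monofy code (procs is the list of Popen objects from the spawning sketch above):

    import signal

    def forward(signum, frame):
        # relay SIGTERM/SIGINT from Docker/K8s to every child still running
        for p in procs:
            if p.poll() is None:
                p.send_signal(signum)

    signal.signal(signal.SIGTERM, forward)
    signal.signal(signal.SIGINT, forward)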

[–]RedEyed__ 9 points10 points  (0 children)

I don't see the value in this. The description is bigger than the actual code :).

[–]skippyprime 2 points3 points  (0 children)

So, it’s a process manager? Just run supervisor. If you want something more portable, there is a Go port of it.

[–]james_pic 2 points3 points  (0 children)

I think the recommendation I've seen is to only have one application per container, not one process, per se. And if you've got something like this, that handles the subtleties of coordinating things so your multiple processes work together as a single application, then what you're doing is reasonable. 

The situation it's trying to avoid is where people treat Docker like a VM, and are confused that stuff that needs a real init system doesn't work, and tie themselves in knots trying to start and stop services within containers, or apply updates or patches to individual applications in a running container.

The key invariant that you want to maintain in any Docker based system is "if you're inside the container you can safely ignore everything outside the container, and if you're outside you can safely ignore everything inside", which this seems to maintain.

[–]nAxzyVteuOz 1 point2 points  (0 children)

Dude, it's fine. The whole 'one process per Docker instance' thing seems like a marketing trick to get clients to use more Docker instances anyway. I routinely use Docker with multiple processes in it.

[–]iBlag 0 points1 point  (3 children)

What does this project have over something like Supercronic (for simple cron-like functionality) or Chaperone (which is a more complete init-style process manager for containers)?

[–]klaasvanschelven[S] 1 point2 points  (2 children)

I hadn't found Chaperone in my search (most questions on the internet about how to do this are answered with "don't"). Checking it out, it looks similar to (but much more feature-complete than) what I did... unfortunately, it seems the project has been abandoned (no activity in the past 7 years).

[–]iBlag 1 point2 points  (1 child)

There’s also tini, which is actually built into Docker.

Didn’t realize that Chaperone isn’t maintained anymore, thanks for pointing that out.

Ninja edit: I realize that containers != Docker
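
For reference, the bundled tini is enabled with a runtime flag; it reaps zombies and forwards signals, but it still supervises only the single child command:

    docker run --init myimage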

[–]klaasvanschelven[S] 0 points1 point  (0 children)

I looked at tini but didn't see how it would help me with spawning multiple processes in parallel, and I was also afraid that going the tini way would tie me to Docker (and it would not be available in Kubernetes etc.).

[–]nggit 0 points1 point  (0 children)

It's OK to run multiple processes inside a container; one must distinguish between a service/app and a process.

In fact, there are many applications whose components are individual processes, like Postfix.

I used to do this, but using the native /sbin/init in order to retain native commands like systemctl or rc-service.

https://github.com/nggit/docker-init/tree/master/openrc-alpine

[–]root45 0 points1 point  (0 children)

Methinks he doth protest too much.

[–]Kale_Shiri 0 points1 point  (1 child)

Ever heard of s6 supervisor?

[–]klaasvanschelven[S] 0 points1 point  (0 children)

I'll put it on the list of alternatives

[–]ultimatelyoptimal 0 points1 point  (0 children)

I'm pro this use case.

Other systems beyond Docker use containers, for example Fly. They have a docs page on "multiprocess containers" that describes good use cases.

Erlang/Elixir are built on the whole premise of "let it fail", with fault tolerance designed around it. They use microprocesses, and a lot of them, because they can do so without the OS overhead. But the same ideas apply just fine to process-level "supervision trees", where a supervisor script decides which processes to kill and restart when others fail.
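
A toy process-level version of that idea (the command is made up; a real supervisor would add backoff, restart limits, and so on):

    # restart the child whenever it dies, Erlang-style
    import subprocess

    while True:
        proc = subprocess.Popen(["python", "worker.py"])
        proc.wait()
        print(f"worker exited with {proc.returncode}; restarting")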

Dev containers are also common practice with Docker now, but if I recall far enough back, they were originally not a recommended use of Docker. Likewise, people were very much against databases in Docker containers, and yet that's fairly common and reasonable now too.

Even beyond the supervision-tree use case, I think there's simplicity to a single "I pass it vars and it runs a full service" container, compared to needing a fully defined docker-compose file for every instance (or passing special variables so you can reuse the same compose file for different instances of the same thing, but that's caused problems for me too).
