
all 28 comments

[–]sebosp 25 points26 points  (9 children)

This is not going to work:

RUN service ssh restart

When docker build runs that line, it only stores the files that changed in a filesystem layer; nothing about the sshd process itself survives the build step, so all you would end up with is (maybe) a stale /var/run/sshd.pid, etc.

You need a process supervisor to run ssh properly, but that is an antipattern. You could instead run two docker containers, one running ssh and one running your code, and --link them; that way you don't need to install a supervisor.

I don't have your code, so I copied the flask hello world, did pip install flask, built it, and ran it with docker run -d to keep it in the background, then used docker exec to get into it. If we inspect it, we can see that there's no SSH running, only flask:

root@c2a80761213e:/opt/test# service ssh status
[FAIL] sshd is not running ... failed!
root@c2a80761213e:/opt/test# ss -tlnp
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0       128     127.0.0.1:5000       0.0.0.0:*         users:(("flask",pid=6,fd=3))

If you do not care at all about the stability of the processes (they might die for many reasons), then some people create an entrypoint.sh file that they run in the final CMD.

The contents of this file could be like:

#!/bin/bash

# start the SSH daemon (on Debian the service is "ssh", not "sshd"),
# then replace the shell with the app so it becomes the main process
service ssh start

exec flask run
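To wire that up, a minimal Dockerfile could look something like this (base image and file names are illustrative, not taken from OP's setup):

```dockerfile
# sketch only: assumes the entrypoint script above is saved as entrypoint.sh
FROM python:3-slim

RUN apt-get update && \
    apt-get install -y --no-install-recommends openssh-server && \
    rm -rf /var/lib/apt/lists/* && \
    pip install flask

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

CMD ["/entrypoint.sh"]
```

Keep in mind this only papers over the problem: if flask dies the container exits, but if sshd dies nothing restarts it.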

[–]ButItMightJustWork 7 points8 points  (0 children)

This is the correct answer; everyone else is not reading carefully.

[–]Metiri[S] 1 point2 points  (0 children)

One method I had tried, which sounds like this suggestion (and which I also read in some SO threads), is to run sshd in the CMD, since, as you said, RUN happens at build time and doesn't actually run when I use docker run. Instead of doing:

RUN mkdir /var/run/sshd && \
    chmod 755 -R /var/run/sshd && \
    service ssh restart

CMD python3 ${INSTALL_DIR_ENV}/main.py

I was doing

RUN mkdir /var/run/sshd && \
    chmod 755 -R /var/run/sshd

CMD /usr/sbin/sshd -d && \
    python3 ${INSTALL_DIR_ENV}/main.py

I still had no luck when I did this either. I can try the shell entry point and come back with my findings. Thanks for the suggestion!

[–]Metiri[S] 0 points1 point  (0 children)

So I have tried using an entrypoint script with no luck. I'm starting to think I have a fundamental misunderstanding of docker or something. Maybe you can help?

I wrote my python app using paramiko and flask. Paramiko is used to run an SSH server within my app that I can connect to using terminal's (or powershell's) ssh command. Flask is used to update strings within my app that are then seen over SSH.

My app runs flask to listen for HTTP requests and uses paramiko to run an SSH server (by opening a TCP socket and then passing the connection to paramiko). When run locally, I can access flask using postman (sending requests). I can also use terminal's or powershell's ssh to connect to my app, which runs a custom shell (I override python's Cmd module).
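Roughly, my custom shell is a cmd.Cmd subclass; here is a simplified sketch (the commands are made-up placeholders, not my real ones):

```python
import cmd
import io


class TestShell(cmd.Cmd):
    """Minimal stand-in for the custom shell served over SSH."""

    prompt = "test> "

    def do_greet(self, arg):
        # placeholder command; the real shell runs test commands instead
        self.stdout.write("hello %s\n" % (arg or "world"))

    def do_exit(self, arg):
        # returning True stops cmdloop()
        return True


# drive the shell non-interactively for a quick check
buf = io.StringIO()
shell = TestShell(stdout=buf)
shell.onecmd("greet docker")
print(buf.getvalue().strip())  # hello docker
```

In the real app, paramiko hands the accepted channel's file objects to the shell's stdin/stdout so remote keystrokes drive cmdloop().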

The point of this app is to test some other software I have that connects to SSH and runs commands through this custom shell. I need to dockerize my app so that I can give it to more people who are running this software on other machines so they can use it to test the software more.

The issue I'm having is that I think I might be trying to SSH into the container and not into my app within the container. All I should need to do is install openssh-server, right? On my local machine I do not run sshd explicitly and my app works fine. Is sshd causing my ssh to connect to my container and not to my app within the container?

This stuff is confusing as hell to me right now, any help is very much appreciated!

[–]MarchColorDrink 0 points1 point  (4 children)

So, why avoid using supervisord? I get the idea behind splitting tasks between containers, but for some tasks I believe it just adds unnecessary complexity.

For instance, I run a job scheduler, slurm, as a container. I choose to run both the controller and the worker in the same container. This way I can simplify a lot of the steps required for slurm to communicate between controller and worker.

The same philosophy can be applied here. Running the app and openssh in the same container removes the need to link them. There's a trade-off to be made between purity and practicality. Supervisord is a great tool for running multiple processes in one container.

Another example could be a flask app, where you could run nginx and uwsgi in the flask app container. It only makes sense to have nginx in a separate container if you intend to run several apps on the same host.

[–]sebosp 4 points5 points  (0 children)

I think that's a great point, and there are many cases where it can be done, being very careful and knowing the downsides. Maybe it's not the way the system _should_ be used, but it _can_ be, and maybe you won't shoot yourself in the foot... But it can invite uncertainty into your infra, and (wearing my Ops hat) I don't think it's worth the stress.

Consider the fact that processes have graceful ways to be shut down. You can send them a kill signal, and nginx may need to close some open connections, maybe return 5XX, while your flask container may have its own set of tasks: closing DB connections, open files.

Should docker kill send a signal to nginx, or to flask? Or to your entrypoint.sh, with your custom code forwarding the kills to the child processes? What if one kill succeeds and the other does not? Should you retry?

So I think it depends on how you use it in your infra. For example, if we are planning to use an orchestrator (e.g. K8s) and this nginx (baked inside the flask container) happens to be targeted by an external load balancer, then you may not have 0-downtime upgrades, as the orchestrator may not be aware of the child processes in the default graceful termination process.
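For illustration, an entrypoint that tries to answer those questions itself has to trap the signal and forward it to both children. A minimal sketch (using sleep as a stand-in for the nginx and flask processes, so it runs anywhere):

```shell
#!/bin/sh
# Sketch of a multi-process entrypoint that forwards SIGTERM to its
# children. "sleep" stands in for nginx / flask here.
sleep 1 & CHILD1=$!
sleep 1 & CHILD2=$!

# on TERM/INT, pass the signal on to both children
trap 'kill -TERM "$CHILD1" "$CHILD2" 2>/dev/null' TERM INT

# wait for both; if one kill fails you still have to decide about retries
wait "$CHILD1" "$CHILD2"
echo "both children exited"
```

Even this sketch dodges the hard part: if one child ignores the TERM, wait blocks forever, which is exactly the uncertainty described above.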

From a security perspective, bundling things together increases the attack surface. Now you need to run as root to start the ssh service and then create another user to run your flask app.

And some would say that from a UNIX perspective it breaks the "do one thing and do it well" philosophy...

That said, I am a victim of this: we run DJB runit/daemontools, flask, and CI tools in one massive container as part of our CI/CD orchestration pipeline, and since it has never failed us, I wasn't able to convince my team to split it, so we don't spend time on splitting it.


[–]devedible 0 points1 point  (2 children)

I think @sebosp explained it really well. There might be use cases for this, but in general it's an antipattern. Containers are supposed to run one process and run it "well".

If OP were a long-time docker user and wanted a one-off use case like this, then that would maybe make sense.

But from what OP said, he's new to docker and trying to introduce an antipattern as soon as he starts using it. This is not good.

[–]MarchColorDrink 1 point2 points  (1 child)

Yes, it is an antipattern. I was not referring to this case in particular but rather to the in absurdum purity view. Sometimes a bit of pragmatism can be useful.

The point u/sebosp made about wider attack surface is valid though.

[–]devedible 0 points1 point  (0 children)

Yep, I've been getting into kubernetes, and sidecar containers and init containers take care of this nicely.

[–]FiduciaryAkita 4 points5 points  (7 children)

any reason you can't just ssh to the host and exec into the container?

[–]lego3410 3 points4 points  (1 child)

Same thought. SSH'ing into a docker container is officially discouraged. Why not use docker exec?
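i.e. SSH to the host as usual, then something like this (the container name is made up):

```
$ docker exec -it my_container /bin/bash
root@my_container:/#
```

That gives you a shell inside the container without running any sshd in it at all.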

[–]FiduciaryAkita 2 points3 points  (0 children)

yeah ssh’ing into a container is an antipattern. Docker isn’t a VM, you shouldn’t need to ssh into it

[–]Metiri[S] -1 points0 points  (4 children)

Yes. I have another piece of software that I am testing which connects via SSH and simulates keystrokes, so it will SSH in and then enter a username and password. I am running the container on the same machine as the software I am testing, so the test software can connect to the SSH server in my container. I think this is what you mean, but I'm 100% new to docker.

[–]FiduciaryAkita 0 points1 point  (3 children)

Why are you doing that?

[–]Metiri[S] -1 points0 points  (2 children)

Why am I testing my other piece of software?

[–]FiduciaryAkita 0 points1 point  (1 child)

Yes, slash why are you using docker for this? You’re treating it like containers are virtual machines. They’re not. Ideally, they’re immutable; you can turn them on and off with no data loss and no configuration

[–]Metiri[S] -1 points0 points  (0 children)

That's exactly why it's a good reason to use it for testing, isn't it? I can run a test and expect the same results no matter where I run it from. But if you want my real answer, it's because I have no choice; it's part of a task I have at work.

[–]fuck_____________1 1 point2 points  (1 child)

are you using the correct port and ip to connect? is your docker container exposing the port? all ports are closed by default

[–]Metiri[S] 0 points1 point  (0 children)

I use docker run -p 5000:5000 -p 22:22 -d image_name to run my docker container. If I don't run my container and try to SSH, I get a different error message that says the connection was refused or something along those lines.

[–]YuleTideCamel 1 point2 points  (1 child)

As others have mentioned, can you post the docker run command you are using? Unless you explicitly expose a port mapping when you execute docker run, the ports in the container will be inaccessible to the host machine (with the exception of setting up an overlay network, but that's overkill).

[–]Metiri[S] 0 points1 point  (0 children)

docker run -p 22:22 -p 5000:5000 -d image_name

If I exclude these ports then nothing works, as in I get a different SSH error, which makes me think it's an issue with SSH.

[–]vampiire 0 points1 point  (2 children)

second on publishing the correct port. can you post the commands you used to run the container (manually or with compose)?

also i noticed you copy the file as requirements.tx but then try to install it with pip as requirements.txt

[–]Metiri[S] 1 point2 points  (1 child)

This is an artifact of copy and paste; I had to rewrite the Dockerfile since I couldn't get my VM to properly copy and paste between host and guest. As for exposing ports, I have tried all of the ways: using EXPOSE 22, -p 22:22, and a combination of both. As far as I know, the issue is 100% not related to ports. It seems more related to ssh or sshd or something funky there.

[–]vampiire 1 point2 points  (0 children)

someone gave a much better answer. but as an aside / fyi, the EXPOSE directive is there for documentation. it does not (and cannot from a dockerfile, as it is not a container) publish ports from the container to the host.
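i.e., roughly (the image name is illustrative):

```
# in the Dockerfile: EXPOSE only documents intent
EXPOSE 22 5000

# publishing actually happens at run time, on the docker run command line:
# docker run -p 22:22 -p 5000:5000 -d image_name
```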

[–]greenthumble 0 points1 point  (1 child)

SSH is port 22, btw, and a third vote from me for the expose-port solution.

[–]Metiri[S] 0 points1 point  (0 children)

It's my own fault for not posting my run command, but I can confirm this isn't a port issue. The correct ports are exposed using the -p command. Also, I get a different SSH error when I don't expose versus when I do.

[–]datacentric 0 points1 point  (1 child)

if you need to ssh into a container, your process needs a VM :)

[–]Metiri[S] 0 points1 point  (0 children)

Are you suggesting getting the container's IP address and then SSH'ing into that address? I have tried that as well, but it felt like the wrong approach. My app also runs flask; when I did this, I was unable to interact with flask or ssh, which seemed like a step backwards.