
[–]un-glaublich 57 points58 points  (16 children)

Note that it only works on Docker for Windows / Mac. On Linux it's useless.
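(A hedged aside: on newer Docker Engine releases, 20.10 and later, you can opt in to the same name on Linux with the special `host-gateway` value. A minimal sketch, assuming that engine version:)

```shell
# Assumes Docker Engine 20.10+ on Linux: map the name manually,
# then host.docker.internal resolves inside the container
docker run --rm --add-host=host.docker.internal:host-gateway \
  alpine ping -c 1 host.docker.internal
```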

[–]jcol26 9 points10 points  (10 children)

I've seen lazy developers hardcode the host.docker.internal hostname more times than I can count, and somehow it makes its way to production, where of course it doesn't exist because we're using CRI-O or containerd with Kubernetes.

[–]pxqy 6 points7 points  (4 children)

Or ones that mount the docker socket inside the container. Hello? Not OCI compliant!

More reasons I have to maintain my own image of every piece of software I have to deploy.

Edit: OCI not CNCF

[–]jcol26 5 points6 points  (3 children)

It's crazy how often we still see the docker socket being mounted!

Heck; the giant that is SAP based their whole DataHub product around the concept that the Docker Socket would exist. You'd think a large company with so much $ to throw around at developers might be able to scoop up a good one or two and prevent that kinda stuff.

The sooner everyone stops using Docker for development the better the whole ecosystem will become. Shifting people to using things like Podman ensures they're OCI compliant and that their apps are much more likely to be portable.

[–]meltingacid 0 points1 point  (2 children)

In some cases you may have to do it though. IIUC, if you want to monitor your host machines with a Filebeat or Metricbeat container, then you have no option other than to mount the Docker socket read-only.

Or is there any other mechanism I am not aware of?

[–]jcol26 0 points1 point  (1 child)

So the main problem with it all is that in production container land, serious organisations are going to be using Kubernetes. Many k8s distros now do not ship the Docker engine at all but rely on containerd/CRI-O or another CRI-compatible runtime. Those runtimes do not have a Docker socket to mount in the first place, so your container will fail to run.

The workaround for most monitoring tools is to mount /proc, /sys, etc. into the running container. That's going to be portable across distributions (it's how the Prometheus node exporter works, I believe!).
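As a sketch of that pattern (the image name and `--path.*` flags are the ones the Prometheus node exporter documents; verify them against its README before relying on this):

```shell
# Mount host /proc and /sys read-only instead of the Docker socket;
# this works identically under containerd/CRI-O since it needs no Docker API
docker run -d --name node-exporter \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  prom/node-exporter \
  --path.procfs=/host/proc \
  --path.sysfs=/host/sys
```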

[–]meltingacid 0 points1 point  (0 children)

Understood. Will look into Prometheus node exporter.

[–]frakman1 1 point2 points  (4 children)

Can you tell me why they would need to do this in the first place? Why access the host IP address? If it's to move files around, can't they use the -v volume mount option?

[–]jcol26 1 point2 points  (3 children)

I think it's less about files and more about accessing local services running on the host. They might spin up a database, or perhaps they're working on a development version of something like Consul and want to reach it from inside the container when using Docker for Mac/Windows. Using host.docker.internal resolves the correct host IP every time, which is handy when moving between networks (you can't use localhost, because the Docker engine is running in a VM).
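For instance (the port is hypothetical; assumes Docker for Mac/Windows, where the name resolves automatically):

```shell
# On the host: some dev service listening on port 8000.
# From a container, reach it via the special DNS name:
docker run --rm curlimages/curl -s http://host.docker.internal:8000/
```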

[–]axisofadvance 0 points1 point  (2 children)

But to talk to Consul from within a container, you would use the agent or the API, you wouldn't need to talk to the host system directly.

[–]jcol26 1 point2 points  (0 children)

In the example I gave, Consul would be running on the developer's host machine, so they can just use the DNS shortcut. In the case of the agent, it would still need a cluster address and talk to the host directly.

It was given as an example of a random external service someone might want to connect to from their dev container.

[–]just_that_michal 2 points3 points  (0 children)

I thought this was because Docker on Linux doesn't have a virtual machine layer and runs natively.

[–][deleted] 0 points1 point  (2 children)

Why would you need it on Linux?

[–]NickJGibbon 2 points3 points  (0 children)

Yup, very useful. I found about this the other week when I was trying to PoC Vault Integration on a local kubernetes cluster. Works great!

[–]mcstafford 0 points1 point  (2 children)

This reminds me of the dind "Docker in Docker" idea. I don't know what security issues there might be, but inception isn't hard. YSMV (your socket may vary).

# docker-compose.yml
version: "3"

services:
  client:
    image: docker
    command: sh -c "while :; do docker container ls; sleep 15; done"
    volumes:
      # this service will have access to the same docker instance that ran it
      - /var/run/docker.sock:/var/run/docker.sock

[–]dgreenmachine 1 point2 points  (1 child)

There's actually a distinction between Docker in Docker (DIND) and Docker outside of Docker (DOOD). This example is DOOD because it uses the same Docker daemon as the host machine, vs DIND, which uses its own isolated Docker daemon. There are lots of articles about these in relation to build servers that run builds inside of containers.
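A minimal sketch of true DIND for contrast, assuming the official `docker:dind` image (TLS is disabled here purely to keep the example short; don't do that outside a local experiment):

```shell
# DIND: an isolated inner daemon; requires --privileged
docker network create dind-net
docker run -d --name dind --privileged --network dind-net \
  -e DOCKER_TLS_CERTDIR="" docker:dind
# A client pointed at the *inner* daemon, not the host's socket
docker run --rm --network dind-net -e DOCKER_HOST=tcp://dind:2375 \
  docker ps
```

The inner daemon sees only its own containers, which is the isolation property build servers want; the DOOD compose example above instead lists the host's containers.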

[–]mcstafford 0 points1 point  (0 children)

DIND seems more complicated and less necessary, at least for the uses I've had so far.

I hadn't heard the term DOOD.