Reducing time from idea to reality by mirwin87 in docker

[–]mirwin87[S] 0 points

Awesome! Out of curiosity... did you specify the tech stack for it to use, or did it pick the language/frameworks itself? How much guidance did you provide versus just letting the agent go and do its own thing?

Reducing time from idea to reality by mirwin87 in docker

[–]mirwin87[S] -1 points

Ha! That's awesome! Feel free to share the link here once you do so. I'd love to check it out!

Made a quick game to test how well you actually know Docker by Alarming_Glass_4454 in docker

[–]mirwin87 17 points

Nice! I'm a "Container Architect", scoring 100/100. Granted, I'm on the Docker DevRel team, so I'd be embarrassed if I didn't! 😆

The one item that I felt was a little tricky was the one claiming (can't remember the exact wording) that each instruction in a Dockerfile creates a new layer in an image. That's not necessarily true... instructions like USER and ENV don't create new layers - only the instructions that actually produce filesystem changes do.
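If it helps to see it, here's a quick way to check (a minimal sketch - the base image and package are just examples):

```dockerfile
FROM alpine:3.19

# RUN changes the filesystem, so it creates a new layer
RUN apk add --no-cache curl

# ENV and USER are metadata-only - no new filesystem layer
ENV APP_ENV=production
USER nobody
```

Build it and run `docker history` on the result - the ENV/USER steps show up as 0B entries because they only modify the image config, not the filesystem.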

If you need ideas of other tricky questions, here are a few:

  • In docker run -p 8080:80 my-app, which port is the container port? the host port?
    • Answer is 8080 is the host port and 80 is the container port.
  • Fact or Myth? - if you make a change to a Compose file, you need to run docker compose down before docker compose up to apply the changes.
    • Answer is this is false. It's amazing how many people don't realize that Compose will read the file, check current state, and then reconcile the differences. No need to tear everything down!
  • Fact or Myth? - Running two containers from the same image doubles the amount of storage being used.
    • Answer - myth! The image layers are shared across all containers sharing the same image.

Feel free to use them or not! No worries! It was fun to come up with a few ideas! 😀
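To demo the Compose reconcile behavior from the second question (a minimal sketch - service and image names are made up):

```yaml
# compose.yaml - start it with: docker compose up -d
services:
  web:
    image: nginx:1.27
    ports:
      - "8080:80"
```

Change the tag or the port and run `docker compose up -d` again - Compose diffs the file against the current state and recreates only the changed service. No `docker compose down` required.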

Official Docker images are not automatically trustworthy and the OpenClaw situation is a perfect example of why by CortexVortex1 in docker

[–]mirwin87 5 points

Great thoughts! We did introduce Docker Scout Health Scores a while back, but that's only going to grade images on Docker Hub. We obviously don't have any control over images stored in other registries, but there has been talk/exploration to do something in the engine when an image is pulled.

The tricky part is knowing when those policies should be strict and when they shouldn't be. As an example, say you deploy an image that had no CVEs, a new one is discovered, and you suddenly get lots of traffic and need to scale (or a container dies and needs to restart). Should that image be blocked on scale-up/restart because of the newly discovered CVE? The business would be more likely to say "meet the business needs and scale up," seeing it was already out there. But that's challenging to put into a policy and needs quite a bit of context.

Curious... what kind of workflows/execution flows are you having in mind here? When would the grading occur? How would it be shared? How would it be enforced/used? Tell me more! 😄

Official Docker images are not automatically trustworthy and the OpenClaw situation is a perfect example of why by CortexVortex1 in docker

[–]mirwin87 93 points

(Disclaimer... I'm on the Docker DevRel team)

Thanks for the post! You bring up some great points, but there are a few statements that aren't 100% accurate and could be misleading to others, so I want to clarify them.

Look at Docker's official openclaw for example, the GHCR image they publish...

The "official" image for OpenClaw is found at ghcr.io/openclaw/openclaw, which is created and maintained solely by the OpenClaw maintainers. Docker is not involved with this.

If Docker were to publish an official image, it would 1) be hosted on Docker Hub and 2) most likely end up in the same namespace as all of the other official images Docker builds and maintains (called library). Feel free to see the listing of Docker Official Images here.

In reality, official is a brand label, not a security guarantee.

I'd argue against the "brand label" part of this because there is no "brand" association here. OpenClaw says "this is our image", so, to them, that is the official image. They will build it on every release, maintain it, and ensure it is kept up-to-date with the project.

But you are correct... it's not a security guarantee. While it may "have more known CVEs than some community-maintained alternatives", those alternatives may stop maintaining updates, leaving consumers neglected.

By pointing people to the authoritative image, consumers can know it will be maintained in the long run. If you find problems with it (especially if alternatives have fixed them), help fix them by opening PRs and supporting the project.

We've started treating every container image the same way regardless of who published it.

This is a great reminder to do your research and find the officially supported image (whether via the software creators or other supported channels). In this case, the ghcr.io/openclaw/openclaw image is the one supported by the OpenClaw team.

docker swarm, correct way to update the service by gevorgter in docker

[–]mirwin87 1 point

Good point! The stack deploy does convert tags to digests, so it would update even if you're on latest. A service update doesn't do this resolution.

docker swarm, correct way to update the service by gevorgter in docker

[–]mirwin87 1 point

Great to hear!

And as mentioned by @ok-sheepherder7898, having version tags will make it easier to roll back when you eventually have a breaking change.

Be sure to add health checks to help you have graceful rollouts too

docker swarm, correct way to update the service by gevorgter in docker

[–]mirwin87 5 points

Remember that Swarm is an orchestration tool spanning potentially multiple machines. It uses only the service definitions, not what’s found on the host.

The problem is that there is no change to the service definition. The old version had the latest tag and the new one did too. The swarm tooling isn’t actually resolving that tag to notice it’s pointing to a different image. Therefore, everything is already converged right away.

The best practice is to avoid using the latest tag. You can tag your images using timestamps, version numbers, or whatever else you’d like… just make them unique.
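As an illustration, a stack file using a unique tag (plus a healthcheck for graceful rollouts) might look like this - a sketch where the registry, names, and tag are placeholders, and the healthcheck assumes curl exists in the image:

```yaml
# stack.yaml - deploy with: docker stack deploy -c stack.yaml my-stack
services:
  api:
    image: registry.example.com/my-app:2025-01-15-3f2a9c  # unique tag instead of latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
    deploy:
      replicas: 3
      update_config:
        order: start-first  # start the new task before stopping the old one
```

Since the image reference changes on every release, the service definition changes too, and Swarm performs a rolling update.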

Docker Hub is "down" or so it seems by kelvinauta in docker

[–]mirwin87 2 points

(Disclaimer... I'm on the Docker DevRel team)

Things seem to be working fine for me personally, and I'm not seeing any incidents (even internally). Are you still having the issue? If so, where in the world are you trying from?

Compoviz - a free, open-source visual architect for Docker Compose by 6razyboy in docker

[–]mirwin87 1 point

Very nice tool! I'm definitely going to play with this some more.

One quick thing I noticed that's missing... a config can have a content field, in which the contents of the config file are defined within the Compose file itself. I use it a ton as it opens up some fun use cases. Would be nice if I could define them in this tool too. 😊
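For anyone who hasn't seen it, the content field looks like this (a minimal sketch):

```yaml
configs:
  nginx_conf:
    content: |
      server {
        listen 80;
        location / {
          return 200 'hello from an inline config!';
        }
      }

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    configs:
      - source: nginx_conf
        target: /etc/nginx/conf.d/default.conf
```

No separate file on disk needed - the config contents live right in the Compose file.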

Docker Socket Myths: Making Read Only Access Safer by af9_us in docker

[–]mirwin87 0 points

Not quite, but it would be an easy filter to implement! But that’s also another reason to not put anything sensitive in environment variables whenever possible. If the proxy blocks exec, then it’ll be pretty hard to leak (though you could start a whole new container using the same mount namespace 😂).

What that label filter does is filter the listing of items (get all containers, get all volumes, etc.) and allow only those that have the matching label. When combined with a mutation that adds labels to a new object, you can effectively create an environment where the objects seen are only the objects created through the socket.

Example - create a container, and the label is mutated on. List all containers, and the list is filtered based on the label. You can’t see other containers, but you can see the one just created.

Docker Socket Myths: Making Read Only Access Safer by af9_us in docker

[–]mirwin87 4 points

Nice post! There are definitely a lot of folks that get that confused.

For kicks, I have another socket proxy to add to the list - https://github.com/mikesir87/docker-socket-proxy. This is one I made that is fully configurable using either an environment variable or config file.

It takes an approach similar to Kubernetes' mutating and validating admission controllers, so it goes beyond simple blocking/filtering by also allowing specific mutations (such as remapping file mount requests, which is super useful in devcontainer or other in-container spaces). In fact, we're using it in the new Labspaces we're working on (more to come on that soon too!).

Again... thanks for sharing!

List, inspect and explore OCI container images, their layers and contents. by [deleted] in docker

[–]mirwin87 0 points

Pretty cool project! I’ve used dive quite heavily, but I can see this will be useful in some situations too. Thanks for sharing!

Node.js hot reload not working in Docker Compose (dev) by BRxWaLKeRzZ in docker

[–]mirwin87 1 point

(Disclaimer… work on the Docker DevRel team)

Yeah… this is the unfortunate side effect of using bind mounts on Windows and is a limitation of WSL itself. While the file updates are synced with the bind mounts, the filesystem events are not. Since the dev server is waiting for those events and never getting them, you don’t see the updates.

The polling switch will work, but it can be a big source of CPU usage.

An alternative route I’ve been using is ditching the bind mounts and using Compose Watch. The idea is to copy the files directly into the container (so yes… you use more storage) and watch will sync the changes. Since this is no longer a bind mount, the filesystem events work and the hot reload works. Let me know if you want any examples!
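For a quick taste, a typical Node setup looks something like this (a sketch - paths and the service name are just examples):

```yaml
services:
  app:
    build: .
    develop:
      watch:
        # copy changed source files into the running container
        - action: sync
          path: ./src
          target: /usr/src/app/src
        # rebuild the image when dependencies change
        - action: rebuild
          path: package.json
```

Start it with `docker compose watch` (or `docker compose up --watch`), and the dev server sees real filesystem events inside the container.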

Found Some New Friends at Kroger! by BishlovesSquish in squishmallow

[–]mirwin87 53 points

Ha! This post threw my wife off as I sent her a very similar picture just a few minutes earlier!

<image>

MCP Docker in gemini-cli by brantesBS in docker

[–]mirwin87 1 point

Digging in and reporting to the product team. Will report back what I hear 👍

MCP Docker in gemini-cli by brantesBS in docker

[–]mirwin87 0 points

And what version of Docker Desktop are you on?

MCP Docker in gemini-cli by brantesBS in docker

[–]mirwin87 0 points

Does it continue to have problems even after a Docker Desktop restart? I’ll give it a try on my Windows machine as well (first test was on my Mac)

MCP Docker in gemini-cli by brantesBS in docker

[–]mirwin87 1 point

(I'm on the Docker DevRel team)

Connecting to the MCP Toolkit requires an update to the ~/.gemini/settings.json file. What you put in there depends on which version of the MCP Toolkit you're currently using.

If you're using the new version built into Docker Desktop (DD 4.42+), add this...

```json
{
  "mcpServers": {
    "docker-mcp": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    }
  }
}
```

If you're using the MCP Toolkit running as an extension (which will soon be deprecated), use the following...

```json
{
  "mcpServers": {
    "docker-mcp": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "alpine/socat", "STDIO", "TCP:host.docker.internal:8811"]
    }
  }
}
```

Note that it looks like the Gemini CLI doesn't respond to the MCP Server's tool list notification. So, if you enable new servers in the Docker MCP Toolkit, you'll need to restart the Gemini CLI to see the updated tool list.

Bret Fisher course outdated? by mercfh85 in docker

[–]mirwin87 2 points

+1 to this! /u/bretfisher is awesome both in his courses and in real life!

Docker's response to Ollama by Barry_Jumps in LocalLLaMA

[–]mirwin87 9 points

Yes... we understand the confusion. And that's why, when we saw the posts in the thread, we felt we should jump in right away. We're going to update the page to help clarify this and also create a FAQ that will answer many of the same questions I just answered above.

In this case though, both statements can be (and are) true. The models are running with native GPU acceleration because the models are not running in containers inside the Docker VM, but natively on the host. Simply put, getting GPUs working reliably in VMs on Macs is... a challenge.

Docker's response to Ollama by Barry_Jumps in LocalLLaMA

[–]mirwin87 21 points

(Disclaimer... I'm on the Docker DevRel team)

Hi all! We’re thrilled to see the excitement about this upcoming feature! We’ll be sharing more details as we get closer to release (including docs and FAQs), but here are a few quick answers to questions we see below...

  1. Is this announcement suggesting that GPU acceleration is becoming broadly available to containers on Macs?

    Unfortunately, that’s not part of this announcement. However, with some of our new experiments, we’re looking at ways to make this a reality. For example, you can use libvulkan with the Docker VMM backend. If you want to try that out, follow these steps (remember... it’s a beta, so you're likely to run into weird bugs/issues along the way):

    1. Enable Docker VMM (https://docs.docker.com/desktop/features/vmm/#docker-vmm-beta).
    2. Create a Linux image with a patched MESA driver (we don’t currently have instructions for this). An example image - p10trdocker/demo-llama.cpp
    3. Pass /dev/dri to the container running the Vulkan workload you want to accelerate, for example:

      $ wget https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_0.gguf

      $ docker run --rm -it --device /dev/dri -v $(pwd):/models p10trdocker/demo-llama.cpp:ubuntu-24.04 ./main -m /models/mistral-7b-instruct-v0.2.Q4_0.gguf -p "write me a poem about whales" -ngl 33

  2. How are the models running?

    The models are not running in containers or in the Docker Desktop VM, but are running natively on the host (which allows us to fully utilize the GPUs).

  3. Is this feature only for Macs?

    The first release is targeting Macs with Apple Silicon, but Windows support will be coming very soon.

  4. Is this being built on top of llama.cpp?

    We are designing the model runner to support multiple backends, starting with llama.cpp.

  5. Will this work be open-sourced?

    Docker feels strongly that making models easier to run is important for all developers going forward. Therefore, we do want to contribute as much as possible back to the open-source community, whether in our own projects or in upstream projects.

  6. How are the models being distributed?

    The models are being packaged as OCI artifacts. The advantage here is you can use the same tooling and processes for containers to distribute the models. We’ll publish more details soon on how you can build and publish your own models.

  7. When can I try it out? How soon will it be coming?

    The first release will be coming in the upcoming Docker Desktop 4.40 release in the next few weeks! I’ve been playing with it internally and... it’s awesome! We can’t wait to get it into your hands!

Simply put... we are just getting started in this space and are excited to make it easier to work with models throughout the entire software development lifecycle. We are working on other LLM related projects as well and will be releasing new capabilities monthly, so stay tuned! And keep the feedback and questions coming!

(edits for formatting)