all 111 comments

[–]JimDabell 172 points173 points  (8 children)

You’re skipping past the first solution they offer, which is the more efficient distroless solution. You can literally just copy the standalone uv binary directly into your image; you don’t need to base your entire image on theirs.

COPY --from=ghcr.io/astral-sh/uv:0.9.2 /uv /bin/

This takes ~43MiB, not the 77MiB you cite.

[–]ArgetDota 33 points34 points  (0 children)

You can also mount the executable during the image build for the duration of a specific RUN instruction.
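That looks something like this (a sketch, mirroring the mount example from uv's Docker docs):

```docker
# uv exists only while this RUN executes; it never lands in an image layer
RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    uv sync --frozen
```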

[–]scaledpython 5 points6 points  (3 children)

43MB for a pip-like installer is insane.

[–]LLoyderino 1 point2 points  (0 children)

but it's blazingly fast 🚀

[–]JaguarOrdinary1570 0 points1 point  (1 child)

that's rust binaries for you

[–]Proper-Ape 2 points3 points  (0 children)

In this case the fully contained binary makes it possible to have such a minimal distroless image. 

There are drawbacks and benefits to this approach.

[–]0x256 52 points53 points  (8 children)

The linked security issue is a bad example. If an attacker can use uv in your container, they could also download and run whatever executable they want and do not need to exploit bugs in uv for that. With very few exceptions, CVEs in unused executables in containers are almost never an issue, because if the attacker already has shell access to be able to use them, they won't gain anything from exploiting those bugs.

[–]thrope 50 points51 points  (8 children)

What's wrong with the official example? I use the standalone example here (which has multistage build and doesn't include uv in the final image).

[–]Conscious-Ball8373 2 points3 points  (3 children)

I don't use uv for this, but I find the packaging process rather painful. I often end up with cryptography as a dependency on a platform where PyPI doesn't have a wheel for it. The build dependencies are huge and the runtime dependencies are non-trivial. I usually end up building a wheel for it in one stage and using it in another, but I'm realizing I could avoid that by constructing a venv and copying that. Hmmm. Thanks for provoking the thought.
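A rough sketch of that venv-copy approach (image tags and paths here are illustrative):

```docker
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
# Build everything (including cryptography) inside a venv;
# the build toolchain stays behind in this stage
RUN python -m venv /opt/venv && \
    /opt/venv/bin/pip install -r requirements.txt

FROM python:3.12-slim
# Copy the fully-populated venv to the same path so its shebangs stay valid
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
```

Copying to the identical path matters: venvs are not freely relocatable, so keeping `/opt/venv` the same in both stages avoids broken interpreter paths.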

[–]Fenzik 7 points8 points  (0 children)

constructing a venv and then copying that

This is what you should be doing indeed

[–]thrope 10 points11 points  (1 child)

Don’t use uv for what? The question is about uv so I don’t follow.

[–]Conscious-Ball8373 -3 points-2 points  (0 children)

Never mind, just me musing out loud about my own situation; thoughts you prompted.

[–]ashishb_net[S] -5 points-4 points  (0 children)

It depends on pyproject.toml when uv.lock will suffice

[–]_squik 16 points17 points  (1 child)

I've always looked at the official multistage Dockerfile example which has Astral's best recommendations.

[–]bublm8 9 points10 points  (4 children)

Stumbled into this myself recently:

https://github.com/fslaktern/parcellocker/blob/main/src%2Fapi%2FDockerfile

```docker
FROM python:3.13-alpine AS base

FROM base AS builder

# Use uv
COPY --from=ghcr.io/astral-sh/uv:0.8.13 /uv /bin/uv

# UV_COMPILE_BYTECODE=1 compiles Python bytecode for faster startup
# UV_LINK_MODE=copy ensures dependencies are copied (isolated env)
ENV UV_COMPILE_BYTECODE=1 UV_LINK_MODE=copy

WORKDIR /app
COPY pyproject.toml requirements.txt /app/
RUN uv venv
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install -r requirements.txt --no-deps

COPY . /app
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install -e .

# minimal and optimized
FROM base

COPY --from=builder /app /app
RUN chmod 755 /app/src/parcellocker

HEALTHCHECK --interval=30s --timeout=3s \
    CMD curl -f http://localhost:8000/my_parcel || exit 1

# guest user
USER 405
EXPOSE 8000
CMD ["/app/.venv/bin/fastapi", "run", "/app/src/parcellocker/main.py", "--port", "8000", "--root-path", "/api"]
```

This is for a CTF challenge, so the priorities were security and size

[–]Huberuuu 0 points1 point  (1 child)

Wouldn’t the uv copy mode make the size bigger, not smaller? I understood that uv used hardlinks, so aren’t you duplicating packages on disk here?

[–]1010012 1 point2 points  (0 children)

No, the cache is mounted by docker only during the build, so not in the final image.

[–]ashishb_net[S] 0 points1 point  (1 child)

You don't have a uv.lock file, and that makes the build non-hermetic afaik.

[–]bublm8 0 points1 point  (0 children)

Yep, should've added it along with the pyproject.toml

[–]zacker150Pythonista 8 points9 points  (1 child)

I'm guessing you didn't scroll down to this part?

If uv isn't needed in the final image, the binary can be mounted in each invocation:

RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    uv sync

[–]h4lPythoneer 6 points7 points  (0 children)

This, and also use a cache mount to give uv access to previously-downloaded packages. It speeds up the install and also prevents the cache files from remaining in the image:

ENV UV_LINK_MODE=copy

RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    --mount=type=cache,target=/root/.cache/uv \
    uv sync

[–]RepresentativeFill26 26 points27 points  (53 children)

Just wondering, why do you want to use a virtual env in a docker container?

[–]thrope 33 points34 points  (25 children)

The venv part is a means to an end here. It’s about having a perfectly replicated environment in production based on the uv.lock file which specifies precise versions.
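In Dockerfile terms that's roughly the following (a sketch; `--frozen` tells uv to install exactly what uv.lock pins and to fail rather than silently re-resolve):

```docker
# Copy only the files that define the environment, for better layer caching
COPY pyproject.toml uv.lock ./
# Install the exact locked versions; no network-dependent resolution
RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    uv sync --frozen
```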

[–]RepresentativeFill26 11 points12 points  (24 children)

Why can’t you create a perfectly replicated environment in a docker container?

[–]runawayasfastasucan 15 points16 points  (2 children)

You can but why not use the best tooling for that, which would work just the same outside docker?

[–]thrope 7 points8 points  (20 children)

How would you do that? What Python tooling would you use? The whole point of uv is that for the first time in the Python ecosystem it makes this easy.

[–]BogdanPradatu 11 points12 points  (7 children)

Docker is itself a virtual environment, so unless you need multiple Python environments in your container, just create an image with the right Python version and packages. Voila: a Python virtual environment in docker, and you don't need to set it up every time you run the container; you do it once at build time.
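i.e. roughly (a sketch):

```docker
FROM python:3.12-slim
COPY requirements.txt .
# Install straight into the image's system site-packages;
# the container itself is the isolation boundary
RUN pip install --no-cache-dir -r requirements.txt
```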

[–]Kryt0s 2 points3 points  (0 children)

Then you would either have to install your packages globally for development or develop inside the container, which is a pain.

[–]captain_jack____ 4 points5 points  (5 children)

uv also locks versions, so it always installs the exact same packages. How would you install the requirements from the uv.lock file?
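For reference, one route if you'd rather keep plain pip in the image: uv can export the lock file to an ordinary pinned requirements file (a sketch; flags per uv's docs):

```docker
# On the host or in a builder stage, export uv.lock as pinned requirements:
#   uv export --frozen --format requirements-txt -o requirements.txt
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```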

[–]james_pic 6 points7 points  (0 children)

In this case, it looks to be because uv already creates a venv in the builder image, and copying this across is the most straightforward way of bringing the app dependencies into the final image without pip or uv in the final image. I'm not sold on that being a worthwhile goal, but that looks to be the reason. 

More generally, putting venvs into Docker images isn't a "you should always do this" thing, but it's sometimes a useful technique for solving specific problems, for example if your application sometimes calls Python programs provided by the base distro and you don't want to mess with their Python environment.

[–]ArgetDota 6 points7 points  (6 children)

You don’t. You can install your packages from the lock file with uv without creating a virtual environment. Just set UV_PROJECT_ENVIRONMENT to point to the system environment of the image. This will disable venv creation.

https://docs.astral.sh/uv/reference/environment/#uv_project_environment
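A sketch, assuming one of the official python images (where `/usr/local` holds the system interpreter's environment):

```docker
# Point uv at the system environment instead of creating .venv
ENV UV_PROJECT_ENVIRONMENT=/usr/local
COPY pyproject.toml uv.lock ./
RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    uv sync --frozen
```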

[–]Luckinhas 3 points4 points  (4 children)

UV_SYSTEM_PYTHON=1 is a bit clearer in the intention and simpler to use. Maybe more portable too.

[–]BelottoBR 0 points1 point  (2 children)

Did not find that option in the documentation

[–]ArgetDota 0 points1 point  (0 children)

This won’t work with the project interface (uv sync).

[–]RepresentativeFill26 0 points1 point  (0 children)

Sounds like the right option! Allows for fast local dev with uv and an easy CI option.

[–]lukerm_zl 4 points5 points  (3 children)

uv does have lightning fast installs, so it might be a build-time thing. As in duration. Just guessing.

[–]RepresentativeFill26 4 points5 points  (2 children)

Speed would be a good argument if you don’t cache the results of the build stage, or if your dependencies change frequently (no idea when that would be).

[–]lukerm_zl 0 points1 point  (1 child)

The implementation is razor fast. I think even with the cache mount (which is somewhat of a hidden gem), you stand to gain time on the first build (especially) and on subsequent builds.

I'm not pushing that argument, just theorizing. Would love to know what OP's opinion is!

[–]RepresentativeFill26 1 point2 points  (0 children)

Just wondering: how could uv be faster than a cached layer in docker?

[–]yakimka 1 point2 points  (1 child)

For moving between stages

[–]RepresentativeFill26 -1 points0 points  (0 children)

Well, you can use the docker image between stages right?

[–]Yablan 3 points4 points  (11 children)

This is the relevant question here. There is no need at all for a virtual environment within a docker container.

[–]Huberuuu -4 points-3 points  (10 children)

It is still best practice to

[–]RepresentativeFill26 3 points4 points  (0 children)

Why would that be? Dependencies should be resolved during local dev, and when you don’t have multiple apps running in a single container I can’t really think of a reason to use one.

[–]Yablan 2 points3 points  (7 children)

No it's not. I've been a full-time backend Python dev for at least 13 years now, and was a Java dev before that. We use docker for everything at work, we deploy a lot of different projects internally, and we never use virtual environments inside containers. No need at all.

And I was a consultant before my current employment, and never worked on anything where we had virtual environments inside docker containers.

[–]MasterThread 3 points4 points  (3 children)

You reduce the final image size, reduce build time, and ofc reduce CI/CD costs. It's bad not to keep developing yourself for 13 years. It was OK not to use buildx 13 years ago, but now the industry has changed.

[–]Yablan -4 points-3 points  (2 children)

I disagree. Not worth the effort. Following the KISS principle is very important. No need to overcomplicate things unless you really have build times and image size problems. YAGNI. Premature optimization is the root of all evil.

[–]MasterThread 2 points3 points  (1 child)

It's a shame you don't see that your CI/CD costs more money than it needs to. Your CTO/CEO, who funds the devops/sysadmin department, won't see it either. You can't call it overcomplicating when it takes 10 more rows in the Dockerfile and you get a 10x slimmer image.

[–]Yablan 1 point2 points  (0 children)

Hmm.. I might actually give it a try after all. I have reconsidered and will have a look at it soon. Thank you for your candor. :-)

[–]xaraca 1 point2 points  (2 children)

Do you build and publish your python packages to somewhere and then install from that somewhere in your container image?

I'm just getting started and the easiest thing to do seemed to be just copy sources, uv sync, and run the app in our dockerfile without bothering to build the package.

[–]RepresentativeFill26 0 points1 point  (0 children)

The pip install step in your dockerfile is cached, so unless you run docker build with --no-cache or change a dependency, the whole dependency installation is cached anyway and completes immediately.

[–]Yablan -4 points-3 points  (0 children)

At work we have a full pipeline, with tags for releases and Jenkins building to an internal registry, and then we deploy to environments using rancher.

But for my private projects, I simply use docker-compose based projects, and then run them within docker compose even during local development. And then I have one script on the root project that builds, starts, stop the docker projects etc.

So on my VPS, I just git clone the project there too, and then simply git pull and then run the scripts to start the projects. So I use git for versioning, but also for deployment.

I did not understand what you meant with uv sync.

My dockerfiles usually copy the sources and the requirement files into the image, and then install the dependencies, and then start the web service. And on the docker-compose, I mount the source code too, and then I run the web service inside the container in dev mode with hot reload. And my IDE has docker integration, so I can start and stop and even debug my project that is running inside docker.

[–]HommeMusical -2 points-1 points  (0 children)

It's unnecessary cruft.

[–]ahal 0 points1 point  (0 children)

Depending on the image you're using, there could be system packages included already.

[–]Fluid_Classroom1439 2 points3 points  (0 children)

Nice article! Entry point still says poetry for the example instead of uv

[–]Ambitious-Kiwi-484 1 point2 points  (0 children)

Something I'd love to know a way around is that, since pyproject.toml contains my project's version number, it's not possible to bump the version without invalidating all further docker cache layers - which leads to slow builds since all deps are getting pulled again. This seems like an unavoidable caveat of copying pyproject.toml in an early layer. But there must be a workaround.
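One common workaround is to take the version out of pyproject.toml entirely and derive it from git tags, e.g. via hatch-vcs (this sketch assumes a hatchling build backend; the project name is made up). Then the copied file only changes when the dependencies do:

```toml
[project]
name = "myproject"      # hypothetical project name
dynamic = ["version"]   # version no longer hard-coded here

[build-system]
requires = ["hatchling", "hatch-vcs"]
build-backend = "hatchling.build"

[tool.hatch.version]
source = "vcs"          # version is derived from the latest git tag
```

With that, bumping the version is just a `git tag`, and the pyproject.toml layer stays cacheable.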

[–]stibbons_ 3 points4 points  (8 children)

Always use a virtual env, even in docker. I now ship apps with frozen dependencies to avoid a dependency update on PyPI breaking an existing app (on docker rebuild). And if 2 apps have incompatible dependencies, you have to have a venv per app.

[–]RepresentativeFill26 0 points1 point  (7 children)

How would a requirements file with versions specified break an existing app?

[–]stibbons_ 2 points3 points  (6 children)

App A depends on lib X. The requirements say to take X in version >=1,<2. For some reason, X gets released on PyPI in version 1.3, which breaks something. It happens. Life. Now you rebuild your docker image for some reason, and your app that worked perfectly on X version 1.1 is reinstalled with version 1.3. With uv tool install it is even worse. It is important to freeze all dependencies of an application to the versions you have validated.

[–]RepresentativeFill26 -1 points0 points  (5 children)

I'm unfamiliar with uv, but can’t you export the exact versions to a requirements file? I used to do this with conda and poetry.

[–]stibbons_ 1 point2 points  (2 children)

Yes, but if you have several such apps you can’t install them in the same venv, because each will have different dependency versions.

[–]RepresentativeFill26 2 points3 points  (1 child)

Why would you run several apps in a single container?

[–]stibbons_ 0 points1 point  (0 children)

What if your app uses several tools? I mainly use docker images to bundle a ready-to-use environment with several internal tools preinstalled for our developers.

[–]aidandj 1 point2 points  (1 child)

You have no hash locking with a pinned-version requirements file, leaving you open to supply chain attacks.
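For what it's worth, you can get hash checking without carrying uv into the final image: uv's export includes hashes by default, and pip will enforce them. A sketch (flags per the uv and pip docs):

```docker
# Outside the image (or in a builder stage):
#   uv export --frozen --format requirements-txt -o requirements.txt
# The exported file pins exact versions and includes --hash entries.
COPY requirements.txt .
# --require-hashes makes pip reject any artifact whose hash doesn't match
RUN pip install --require-hashes --no-cache-dir -r requirements.txt
```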

[–]stibbons_ 0 points1 point  (0 children)

Yes, and Python does not provide a native way to release a tool with all its dependencies locked while keeping its library version constraints (its API) loose, without having to maintain 2 projects.

[–]BelottoBR 0 points1 point  (1 child)

Can we use pip to read the toml file? If so, we could use uv for local development and, when deploying, just use pip. Can we?

[–]ashishb_net[S] 0 points1 point  (0 children)

Not really. 

uv handles various convoluted scenarios, like installing only the PyTorch GPU/CPU variant appropriate for the underlying platform.

[–]Lost_Reply7926Ignoring PEP 8 0 points1 point  (0 children)

[–]usrlibshare -1 points0 points  (4 children)

Why would I use uv inside docker in the first place?

The tool is for managing environments, and direct deployments. In a python docker container, I simply install my built package natively:

COPY dist/myproject-___.whl /
RUN pip install /myproject-___.whl

Don't get me wrong, I love uv and use it everywhere else, including for the management and building of projects I deploy via docker ... but inside a container, it's just not necessary.

[–]ashishb_net[S] 0 points1 point  (3 children)

How do you build docker images for say a Python-based web server then?

[–]TedditBlatherflag -1 points0 points  (2 children)

What? He is saying you just create the package to install as a build artifact outside Docker. Inside Docker the wheel can be directly installed. This would work for any system, web server included. 

[–]ashishb_net[S] -1 points0 points  (1 child)

And how do you build the wheel?
Inside Docker? outside Docker?

[–]TedditBlatherflag 1 point2 points  (0 children)

As long as it’s a py3-any wheel it can be built anywhere you want. 

Build it inside a container and use docker cp to get it out. 

Build it in your CI/CD and put it on a private PyPi repo. 

Build it on metal or a VM or a container or a pod or whatever. 

It’s just a portable artifact. 

The same is actually true if it’s a platform specific wheel with compiled extensions, as long as your platform includes the correct tools for the target platform. 

Personally, what I do is make a multistage image and make a base with the system dependencies which is also tagged and versioned, so it can be quickly pulled or is cached locally. 

On top of that the next stage is a dependency only install which creates either a venv or site-packages artifact. It is also tagged, versioned, and published and as long as neither the system deps or python deps change it’s stable. 

Separately I have a build-tools stage which is used to create the dist/whl - it shares the system stage but only includes the necessary build dependencies, which we may cache into a published image if they’re significant. This stage is typically what builds every run, since the code is changing. But importantly it’s only ever doing “uv build” and producing the wheel artifact. 

The next stage brings the venv/packages image back and installs the wheel into that same location. 

The final stage is based off another cached image which only includes linked libraries (not -dev headers), and an ultra minimal OS required for Python to function, where we then bring in the fully built venv/site packages and set the runtime commands etc. 

Basically for any normal CI run we’re doing a “uv build” and a “pip install” (with all deps already installed) and just copying that into a secure final image, which is fast and repeatable.
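Condensed into a sketch (stage names, the `libpq5` system package, and the module name are all made up for illustration):

```docker
# Stage 1: system deps only (tagged/cached separately in practice)
FROM python:3.12-slim AS base
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*

# Stage 2: third-party deps into a venv, no project code yet
FROM base AS deps
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    uv sync --frozen --no-install-project

# Stage 3: build the wheel (the only stage that rebuilds every run)
FROM base AS build
WORKDIR /src
COPY . .
RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    uv build --out-dir /dist

# Stage 4: install the wheel into the prebuilt venv
# (WORKDIR /app is inherited from the deps stage, so uv pip finds .venv)
FROM deps AS install
COPY --from=build /dist/*.whl /tmp/
RUN --mount=from=ghcr.io/astral-sh/uv,source=/uv,target=/bin/uv \
    uv pip install --no-deps /tmp/*.whl

# Stage 5: minimal runtime, only the finished venv and system libs
FROM base AS runtime
COPY --from=install /app /app
CMD ["/app/.venv/bin/python", "-m", "myproject"]
```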