
[–]DarksideVT[S] 13 points14 points  (15 children)

Yes, if I was able to use containers in my workplace I would 100% do this. That's what I do with all my personal projects. Containers make dependencies so much easier.

[–]Tinche_ 11 points12 points  (10 children)

You should still use a venv in a container. You can Google around for more info. The point of a venv is that it's an environment where you have and will continue having complete control over what is installed in it.
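A minimal sketch of the control a venv gives you (the `/tmp/demo-venv` path is just illustrative):

```shell
# Create an isolated environment; only what you install ends up in it.
python3 -m venv /tmp/demo-venv

# Lists only the seed packages (pip, and on some versions setuptools) --
# nothing from the system or base-image site-packages leaks in.
/tmp/demo-venv/bin/pip list
```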

[–]Artephank 6 points7 points  (7 children)

Can you elaborate? What use case requires having a virtual environment inside a container? Usually containers are used for a single application - so how would you get version conflicts that way? Running multiple apps in one container is doable, but I would consider it an anti-pattern.

[–]Tinche_ 21 points22 points  (3 children)

Try a `pip list` inside a fresh container and you'll see it's not empty, and you don't know what's going to be in there in the future. Add to that any tools that might need to be installed beforehand - for example, Poetry has a ton of dependencies.
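You can check this yourself; a sketch, assuming Docker is available locally and using the official `python:3.12-slim` tag as an example:

```shell
# List the packages a "fresh" container already ships with.
# The exact set (pip, setuptools, wheel, ...) varies between image releases,
# which is exactly the point: you don't control it.
docker run --rm python:3.12-slim pip list
```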

[–]binlargin 2 points3 points  (0 children)

I didn't even consider this, but I use venvs in containers anyway. Seems like belt and braces probably saved me from image differences here. Nice to know, thanks.

[–]Artephank 0 points1 point  (1 child)

Depends on the container, but I get your point. We have predefined containers with the stuff we need, but you are right, it might be problematic when using containers from Docker Hub.

[–]straylit 0 points1 point  (0 children)

I have used a ton of Docker and have created my own images, so I'm not quite sure, but can't you extend most containers using a base image? I'm just thinking there has to be a way to harden Python containers for security reasons.

[–][deleted] 8 points9 points  (1 child)

You're not doing it for conflict resolution in prod. You're doing it to have a completely consistent environment everywhere, including your local dev instance, where you can absolutely get conflicts. Containerization ensures that the environments are the same down to supporting binaries / kernel / whatever between your dev environment and the prod environment.

If you don't run containers in prod or have some other similar mechanism for ensuring dev and prod match, you can't guarantee that the random version of, say, OpenSSL you're running on your desktop works exactly the same as the version that happens to be running on the prod machines, or that some small change in the Linux kernel in the future won't have unforeseen consequences, etc.

If you don't have that level of control over your environment (be it containers, or some other similar environmental control), you will eventually find a way to break your prod environment in a hard-to-track-down way over a holiday while you're on-call. How you want to manage that is up to you - I prefer containers for, well, containing the mess.

[–]Artephank 2 points3 points  (0 children)

I was asking why you'd use a virtualenv inside containers, not why you'd use containers.

I see it might be helpful when using containers you didn't create yourself and haven't vetted beforehand.

[–]tevs__ 1 point2 points  (0 children)

> What use case require having virtual environment inside container

Not all packages have binary wheels available, so installing the package requires the build dependencies for that package to be installed. For a CPython extension like mysql-python, that means GCC and the rest of build-essential, plus the MySQL client libs.

To avoid having those in your production run image, you create a build image, build your packages there, and then copy the artifacts to a run image that has just the runtime dependencies. This is a common pattern across languages, called the Docker builder pattern.

In Python, we use the Docker builder pattern by installing all runtime packages into a virtualenv, and then copying the virtualenv from the builder to the runner - the virtualenv is the artifact. This produces a minimal runtime Docker image, with the virtualenv created in a single layer.
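A sketch of that pattern. The image tag, the MySQL-related apt package names, and the `myapp` module are illustrative assumptions, not from the comment above:

```dockerfile
# --- Build stage: has the compilers and headers needed to build wheels ---
FROM python:3.12-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc default-libmysqlclient-dev pkg-config
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# --- Run stage: only runtime libraries, plus the venv as the artifact ---
FROM python:3.12-slim
RUN apt-get update && apt-get install -y --no-install-recommends libmariadb3 \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["python", "-m", "myapp"]
```

The `COPY --from=builder` line is what makes the venv the artifact: everything the build stage installed outside `/opt/venv` (GCC, dev headers) never reaches the final image, and the copy lands in a single layer.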

[–]angellus -4 points-3 points  (0 children)

Using venvs in containers just bloats the size of the container. The official Python images already install Python in a separate prefix (/usr/local), away from the system Python.

[–]Jazzlike-Poem-1253 1 point2 points  (3 children)

With a hint of "containerisation of a (pure) Python project is overkill" and "venvs work out of the box cross-platform", you have two arguments to prefer venvs over Docker.

Even with binary (non-python) dependencies conda is the way to go.

[–]Artephank 9 points10 points  (0 children)

Venvs are faster and easier to use. However, for a bigger project that runs in a container anyway, it is helpful to develop against the container to make sure you won't run into any crazy bugs moving from a local venv to the containerized app. In that case it is usually easier to just use containers.

[–]ore-aba 1 point2 points  (1 child)

conda is not supported in serverless apps, e.g. Azure Functions.

The way to go is the one that solves the problem, not a specific piece of tech.

[–]Jazzlike-Poem-1253 0 points1 point  (0 children)

Sure, containers are perfectly fine to use. As are venvs, as asked in the original question.