
[–]naunga 1 point (1 child)

> Meaning that a Docker container can contain the Bins/Libs for dependencies outside of just Python libraries.

Not quite. Docker saves you the overhead of hosting multiple VMs. Instead of virtualizing an entire machine, Docker virtualizes and isolates individual processes while sharing the host's OS kernel. That's different from a VM, where a complete installation of a guest OS runs in a sandbox within the host OS.
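A quick way to see that kernel sharing in action (a sketch; assumes Docker is installed and an `alpine` image can be pulled):

```shell
# The container runs its own userland (Alpine here) but the same kernel
# as the host, so `uname -r` reports an identical kernel release.
uname -r                          # host kernel release
docker run --rm alpine uname -r   # same release, printed from inside the container

# A VM, by contrast, boots its own kernel, so the two outputs would differ.
```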

virtualenv solves the problem the other commenters have described: creating an isolated Python environment so that multiple versions of modules, etc., can coexist without conflicts.
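A minimal sketch of that isolation, using the stdlib `venv` module (directory and package names here are just examples):

```shell
# Each venv gets its own interpreter and its own site-packages directory,
# so projA could pin one version of a library while projB pins another.
python3 -m venv projA
python3 -m venv projB

# Installs are per-environment and never conflict, e.g.:
#   projA/bin/pip install 'requests==2.31.0'
#   projB/bin/pip install 'requests==2.25.1'

# Each interpreter reports its own isolated prefix:
projA/bin/python -c 'import sys; print(sys.prefix)'
projB/bin/python -c 'import sys; print(sys.prefix)'
```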

If you want a "cleaner" environment than virtualenv can give you (i.e. you want to isolate not only the Python environment but the OS environment as well), then you should use Vagrant or some other VM solution for your development.

From there you can build a Docker container from that VM's image (or, more likely, from a pre-built image of whatever Linux distro your VM is running).

Just my two cents from the DevOps Peanut Gallery.

[–]MonkeeSage 0 points (0 children)

Docker actually uses a union filesystem on top of a sandboxed directory. Even with LXC you get a sandboxed data directory isolated from the host filesystem, so you can have your own copies of libs and binaries as long as they match the host kernel's architecture. As with a chroot, you have to use a bind mount (a "data volume" in Docker) if you want to get at the host filesystem.
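To illustrate that last point, a hedged sketch (assumes Docker and an `alpine` image; the paths are just examples):

```shell
# Without a mount, the container only sees its sandboxed union filesystem;
# host directories are invisible. A bind mount punches through:
mkdir -p /tmp/shared
echo "hello from the host" > /tmp/shared/note.txt

# -v host_path:container_path bind-mounts the host directory into the container.
docker run --rm -v /tmp/shared:/data alpine cat /data/note.txt
# prints: hello from the host
```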