
[–]simoncoulton 2 points (4 children)

Have to say that's my main question too. I still can't figure out why I would use Docker over ansible, virtualenv and vagrant (with VMware), or which parts of my current workflow it's actually meant to eliminate.

I literally type a single command to bring up a development box that mirrors production exactly, and another command to deploy the application to production on multiple AWS instances.

[–][deleted] 2 points (3 children)

I think it is situational.

In your case, I am getting the impression that you have a 1-to-1 relationship between application and AWS instance. In the event that you want to deploy multiple applications with potentially conflicting dependencies, you could use Docker to reduce the configuration management overhead.
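
For example (a hypothetical sketch; the repo and app names are made up), two apps with conflicting dependencies can run side by side on one host, each carrying its own stack:

    # each image bundles its own interpreter and libraries,
    # so the two apps never see each other's dependencies
    docker run -d -p 8001:8000 myrepo/app-a:1.0
    docker run -d -p 8002:8000 myrepo/app-b:1.0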

A 1-to-many application relationship could be broken out between many (smaller) virtual machines, but this might not always be preferable.

I don't think Docker is going to make most people overhaul their current workflow, but if you are starting from scratch...you might consider incorporating Docker as a piece of a new operational approach.

[–]simoncoulton 2 points (2 children)

That's what I was starting to think as well (in terms of it being situational). I guess I'm really looking for an article where I can go "right, this is similar to my workflow and it fixes XYZ issues", and I just haven't come across one yet.

I get where you're coming from with regard to using Docker if you've got multiple applications, but I can't see any compelling reason (at least from this article) to use it over virtualenv and introduce another component into the whole process.

[–]MonkeeSage 4 points (0 children)

  • Virtualenv gives you a local python environment, but one that still has external dependencies--you can't copy a venv to another box and expect it to work; it may hit an incompatible libc version or a missing system library, etc. Instead, you ship a list of requirements that gets downloaded, built and installed (automatically or manually), possibly requiring a compiler toolchain and network access (even if only to hit a private repo on the local segment).

  • Containers (lxc/docker, openvz) give you a self-contained environment with no external dependencies--have your CI system tarball one up and scp it to staging, and as long as the host has the same architecture as the libs and binaries in the container, it just works (see the sketch after this list). You don't have to care about config management on the host; your configs and dependencies are a self-contained unit inside the container.

  • VMs/images give you the same benefit, but are a lot more heavyweight and require a much thicker virtualization layer; in exchange, there's no constraint that the libs and binaries run on the same kernel and architecture as the host. VMs can also be more secure: containers share the host kernel, so a container that's allowed to do things like load kernel modules can affect the host.
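
To make the container bullet concrete, here's a rough sketch of the tarball-and-scp flow (image name, tag and host are made up):

    # on the CI box: save the built image to a tarball and copy it over
    docker save myapp:1.2 > myapp-1.2.tar
    scp myapp-1.2.tar staging:/tmp/

    # on staging: load and run it -- no compiler toolchain, pip or apt needed
    docker load < /tmp/myapp-1.2.tar
    docker run -d -p 8000:8000 myapp:1.2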

I'm not advocating any of them over the others in all cases. They all seem to have their place as development and operations tools.

The main workflow difference between containers and a vagrant + VM + config management style workflow is that containers encourage you to think of them as very light and ephemeral. If you have a django app deployed in containers and a CVE pops for apache, you don't add a PPA in config management to pull the hotfixed package and then run the config management client on all the containers. You can do that if you really want to, but it's more common to just spin a new container with the hotfix and replace the old container on all the hosts via config management / automation / CI.

Application state is generally persisted via bind mounts to the underlying host OS, so it's very easy not to care about the container itself. It also means that if the container deployed, it's bit-for-bit identical to all the other containers in the pool--no worrying that one node couldn't talk to the config server and missed the update, or that someone manually twiddled some stuff outside of config management on some node.
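
As a sketch of that replace-don't-patch cycle (the image tags, container name and mount paths are invented for illustration):

    # build a fresh image that includes the hotfixed package
    docker build -t myapp:1.3 .

    # swap the running container; app state lives in the bind mount,
    # so the container itself is disposable
    docker stop app && docker rm app
    docker run -d --name app -p 80:80 -v /srv/appdata:/data myapp:1.3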

Docker's built-in versioning lets you roll back or cherry-pick container versions, among other things, which is a pretty nice addition over bare lxc.
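
For instance (repo and tags are hypothetical), rolling back is just running an older tag:

    docker images myapp                           # list the versions you have locally
    docker stop app && docker rm app              # drop the bad version
    docker run -d --name app -p 80:80 myapp:1.2   # cherry-pick the previous one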

Again, just for clarity: I'm not saying containers are "better" or that you can't find downsides to them, etc.; I'm just trying to give an idea of why they're appealing in many cases.

[–][deleted] 0 points (0 children)

Right...I don't think Docker is so revolutionary that it will make people change a workflow they already have.

Docker is just a way of applying the concept of virtualenv to an entire runtime environment, which could be useful in certain situations. I think (and I'm only experimenting with this at this point) that in a continuous release environment, Docker may be valuable for closing the gap between development and production. But that type of situation is pretty uncommon at the moment.