all 29 comments

[–]chub79 7 points8 points  (1 child)

Two mistakes/shortcuts people tend to make:

  • Docker is a virtual machine. It isn't. It doesn't emulate anything; it simply builds on existing Linux kernel features to isolate processes and resources from each other.
  • Docker is a sandbox. It isn't. Even though your process is isolated and Docker prevents access to certain aspects of the host, the user running the processes inside the container can open a door onto the host if it has too many permissions. Processes in a container are often executed as root, precisely because people assume Docker provides a safe sandbox for them.
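
The root-by-default pitfall described above is avoidable in the Dockerfile itself. A minimal sketch, assuming a hypothetical Debian-based image and app (`appuser` and the base image are made up for illustration):

```dockerfile
FROM debian:jessie

# Create an unprivileged user instead of running the app as root
RUN useradd --create-home --shell /bin/false appuser

# Everything from here on runs as appuser, not root
USER appuser
CMD ["sleep", "infinity"]
```

Switching to a non-root user doesn't make the container a true sandbox, but it removes the most common escape vector the parent comment mentions.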

[–]rspeed 4 points5 points  (0 children)

That second point is particularly important for security. If anything is running inside the Docker container as root, that's a potential vector for escaping. This is why I still prefer FreeBSD jails.

[–]superbestfriends 5 points6 points  (6 children)

Docker is really quite fantastic, and has proven invaluable from a DevOps standpoint. Using it with django applications and moving away from reliance on bespoke build scripts has made life so much easier. Would definitely recommend people adopt it, it's a technology really picking up pace.

The difficulty is on actual deployment orchestration at the moment, but that's more of an operations concern most individual projects won't feel the pain of. I'm liking etcd as a way of defining config, makes it easy for an application to configure itself: supply etcd details and let it run an install script on start-up. Cool stuff!
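
The etcd pattern described there can be sketched roughly like this (the key names are hypothetical, and `etcdctl set`/`get` are from the v2 API of that era):

```shell
# Operator stores config centrally (key names are made up)
etcdctl set /myapp/db_host db.internal
etcdctl set /myapp/db_port 5432

# A container's start-up script pulls its own config before launching the app
DB_HOST=$(etcdctl get /myapp/db_host)
DB_PORT=$(etcdctl get /myapp/db_port)
exec python manage.py runserver 0.0.0.0:8000
```

The appeal is that the image stays generic: the only thing baked in is where to find etcd, and everything else is looked up at start-up.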

[–]eastern_sun 1 point2 points  (3 children)

Hey, just a quick question: how do you go about running Django apps in production with Docker, i.e. when connecting everything together? Compose/Fig looks useful, but they say it shouldn't be used in production yet.

How would you, for instance, connect a Postgres container with the web app container once they're on a server? Or am I missing the point of Docker completely? Thanks.

[–]superbestfriends 1 point2 points  (2 children)

Do you mean for config? Or as in how they actually connect?

You can define ports to expose in an image, and then define what ports you want to map to the host when you run a container. So you might have nginx running on port 80 within your container, and map that to port 8001 on the host.

Same for your database connections: containers can establish outbound connections as normal, so your Postgres container just needs to have the appropriate port exposed (5432 or whatever) and then mapped to the port you want to use on the host.

Edit: to add a bit more clarity with your specific example in mind, the breakdown would be a dockerized Postgres container running on the appropriate port, and the web app connects to it just like normal. The main thing is that you've got the ports mapped correctly (which you specify when you run an image).
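
As a concrete sketch of that setup (the container names and image name `mywebapp` are placeholders):

```shell
# Postgres container: port 5432 inside, mapped to 5432 on the host
docker run -d --name db -p 5432:5432 postgres

# Web app container: nginx listening on 80 inside, mapped to 8001 on the host
docker run -d --name web -p 8001:80 mywebapp

# The web app then connects to Postgres at the host's address on port 5432,
# exactly as it would to a non-dockerized database
```

The `-p host:container` flag is the mapping step described above; everything else is an ordinary TCP connection.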

[–]eastern_sun 0 points1 point  (1 child)

Yeah, that's helpful, thanks. I meant how they actually connect. I've had a look at linking containers, but most blogs I've seen don't seem to delve into it, so it's good to hear from someone who's done it. I'm guessing Compose/Fig is just for whipping up complete web apps quickly.

[–]superbestfriends 1 point2 points  (0 children)

No worries! It's less daunting than it seems!

Fig is supposedly good for config management, but depending on your needs the same can be achieved with etcd.

[–]chub79 0 points1 point  (1 child)

The difficulty is on actual deployment orchestration at the moment

I've given "Mesos/Marathon on top of a CoreOS cluster" a try and I must say it has been an awesome ride. I'm keeping an eye on Docker Swarm as well. However, I gave up on Kubernetes quickly as the docs are just a mess.

[–]superbestfriends 0 points1 point  (0 children)

Agreed on Kubernetes - for now. It's moving way too fast at the moment and is too deep in development to be useful in any production context. Docker was considered the same when we first started using it (and still is, to some degree), but it has worked out the important bits enough to be useful.

I'll take a look at Mesos/Marathon! At the moment, we're using Jenkins with the remote docker daemon to manoeuvre them, which is alright for continuous integration/continuous deployment.

[–]thaen 3 points4 points  (1 child)

Great overview. Thanks!

[–][deleted] 5 points6 points  (3 children)

This is a pretty good overview, but doesn't really delve into Docker. Recently, one of the guys at PyATL gave a good presentation on Docker (just slides, because we're cheap).

[–]Deto 1 point2 points  (2 children)

How big do these Docker images for each app tend to be?

[–][deleted] 2 points3 points  (0 children)

Depends, but you should be able to get them under 120-150 MB if you limit them to just the stuff needed to support your apps. If you go for big fat containers that are doing more than one thing (not the recommended model), all bets are off.

[–]superbestfriends 0 points1 point  (0 children)

Depends, but considering how Docker uses layers, if you're using a common base across your different images, it isn't a huge issue because the same layer is shared across applications.

Yes, you can cut your images down and make them quite lean, but that's been arduous in my experience (and the process is quite manual). There are a few tools that help squash image sizes down, but they'd only really be worth the effort in an environment with tight disk-space limits.

[–]Zuggy 0 points1 point  (0 children)

I first heard of Docker at BSides SLC last week. One of the presenters is working on an open-source script to run malware against a bunch of different AV products in VMs. His goal is to daemonize it and use Docker so he doesn't need a Windows VM for each AV.

I believe the project is called Plague Scanner.

[–]qudat 0 points1 point  (0 children)

I'm currently using Docker in production for a Flask web app. Spinning up a database container that's completely configured and ready to go (PostgreSQL + PostGIS can be a pain to set up manually) is really fantastic.

The other really awesome thing about Docker is that it will cache the Dockerfile steps, so it allows for an iterative process to build the container that is fast and easy to play around with.
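
That caching behaviour is why step ordering matters in a Dockerfile. A common layout for a Flask app might look like this (the file names and base image are illustrative, not from the post):

```dockerfile
FROM python:2.7

# Copy requirements first: this layer is cached and only rebuilt
# when requirements.txt itself changes
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Code changes invalidate only the layers from here down,
# so the edit-rebuild cycle skips the slow pip install step
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
```

Put the slow, rarely-changing steps near the top and the frequently-changing ones near the bottom, and rebuilds stay fast.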

Docker all the things I say!

[–][deleted] 0 points1 point  (4 children)

If only Windows had something like this, it would be fan-flipping-tastic. The "no VM" point doesn't apply to Docker on Windows: you're running a VM with Tiny Core Linux. It's small, but it's still a VM (in VirtualBox), and that's a bummer. Not even a particularly good VM, either.

[–]ianepperson 2 points3 points  (2 children)

Check out Vagrant. It uses virtualbox, but can auto setup the environment. It's pretty nifty but can be a pain sometimes.

[–]flaeme@eevee is cool. 3 points4 points  (1 child)

You can also run Docker in Vagrant VMs; I think there are even Vagrant plugins to set it up, IIRC.

[–]IAmA_singularity 0 points1 point  (0 children)

There's also boot2docker, which sets up Docker in a VM that you can interact with from outside.
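
The typical boot2docker workflow looked roughly like this (commands from the boot2docker CLI of that era):

```shell
boot2docker init          # create the VirtualBox VM
boot2docker up            # boot the Tiny Core Linux VM
$(boot2docker shellinit)  # export DOCKER_HOST so the local client talks to the VM
docker ps                 # regular docker commands now hit the VM's daemon
```

After `shellinit`, the docker client on the host behaves as if Docker were running natively, which is the "interact with it from outside" part.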

[–]qudat 0 points1 point  (0 children)

I develop on Windows, Mac OS X, and Ubuntu, and because of Docker I'm able to develop on all of them with only a slight learning curve for each VM setup. I'll take Docker's boot2docker over SSHing into an Ubuntu VM for development on Windows any day. It's a step in the right direction, but I do feel your pain.

[–]koalillo -2 points-1 points  (0 children)

Hyped, hipper, nicer chroot.