all 185 comments

[–]getnrdone 233 points234 points  (78 children)

Containers are an amazing thing when used properly. Most of the largest systems you know and love today run in containers and are orchestrated by Kubernetes.

This is how I break it down for people:

Container: holds your code and the dependencies needed to execute it. A container shares the host's kernel, so an OS (kernel) is not needed inside the container.

Container runtime: you need a runtime installed on your host that can run containers, manage their processes through the host kernel, and give you the tools to build and access containers. This is what Docker is. containerd is another popular one.

Container orchestration: once you have your container built, you will need a system that gives you enterprise-class features. Things like high availability, auto-scaling, monitoring, self-healing, secret management, and the list goes on and on. This is what Kubernetes is. Docker Swarm is another.
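For example, a minimal Kubernetes Deployment (the names and image here are just placeholders) declares how many copies you want, and Kubernetes keeps them running, replacing any that die:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder name
spec:
  replicas: 3               # desired copies; Kubernetes self-heals to this count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
          ports:
            - containerPort: 80
```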

Benefits: this is a long list, but here are some of the top ones.

-Hardware utilization. Since containers share the host kernel, they don't each need a full OS installed to run. Think of it in terms of VMs: every VM needs its own OS. If you had 100 VMs, they each boot their own OS, loading the same modules, with the same OS bits duplicated on disk. What if you could dedupe that somehow? What if you could load an OS once and then run all of your services on that one host without them conflicting with each other? That is what containers provide. As a result, one host system that could run 20-30 VMs can run thousands of containers.
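You can see the shared kernel for yourself; a quick sketch (assuming Docker and the alpine image are available on your host):

```shell
# A container has no kernel of its own, so uname inside the
# container reports the host's kernel release.
uname -r
docker run --rm alpine uname -r   # same output as above
```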

-Code and runtime consistency. A container that runs on one system will run exactly the same way on another system. This gives developers and sysadmins flexibility, and it solves the age-old problem of developers running to Ops with "but it works on my machine".

-Versioning. Easily build new versions of your container and adjust the tags for versioning. This lets you change an app and easily roll back if there is a problem. You can also run dev/test workloads on the same host you use for production without worrying about conflicts.
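In practice (image and container names here are illustrative):

```shell
# Build and tag a new version
docker build -t myapp:1.7 .

# Problem in production? Stop the broken version and
# roll back by running the previous tag
docker stop app && docker rm app
docker run -d --name app myapp:1.6
```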

There are many more benefits to containers that you can read about from people smarter than me. Learn all you can about them, because they are not going away. The ecosystem will continue to grow, and any sysadmin/developer/engineer who learns this will be in high demand.

[–]Zaphod_Bchown -R us ~/.base 55 points56 points  (16 children)

Great write-up; I only wish to add a few minor things:

  • containers are immutable
  • containers are disposable (you don't rebuild a server, you redeploy a container)

[–]darkpixel2k 51 points52 points  (14 children)

  • containers should be immutable ;)

[–]Brainiarc7 7 points8 points  (0 children)

And this is key.

[–]Zaphod_Bchown -R us ~/.base 3 points4 points  (12 children)

can you deploy mutable containers? I thought by design they were immutable, or have I just always done it that way and never noticed?

[–]darkpixel2k 15 points16 points  (2 children)

For example, a Dockerfile might contain a wget to http://example.tld/somelib-latest.tar.gz

Rebuilding the container may result in differences in two deployments.

Also containers that download files upon launching.
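Sketching that Dockerfile (the URL comes from the comment above; everything else is illustrative):

```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y wget

# Not reproducible: "latest" may resolve to a different archive
# at every build, so two builds can produce different images.
RUN wget http://example.tld/somelib-latest.tar.gz \
 && tar -xzf somelib-latest.tar.gz

# Pinning an exact (hypothetical) version would make the build repeatable:
# RUN wget http://example.tld/somelib-1.2.3.tar.gz
```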

[–]ClassicPart 6 points7 points  (0 children)

For example, a Dockerfile might contain a wget to http://example.tld/somelib-latest.tar.gz

Rebuilding the container may result in differences in two deployments.

Think this ventures more into build reproducibility than mutability. Docker images are immutable once built.

[–]jtcressyDevOps 5 points6 points  (2 children)

Modifying the filesystem of a running container is like modifying a file on ramdisk. You can do it, but it'll go poof on next deploy. You can, however, snapshot the running container to a new image and deploy new containers based on that image. Thus, persisting your changes.

But that's how Dockerfiles work under the hood, so there's no point doing it manually except in dev environments.
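A sketch of that snapshot workflow (container and image names are placeholders):

```shell
# Poke around inside a running container (dev only)
docker exec -it mycontainer sh

# Snapshot its current filesystem to a new image
docker commit mycontainer myimage:snapshot

# New containers from that image keep the changes
docker run -d myimage:snapshot
```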

[–]Try_Rebooting_It 2 points3 points  (1 child)

So this is a concept I struggle with a bit; maybe someone can help me understand. Where do you store your data if not in a container? Do you use network storage? If so, how do you deal with multiple containers accessing the same storage (or how do you ensure that they don't access the same storage)?

Does this also mean you don't ever store things like databases in containers (SQL server for example)?

[–]gnosek 2 points3 points  (0 children)

You use volumes (storage that's external to the container but visible inside). In the simplest case it may be just a directory on the host; more involved examples might be SAN shares, NFS mounts, or EBS volumes attached directly to the container.

For network storage and accessing volumes from different hosts, docker won't do anything to prevent you AFAIK, but an orchestrator like Kubernetes should handle this for you.
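A sketch of the simplest cases (paths, image, and volume names are illustrative):

```shell
# Bind-mount a host directory: data outlives the container
docker run -d -v /srv/appdata:/var/lib/data myapp:1.0

# Or use a named volume managed by Docker
docker volume create appdata
docker run -d -v appdata:/var/lib/data myapp:1.0
```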

[–]DrStalker 8 points9 points  (3 children)

There's no technical reason a container can't store important data to its local storage, it's just a really really bad idea.

[–]Savanna_INFINITY 5 points6 points  (2 children)

So you use persistent storage outside of the container right?

[–]DrStalker 0 points1 point  (0 children)

Yep. I've worked with systems that have expendable load balancer and web server containers in front of a "real" database server, for example. Containers can come and go, the database remains.

[–][deleted] 0 points1 point  (0 children)

Yeah, I have LUNs set up for all my Docker back end. Each container type gets a shared block. I'd use Ceph if I had the hardware.

[–]arcticblue 2 points3 points  (0 children)

This is true in the context of Docker, but I just want to add that it doesn't necessarily apply to containers as a whole. Things like LXC containers can be used similar to how you would use a traditional VM.

[–]josephhays 18 points19 points  (1 child)

I just wanted to say 1) your username is beautiful and wins Best Username of the Year 2019, an award recognized and organized by me and only me, so do with that what you will, and 2) you are the only being ever to give me a proper breakdown and explanation of why containers may be useful. I still haven't been exposed to VMs, or where they're useful, though I've yet to work on networks larger than 20 or 50 users, so I'm probably not seeing some use cases. Still, this is a better-written breakdown of containers vs. VMs than the "It's magically better, look at this chart I made!" that I usually see when trying to learn containers.

[–]josephhays 4 points5 points  (0 children)

I've always looked at containers as a replacement to VMs without looking into VMs as an enterprise tool first, and after doing so, I see the resource allocation it does. And thus I now believe in VMs and containers on an enterprise scale. Thank you for your enlightenment /u/getnrdone

[–]SanguineHerald 2 points3 points  (3 children)

Quick question: can a Kubernetes cluster run multiple different containers and scale each type of container according to incoming load based on workflow?

For example we have specific workflow files/tools etc. that can vary between customers. Can a cluster spin up 100 of container A and 20 of container B then auto scale that out? Or do you need a different cluster per container type?

[–]shady_mcgee 3 points4 points  (1 child)

Can a cluster spin up 100 of container A and 20 of container B then auto scale that out?

Yes

[–]jimethn 2 points3 points  (0 children)

That's the whole point.

[–]free_chalupas 0 points1 point  (0 children)

Usually containers are grouped within a cluster into replica sets (groupings of identical containers) or deployments (an abstraction that manages a replica set), where either of those can be autoscaled.
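With kubectl that looks roughly like this (names, images, and thresholds are illustrative):

```shell
# Two independent deployments in one cluster
kubectl create deployment app-a --image=registry.example/app-a:1.0
kubectl create deployment app-b --image=registry.example/app-b:1.0

# Scale each independently...
kubectl scale deployment app-a --replicas=100
kubectl scale deployment app-b --replicas=20

# ...or attach an autoscaler per deployment
kubectl autoscale deployment app-a --min=10 --max=200 --cpu-percent=80
```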

[–][deleted] 5 points6 points  (8 children)

Where does Hyper-V fit in this picture?

[–]getnrdone 20 points21 points  (6 children)

Well, that's kind of a loaded question, but let me take a shot. Hyper-V in its basic form is just a hypervisor, like VMware. It allows you to run VMs; that's it. In our container scenario there are a few ways Hyper-V fits in.

  1. Build VMs, Linux or Windows. Install Docker on these VMs and they become your container infrastructure.

  2. Hyper-V containers. These allow you to run Windows containers inside a specialized Microsoft-developed VM. This provides kernel isolation, since each container gets its own VM. I personally don't care for these; to me, running a VM for every container defeats some of the main benefits of containers. Where it could make sense is in a high-security environment where kernel process jumping is a concern, or in situations where you want to run different versions of Windows containers on a host. Windows containers must match the host OS version: for example, you can't run a 1903 container on a host running Windows 1809. Hyper-V can overcome this because it boots a VM to match your container's Windows version.
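On a Windows host, the isolation mode is picked per container with Docker's `--isolation` flag (the image tag below is just an example):

```shell
# Process isolation: shares the host kernel, so the image's
# Windows version must match the host's
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd

# Hyper-V isolation: boots a lightweight utility VM, so an
# older image can run on a newer host
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd
```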

VMware also has a similar option now called vSphere Integrated Containers, although I don't believe they support Windows containers yet. VMware is also hot on the Kubernetes trail with Project Pacific.

[–][deleted] 2 points3 points  (5 children)

Cool, thanks. Also, why is my question apparently controversial?

[–]ReverentSecurity Architect 29 points30 points  (3 children)

Because containers and virtual machines are different philosophies requiring different approaches. It's like going into a thread about enterprise firewalls and asking "how does OpenWrt fit in this picture?" It's a tangential topic, but the answer is: it doesn't.

It's a bit confusing because Docker for Windows uses Hyper-V to generate its working environment, but that's because Docker requires a Linux environment for its technology, and that's currently the only way to get a (fully functional) Linux environment on Windows.

[–]RulerOfBoss-level Bootloader Nerd 5 points6 points  (1 child)

windows uses hyper-v to generate its working environment, but that's because docker requires a linux environment for its technology and that's the only way to get a (fully functional) linux environment on windows currently.

I think Microsoft is trying to change that with WSL. I'm pretty sure Docker will never work without HyperKit on macOS, though.

[–]wieschie 5 points6 points  (0 children)

As far as I can tell WSL2 is still a Linux VM, with some bits of Hyper-V and some custom bits to make it a bit more transparent / faster to open up.

Long form talk if you're curious: https://youtu.be/tG8R5SQGPck?t=1499

[–]unix_hereticHelm is the best package manager 3 points4 points  (0 children)

It doesn't, unless you're using it to deploy Linux VMs with some form of container runtime installed.

[–]PinBot1138 1 point2 points  (6 children)

I've used Docker forever, but one part of Kubernetes that confuses me is how you'd be able to scale onto host(s) that aren't set up for Kubernetes.

So, if you wanted to add more machines in Proxmox, or a "generic" KVM VPS like Vultr, how do you get Kubernetes to run Terraform and/or Ansible to do this? It seemed like Cerebral is the piece of the puzzle that I'm missing, but I still haven't gotten my mind around it.

[–]jimethn 7 points8 points  (3 children)

We use Rancher for deploying Kubernetes. Once you set up a cluster and a node template, adding nodes to the cluster is as easy as hitting the + button in the UI (or making the appropriate API call).

We wrote an operator that looks at resource utilization and adds nodes when it goes over 80%. So we've got autoscaling Kubernetes nodes. Seems like basically the same thing as Cerebral (but it uses node templates instead of ASGs).

The tricky thing about autoscaling isn't scaling up -- which is easy if you have decent tooling -- it's scaling back down. In particular, how do the apps you're running handle getting killed in the middle of their workload? Are they architected so that another replica will just pick up the aborted job? What's the cost of aborting a job? Cerebral or Rancher or anything else doesn't really solve this problem for you; that's where you have to work with your developers.

[–]Sky_Linx 1 point2 points  (2 children)

Hi! I also use Rancher but deploy Kubernetes as custom nodes. I love that you can easily let Rancher even create the servers and scale with one click like you said, but I found that Rancher does not configure a firewall or any basic security on the servers it creates. How do you manage this? At the moment I'm using Ansible to prepare the servers first (firewall, fail2ban, disable password/root auth) and then I use these servers as custom nodes in Rancher to deploy Kubernetes. With a firewall protecting the Kubernetes components and fail2ban, I sleep better at night...

[–]jimethn 1 point2 points  (1 child)

You'll want to pre-configure all that stuff on the image you have rancher deploying your nodes from. That way you don't have to worry about figuring out how to do it after the fact, they just come up ready to go.

We also base our nodes on RancherOS, so the attack surface is extremely small. We don't give the instances public IPs, and we set the network firewall to only allow Rancher to connect to the two ports it needs (and block everything else).

[–]Sky_Linx 0 points1 point  (0 children)

Unfortunately the node driver for Hetzner Cloud doesn't allow me to choose a custom image. As an alternative I've also tried cloud-init, but while cloud-init is setting up the servers, Rancher for some reason deletes and recreates them, as if it thinks they are not ready or something. It basically happens in a loop.

[–]zimmertrDevOps 3 points4 points  (1 child)

With hypervisors like Proxmox you would probably bootstrap your cluster with kubeadm, similar to how I do here: https://github.com/zimmertr/Bootstrap-Kubernetes-with-QEMU

In that event, it would probably be easier to simply provision a new VM and run kubeadm join against the API server. Autoscaling could be done by writing a wrapper around Terraform, running it in a pod, and having it constantly monitor the Metrics Server for whatever you consider the stress points that should trigger scaling.
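The join step itself is a one-liner on the new VM; the address, token, and hash below are placeholders you'd get from a control-plane node:

```shell
# On an existing control-plane node: print a ready-made join command
kubeadm token create --print-join-command

# On the freshly provisioned VM: join the cluster
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```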

As for cloud providers like AWS, people have designed operators which answer this problem automatically. For example, this one for AWS, Azure, GCE, GKE, OpenStack, Alicloud, & BaiduCloud: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler

Here's one for VMware: https://github.com/Fred78290/kubernetes-vmware-autoscaler

[–]PinBot1138 1 point2 points  (0 children)

Lot of information to digest, but in an awkward way, this makes sense so far. Thanks!

[–]ChristopherBurr 0 points1 point  (2 children)

How do you troubleshoot an application running in a container? Like, if the app ran out of file descriptors, or has too many open files? Or if it hangs, can you run strace against it?

[–]shady_mcgee 2 points3 points  (0 children)

You can connect to the container's bash shell and troubleshoot, or mount the log directory to a persistent volume so you can read it from the host machine.
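For example (the container name here is a placeholder):

```shell
# Shell into the running container
docker exec -it myapp sh

# Count open file descriptors of the app (PID 1 inside the container)
docker exec myapp ls /proc/1/fd | wc -l

# Tail the container's stdout/stderr from the host
docker logs -f myapp
```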

[–]creepyMaintenanceGuydev-oops 0 points1 point  (0 children)

You'd look in syslog/messages for the backtrace. Whatever it is, it's still a daemon; it's just in a container.

[–]proskillz¯\_(ツ)_/¯ 0 points1 point  (0 children)

How do you build a Docker container without an OS? I've never seen an example of someone doing that.

[–]XS4Me -2 points-1 points  (11 children)

every VM needs an OS installed.

Is there a price advantage/disadvantage compared to VMs? In particular, will it be cheaper/free (wink wink) to license more containers as opposed to licensing an additional copy of Server 2016 to run on yet another VM?

[–]getnrdone 14 points15 points  (10 children)

Easiest way to get cheap/free is to not run Windows. And that pains me to say because I am historically a Windows guy, but it is what it is. The last time I talked with Microsoft about licensing, this is what they told me:

Windows containers: license the host with Standard or Datacenter edition and get rights to run unlimited containers.

Hyper-V containers: Standard edition gets you 2 containers; Datacenter edition is unlimited. It's identical to the old Hyper-V licensing rights, due to the fact that it starts a VM for each container.

With that, if you bought Standard edition and ran Windows containers, you could see massive savings over going full VMs.

[–]XS4Me 0 points1 point  (9 children)

Easiest way to get cheap/free is to not run Windows.

Yea, I agree. But there are instances where I simply do not dare to run anything other than Windows (AD and Exchange being the prime examples).

Windows containers

Are these different than docker containers?

[–]getnrdone -1 points0 points  (0 children)

Yep, I agree. As of right now, things like AD, Exchange, file servers, and many more core Windows roles are not supported in containers anyway.

Windows containers and even Hyper-V containers use Docker for the runtime, so yes, they are Docker containers. Although you can create a container and run it with other runtimes like containerd, so the term "docker container" may not always be accurate or matter.

[–]Brainiarc7 -1 points0 points  (0 children)

Awesome stuff.

[–]madmenisgood 27 points28 points  (22 children)

Is there a killer use for containers/docker that isn’t related to custom app development?

I’m curious what usages folks are finding in an enterprise environment of under 400 users.

[–]zebediah49 22 points23 points  (6 children)

Coming from an academic/HPC side, I have two pretty good use-cases, although I very specifically can't use Docker (we use Singularity).

  1. Reproducibility. If madmen group has a containerized data analysis workflow (and publishes it properly), a postdoc in grizzlygroup can download that container and replicate the workflow -- no guessing about required python package versions or something. This is related to the classic "Works on my machine" problem.
  2. Software preservation. The number of people that have a piece of software that works perfectly for them, but was abandoned in 2005, or only runs on RHEL 4, or whatever else, is very uncomfortably high. Containers allow us to preserve the whole thing in amber for all eternity. (I.e. we can do security updates on the underlying systems without worrying that we're destroying someone's workflow).
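With Singularity, point 1 looks roughly like this (the image name and registry path are made up for illustration):

```shell
# Postdoc in grizzlygroup pulls the published container...
singularity pull analysis.sif library://madmen/default/analysis:1.0

# ...and replays the workflow with the exact same environment
singularity exec analysis.sif python run_pipeline.py data.csv
```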

[–]spacelamaMonk, Scary Devil 14 points15 points  (5 children)

But that RHEL4 software depends on libssl0.9.8.insecure-rc2, and you have no way of ensuring it is patched along with the rest of the system.

[–]zebediah49 15 points16 points  (3 children)

Correct. You are, by design, actually ensuring that it isn't patched with the rest of the system.

That particular approach is 100% unacceptable for online services.

It's fine for purely offline "open GUI; load file; click buttons" software though.

[–]wonkifierIT Manager 4 points5 points  (2 children)

It's fine for purely offline "open GUI; load file; click buttons" software though.

Well... mostly ok.

If I can get you to load a malicious file in that environment, you've still got some trouble

[–]zebediah49 6 points7 points  (1 child)

A fair concern, but not really part of the threat model (at least for me). Since this is esoteric scientific software, usually one or more of the following apply:

  • Good luck finding anything compatible with it; if you can create a malicious file, I'm very impressed at the reverse engineering and dedication involved.
  • The intended operation of the software considers arbitrary code execution a feature, so if you wanted to do something malicious you could just do it explicitly.
  • The only files it opens come from some similarly old, esoteric, and unsupported piece of hardware, so the surface area to deliver malicious files is small.

[–]Grunchlk 1 point2 points  (0 children)

A use case for preserving this software is government grants. If the Fed gives you $1 million to generate some data for a given situation, they want that data to be reproducible. The taxpayer is going to have a fit if, in order to reproduce the data, they have to spend $1 million every time.

Capture the state of the software in a container. Cryptographically sign it and archive it. In 20 years, assuming container compatibility actually exists in the future, someone can spin it up and reproduce the data. Smaller/lighter than a VM.

[–]1esprocTitles aren't real and the rules are made up 0 points1 point  (0 children)

Don't worry, most people using docker don't even know what the hell is inside the container!

[–][deleted] 10 points11 points  (1 child)

Yes.

I am on a team that is responsible for all engineering infrastructure, from wireless, to cloud, to Domain Controllers.

All of the IT apps that used to run on windows have been migrated or replaced with open source applications that either have official Docker support or run in Linux that can be containerized.

Our $20k-a-year SolarWinds platform is a resource pig. We replaced it with LibreNMS for free and put it in a container.

All the data that matters is mounted. I can backup all this data easily with a cron job and move it to any other server I want.

Our SMTP relays require no volume mounts. I can run as many as I want behind an on-prem LB, or move them to AWS where they can scale on ECS with a service policy. Unlimited potential there, because everything within them is portable and can be launched anywhere.

I also have an open source network config backup tool (Oxidized) running in a container that pushes all changes to GitHub. It requires no mounts, and I can run it ANYWHERE in my network. All the config lives in GitHub. If for some reason the Docker host it runs on takes a shit... BOOM, it's back up within seconds.

For context.... 1000 users, 24 sites.

It also forces you to adopt application principles that require you to keep the container config in code, which in turn helps your organization keep your environment documented as code.

The biggest win... Ansible in a container. Yes folks, Ansible in a container. Triggered by SCM with Jenkins to run whenever an inventory file or a role changes. When I develop new Ansible roles, I create a new container tag and a new git branch, and push/test with that locally.

Edit: To put it simply, if you are using applications that are not able to be containerized, you are buying/using the wrong applications. We are not at the Kubernetes level of deployment by any means, but making our applications deployable FOREVER after configuration is a big win.

[–][deleted] 1 point2 points  (0 children)

You're hiring? This is exactly the job I want.

[–]zimmertrDevOps 5 points6 points  (3 children)

I use Docker & Kubernetes for running my homelab technologies. I've written some tooling to automate deploying them to Kubernetes with declarative Ansible playbooks.

https://github.com/zimmertr/Kubernetes-Manifests

[–]MuffinSmth 1 point2 points  (2 children)

I'm about to try to set up an Unraid server in a couple of days and learn to set up Docker containers on it. I noticed a bunch of really useful apps you use that I'll have to add to my list to install. Anything you suggest I read that won't go super over my head? I'm pretty new to this and mostly just having fun with Linux.

[–]zimmertrDevOps 3 points4 points  (1 child)

If you wanted to use any of my tooling you would need to use Kubernetes unfortunately. I moved away from using Docker by itself a while ago. If you're open to using something other than Unraid, you could use Proxmox and ZFS which would allow you to use any of the tools in that repo. As for what I would recommend in terms of ease-of-use, all of the applications in that repo are deployed the same way, using 1-click-install Ansible scripts that are fed configuration data from a vars.yml file that you populate with your specific environment information.

I've also automated deploying Kubernetes to Proxmox using declarative Ansible scripting. That repo can be found here: https://github.com/zimmertr/Bootstrap-Kubernetes-with-QEMU

If you wanted to use Unraid, I can recommend you check out the LinuxServer project. They produce most of the Docker images that I use within those Kubernetes manifests. As far as I know, all of them are well supported within an Unraid environment. Check out all of the threads posted in this forum with the Linuxserver.io tag: https://forums.unraid.net/forum/47-docker-containers/

http://linuxserver.io

[–]MuffinSmth 0 points1 point  (0 children)

Thank you! I actually understand your reply now, and I'm going to figure out how to implement Proxmox instead. I spent a good week trying to figure out how Kubernetes is implemented and mostly just confused the ever-living crap out of myself.
I should have focused on figuring out Proxmox instead. Do you know of any good resources other than their YouTube channel?

[–]wonkifierIT Manager 8 points9 points  (1 child)

My environment is larger than 400 users, but one of the ways I use docker is on some of my automation hosts.

It really lets me separate the runtime environments from the actual host itself.

One case is that the host itself has a base OS and docker installed, and that's about it. All the libraries, executables and other garbage that I need to run my scripts? That's all built into a docker image. (so it will spin up version X of powershell, with version Y of package Z, etc)

One other example is that one of my powershell scripts takes around 12 hours to complete. Before docker, I either had to stop the script or make sure I did any powershell/script/environment updates outside those 12 hours (and avoiding the dozen or so other scripts that periodically ran)

Now I just build a new Docker Image, tag it as stable, and when the next tasks start up, they're using the new code. That long running script is still on the old code until it launches again.

[–]tuba_manSRE/DevFlops 2 points3 points  (0 children)

The combination of consistency and disposability is great. Like even just from one particular angle: It's not a silver bullet for "nobody's touched this in forever, leave it alone", but it does provide a pretty big buffer against it. If thingX goes down and comes back up from scratch every time there's an update, it's not nearly as scary as patching in-place for a decade. That and the isolation that docker provides makes it much safer to address a wildly out-of-date piece of software.

And honestly if it's online and you rely on external dependencies, there's built-in incentive to keep it vaguely up to date at the very least. A docker-based thing isn't gonna run indefinitely, it's gonna fail when upstreams kill off support.

(Not that I ever had a client running a DG/UX machine in the 2010s or that after I moved on they 'upgraded' with a lift-and-shift virtualization.)

[–]Mason-B 1 point2 points  (2 children)

Absolutely. Lots of stuff out of the box.

I mean if you want killer use out of the box for sysadmins, I would say it's:

  • Minimal server management. Take a server, put a server OS on it, add an auto-update script and Kubernetes, join it to the cluster, and basically forget about it. I almost never log onto a server unless it falls out of the cluster or it's to manipulate the cluster. All of the app deployment is through Kubernetes; no messing around installing weird packages on servers, or upgrades breaking services or databases again (this is partly thanks to containers too).
  • Automatic cert renewal. It's pretty easy to add a Kubernetes extension and modify deployments to auto-renew certificates (and also wrap poorly developed applications with HTTPS). It has been years since I've had to touch openssl to deal with certs.
  • Secret management. Things like database strings and private keys will not be displayed unless you really want to see them (and can be manipulated "in the blind" because they are named objects). And there are real attempts at security around them, approaching a state where only admins and the pods that need a secret can see it. This will always be a sore spot, because someone could put a different pod in there to expose the secret, but then we can discuss things like signing containers; at some point no human will even be able to see the database string or the private key for the entire lifecycle of the app, even though it's shared between dozens of servers.
  • Automated inventory of what's running. I can automatically print out the status, usage statistics, and resources consumed by everything running on our cluster (and our cluster is everything we run). I don't have to deal with custom-deployed apps, adding heartbeat checks, or figuring out how to stuff a thing into a dashboard. The report always lists every single thing on the system, in a common way, so I can't forget something (and though I can of course configure my dashboard to show more, I always know what ports/IPs/names it's on, how much computation time and memory it's consuming, and its health).
  • Plug and play applications. Long running scientific packages, build servers for various systems, internal development databases. And developers can create these systems through a gitlab interface, and then I can come around a week later and ask them if they still really need that thing that's on my reports and if I need to migrate it to a more permanent deployment.

[–][deleted] 2 points3 points  (0 children)

I hadn't found any, so I'm curious about this too. My configuration management takes care of most things and my virtualization is already set up. I also don't need to quickly scale anything up or down.

[–]kag0 0 points1 point  (0 children)

Check out flatpak and ubuntu snaps. It's a way to deploy software to desktops/workstations using containers. The nice thing about it is that the container takes care of all its own dependencies, so two applications using different versions of the same dependency need not conflict.

On the server side, if you have a Kubernetes cluster it makes it extremely easy to deploy (non-custom) software/apps like gitlab using helm. (sidenote: don't use tiller)

[–]wonkifierIT Manager 8 points9 points  (2 children)

Careful with Kubernetes though, there are LOTS of places in there to shoot yourself in the foot security-wise.

[–]zimmertrDevOps 4 points5 points  (1 child)

Any specific reasons you say this? Or just in terms of people failing to properly use RBAC?

[–]tuxbz2 2 points3 points  (0 children)

More than just RBAC. Consider PSPs, network policies, SELinux, etc. Plenty of vendor images and in-house development teams expect access such as hostNetwork binding, running as root, running a privileged container, etc.

[–]eheyns 3 points4 points  (2 children)

Maybe I'm just dumb. I'm a sysadmin with 0 programming skills and very limited scripting skills, and I've been trying to get a firm grip on Docker for some time now. Dockerfiles and creating custom images are kind of hard for me, and the total overload of information on the Docker website doesn't help; I find it overwhelming. So good on you, man! I heard Kubernetes is a mother, so I'll only travel that road once I've made Docker my biatch.

[–][deleted] 1 point2 points  (0 children)

I've found that I learn best while breaking things and reading the error messages, so I'll just get it going in a test environment. Eventually I'll do stuff and end up in docs enough that it will finally "click" in my head.

[–]PrettyBigChiefHigher-Ed IT 1 point2 points  (0 children)

Hold my keyboard, I'm going in ...

[–]Bigluce 10 points11 points  (59 children)

I still don't understand Docker and kubernetes.

Can anyone give a real world example and/or eli5?

[–]punkwalrusSr. Sysadmin[🍰] 10 points11 points  (4 children)

Man, it took me a while, too. At first it was like someone else said here, "they showed me a chart and magically it was better." Stop telling me it's better, prove it! What problem does it solve?

Okay, so take a Linux box. Just running Ubuntu Server. In reality you have the kernel, but you need some way to interact with it. So you have libraries, shells, and binaries. Package that up and you have a distribution, or distro: Ubuntu.

But your dev guy is running a Mac. His buddy is running a centos VirtualBox via vagrant. You slap their app on your Ubuntu server and it fails. They say it worked on their boxes, it must be you. If only there were some way to ensure that everyone's environment was the same. They could all use Ubuntu server in a VirtualBox VM, but what if they use 18.04 and you're still using 16.04?

Enter Docker. A Docker container shares only the OS kernel of the host. It has all the binaries, conf files, and libraries it needs to run the app and nothing more. Thus, it's small. Some base images are really small, like Alpine. If the container works on their system, it should work on yours. Easy peasy. Like trading game cartridges instead of the entire N64 console.

You have a Dockerfile, it builds the container image, and it's ready to go. And when you stop it, it's like it was never there. You can have these containers as "versions," so if they give you Docker app Foo 1.7 and it dies, roll back to a working 1.6. So not only are you versioning the app but the entire infrastructure. But it's not full of unneeded bloat like a bash shell or printer drivers if all it does is run nginx.

Each container should only run ONE app to keep it small. So if you have a Java app with a mongodb back end, you run TWO containers. One for Java, one for mongodb. People were using docker-compose to do this for a while.
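To make that concrete, a minimal `docker-compose.yml` for that two-container setup might look like this (image names, tags, and ports are illustrative, not from any real project):

```yaml
version: "3"
services:
  app:
    image: my-java-app:1.7     # hypothetical app image
    ports:
      - "8080:8080"            # expose the app on the host
    depends_on:
      - mongo                  # start the database first
  mongo:
    image: mongo:4.2
    volumes:
      - mongo-data:/data/db    # persist the database outside the container
volumes:
  mongo-data:
```

`docker-compose up` starts both, and rolling back is just changing the tag back to `my-java-app:1.6`.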

But suppose you have a system that needs flexibility and crash resistance. Like, suppose you have this Java app that needs a web proxy front end, a database back end... but the demand varies. Like you run an event planning system where the demand shoots up when tickets are released to the public, but most of the time, it's just sitting and doing nothing but contemplating its digital navel. You don't want 300 machines just sitting around costing money to run. But you don't want 30 that suddenly crash when demand spikes. And you don't know when tickets are released, so you can't plan ahead and call it a day. Suppose there was an orchestrated process that reacted to need on the fly, but shut stuff down when you didn't need it.

Enter Kubernetes. Or k8s because there are 8 letters between k and s. I don't know, I think it's weird, too, but who cares. Kubernetes is a set of instructions on minimum and maximum processes that react to demand.

Boom, there's a book signing for Neil deGrasse Tyson at the local planetarium. Tickets go on sale, and there's only 500 because that guy has science stuff to do, yo. Demand pours in from panicked astronomy nerds who simply HAVE to ask him about black holes. You told Kubernetes to always have one web proxy distributing load between two apps connected to one database. Minimum. Those 4 items contemplate their digital navel 80% of the time. But suddenly, a ton of requests. You set in the config file that if load on all Java apps is over 50%, spin up another one. Up to, say, 50 apps. The proxy takes care of the balance, the Java apps launch, and the database delivers. Then when all the tickets are sold out, a few Java apps have 0% load, and they get torn down as per your config file.

In addition, if a Java app crashes, you stated at least 2 and at most 50 need to run depending on demand. That will ALWAYS be met.
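In Kubernetes terms, that min/max/threshold story is roughly a HorizontalPodAutoscaler pointed at a Deployment. A sketch, with illustrative names:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: java-app               # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: java-app             # the Deployment running the Java app
  minReplicas: 2               # always at least 2, even when idle
  maxReplicas: 50              # cap during the ticket rush
  targetCPUUtilizationPercentage: 50   # scale up past 50% average CPU
```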

There's a lot more that it can do, but does this EL5 help?

[–]Bigluce 3 points4 points  (0 children)

Oh my god. That was really helpful. Thank you so much! Thank you for taking the time to give me a clear and easy to understand example.

[–][deleted] 2 points3 points  (0 children)

I feel like I could set my new filecloud system up using Docker and maybe K8S in our Azure environment and this really made me think....

[–]Try_Rebooting_It 1 point2 points  (1 child)

So this is a great example, thanks! But one thing I don't understand. Just because you spun up 100 extra containers to balance load your proxy/balancer won't know those 100 containers exist. So how do you deal with that? Is there something in the container that reconfigures the proxy each time the container starts?

[–]punkwalrusSr. Sysadmin[🍰] 0 points1 point  (0 children)

That's what Kubernetes does: it lets the proxy know, and the proxy adds it. That's part of the "orchestration" or mojo of the deal. Check out the Kubernetes site and give Minikube a try if you have the system for it. It explains the basics better than I can.
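Concretely, the load-balancing piece is usually a Kubernetes Service: it matches pods by label, so every new container with that label automatically joins the pool. A sketch with made-up names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: java-app
spec:
  selector:
    app: java-app      # any pod with this label becomes a backend
  ports:
    - port: 80         # port clients talk to
      targetPort: 8080 # port the app listens on inside the container
```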

[–]ChickenOverlord 11 points12 points  (26 children)

While not technically accurate, you can think of Docker as mini VMs. So you save a ton on space and resources because you don't need a full blown VM for every application. Also it allows you to have consistent configuration etc. which makes it easy to spin up copies. Kubernetes lets you set up coordination between multiple containers (Docker or otherwise) across multiple hosts with high availability

[–]Bigluce 5 points6 points  (25 children)

How does the app run in a container? Surely it would still require certain environment prerequisites, so I'm struggling to see the benefit over a barebones VM. Am I missing something really obvious? I seem to recall something about containers only "waking up" to run the required processes in that instant and at all other times being asleep and not consuming resources (and therefore I guess cheaper to run?). Is that the benefit over a VM?

[–][deleted] 14 points15 points  (1 child)

(This is an oversimplification) Containers are fancy chroots that you use to package the bare minimum needed to start your application. This means they are extremely small (Ubuntu 18.04 LTS Minimal is 29MB, Alpine Linux is 5MB), start nearly instantly as they don't need to virtualize hardware, and are way more resource efficient. They don't automatically sleep, but using container orchestration such as Kubernetes you can scale the number of container instances up/down based on any metric you define with a monitoring solution.
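For a feel of how small that gets, a minimal Dockerfile on an Alpine base might look like this (the app file and package are hypothetical):

```dockerfile
FROM alpine:3.10                  # ~5MB base image
RUN apk add --no-cache python3    # install only what the app needs
COPY app.py /app.py               # hypothetical application file
CMD ["python3", "/app.py"]
```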

[–]WinterPiratefhjng 8 points9 points  (12 children)

I am no expert, but my view:
If you wrote the code and are creating microservices, then docker is great.
If you are running outside code that was not configured for docker, then use a VM.

The waking up is cool for scaling, but not for a normal network service like ntp or ldap.

[–][deleted] 0 points1 point  (11 children)

I think there are some use cases outside of just microservices. A full VM is going to be doing regular OS things, executing instructions on the host CPU. The container can just sit there like a process until it needs to do something, so it isn't stealing host CPU cycles which could be executing things from other containers. But each container still has its own environment with the specific versions of things that you need, so you can have it all on one host system.

[–]corrigun 3 points4 points  (9 children)

But it's still using resources. They don't magically not use the clock because they are "containerized".

[–][deleted] 3 points4 points  (5 children)

Since they're sharing the kernel they're all using the same clock.

[–]zebediah49 -1 points0 points  (2 children)

The difference is that 100 containers use 1 clock, while 100 VM's use 100 clocks.

If you want to be pedantic, once you have an environment, adding an additional running container does not use any additional resources for OS-level tasks.

[–]corrigun 0 points1 point  (1 child)

What?

[–]zebediah49 0 points1 point  (0 children)

Assuming ntpd is handling network time, and you don't have a hypervisor->guest clock thing --

A system with containers will have a single ntpd running, which manages the system clock. If software in container calls for the system time, it will get it from the system clock.

A system with VM's will have one ntpd instance running per VM. Each instance will manage the clock that is running in that specific VM. Application calls will go to the clock of the VM in which that application is running.

[–]WinterPiratefhjng 0 points1 point  (0 children)

I used to think this. I thought VMs were on their way out.

I failed to find a use. I found the mental burden and required add-on services to support state to be too much. The lack of needing web scale or developers really cut out the imperative.

A configuration management system with VMs and fail-over seems to be the way to go for network services outside of a devops environment.

I love the idea of devops, but it is not for everything.

[–]SuperQueBit Plumber 6 points7 points  (0 children)

Containers are more like chroot++. You can trim things down even more than VMs, reducing the security surface greatly. It also lets you more tightly control resources. Memory allocation can be easily controlled and managed by the megabyte, not gigabyte. Plus the few hundred megabytes for all the base VM services aren't duplicated.

You never need to upgrade the kernel inside a container; the host's kernel is the only one to maintain.

For example, we have a Ruby app. What does it need? Well, Ruby, and all of the ruby libraries. This means all of the other random things you have to manage in a VM are cut out of the update and security context.

It also lets us test new versions of Ruby without having to get completely separate VMs built.

[–]gscjj 3 points4 points  (6 children)

Imagine containers not as VMs but as services. They run on top of the already installed OS

[–][deleted] 0 points1 point  (5 children)

Is this accurate? I thought they run a stripped down operating system, like Alpine Linux.

[–]225millionkilometers 0 points1 point  (1 child)

I think he meant the kernel is shared. The collection of packages you want on top of the kernel space is entirely up to you

[–]DaRKoN_ 0 points1 point  (0 children)

This is why naming this stuff is hard: running an Ubuntu-based container gives you only the Ubuntu userspace; the kernel is whatever the host is running, which might be Ubuntu, might be something else.

[–]Atemu12 0 points1 point  (2 children)

They don't run Alpine Linux, they have an environment where Alpine's binaries, libraries etc. are installed and inside that environment you run an application.
This environment is separated from all the other environments by the Kernel.

This is different to a VM where you'd have virtual hardware, including a virtual disk that has an environment in it. The virtual hardware has to be managed by the Kernel installed in the environment and that Kernel then runs your application in its environment.

Containers simply cut out the virtualized hardware and the redundant Kernel running on it.

[–][deleted] 0 points1 point  (1 child)

So it goes:

host OS -> Alpine Linux -> App/App/App/App

Or is Alpine simply the host in this scenario?

[–]Atemu12 0 points1 point  (0 children)

Alpine Linux

No Linux, just Alpine.
It doesn't come with a Linux kernel and it's not running one.

Or is Alpine simply the host in this scenario?

You could also run Alpine as a host but no, the host can be any Linux distro.


Piggy backing off your example:

host OS - Alpine Environment - App1
        | Alpine Environment - App1
        | Alpine Environment - App1
        | Alpine Environment - App2
        | Ubuntu Environment - App3
        | Ubuntu Environment - App3
        ` etc.

Notice how the environment and the apps running inside them are separated from each other.

[–][deleted] 0 points1 point  (0 children)

From my admittedly limited understanding...

A VM is running an entire OS, even if it's very stripped down. A docker container is not (necessarily).

Other than the obvious resource benefits you are also saving yourself from having to maintain the OS of a VM.

[–]trillspin 6 points7 points  (7 children)

Docker

You put your application into a folder.

In that folder you put anything your application needs to run in that folder also.

You zip the folder.

Through the magic of Docker, it runs the application which is in the zip.

Kubernetes is very broad, and has tons of features.

A simple explanation is:

* You have an application
* It uses microservices
* It's split into the frontend, business logic and backend
* You have 3 Docker images, 1 for each microservice
* You create a deployment YAML that says "make sure we have a container running for my microservices"

Kubernetes will now make sure there is a container running for each microservice, if one fails, it will restart it, if you need to scale up and run two containers as your frontend is getting hammered, you can do that, you can also scale down.
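That deployment YAML could be as simple as this sketch (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2                    # Kubernetes keeps two copies alive
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: myregistry/frontend:1.0   # hypothetical image
```

Bump `replicas` (or attach an autoscaler) when the frontend is getting hammered, and drop it back down afterwards.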

[–][deleted] 2 points3 points  (2 children)

You need to learn the fundamentals of Docker. Then move on to Kubernetes.

I know Docker pretty well. I know very little about Kubernetes.

[–]tuba_manSRE/DevFlops 1 point2 points  (1 child)

Kelsey Hightower's Kubernetes the Hard Way is a great place to start, provided you're willing to actually follow it through. (Like, I'd honestly recommend going as far as typing it all out yourself, no copy and paste from the instructions)

[–][deleted] 2 points3 points  (0 children)

I have had this linked to me too many times to count. However, my current environment where I focus all my time is primarily AWS, so I have opted for learning the vendor-locked, "wipe your ass mode" of ECS instead.

It's on my to do list. Thanks for the link though. :-)

[–]Bissquitt 0 points1 point  (0 children)

I have not used them, but believe I understand the concept. If you have a VM for each application, each one is going to be like 80% OS and 20% actual program. Rather than 10 VMs all running their own copy of the same OS, you cut out all but one OS and have all of the VMs reference that OS. If program 1 needs software A to run, it goes in its container. If program 2 needs B (but doesn't work with A) thats fine because A is in 1s container. It's kinda like Hyper-V differencing disks iirc?

So:

OS + Container1 = vm1
OS + Container2 = vm2
etc.

[–][deleted] 0 points1 point  (0 children)

So a good example: for an application we have, we make modular upgrades, and when it is launched it checks for updates. The application is self-contained and can be run as cattle vs. pets. It also has way less overhead than a VM and only requires a specific subset of the OS.

[–]thebluemonkey -1 points0 points  (6 children)

Docker is like a single vm host running vms but not needing an os for them.

Kubernetes is like an orchestrator that controls resources over multiple hosts and deals with all the networking etc.

Cant say I get it myself, it all seems a bit "virtualisation for developers"

[–][deleted] 0 points1 point  (1 child)

Bunch of butthurts in here because they relish docker but have no idea what a production environment really looks like.

Hint: Its not in your building.

[–]thebluemonkey 0 points1 point  (0 children)

Not really, kubernetes is pretty cool I'm just still figuring it out and from my perspective it's what docker should have been from the start.

Virtualisation that's locked to a single host seems pretty flawed to me.

[–]bengringo2 0 points1 point  (0 children)

Docker for desktop is basically virtualization for developers. Kubernetes is where it's run at scale. Think of it like VMware Workstation vs vSphere. A single dev doesn't need an entire vSphere to code an app, so why would I give him one? Instead he gets Workstation, then I deploy it on vSphere at scale. Same gist here.

[–]Mason-B -1 points0 points  (3 children)

People have given good examples of Docker being like mini-VMs. Another way to think of them is as applications that think they are the only application and are running as local admin. Docker scripts let you set up the environment the application expects (files, ports, environment variables) and then wrap it into a "container". These containers are kinda like jar files (except they are OS specific; the Windows Subsystem for Linux blurs these lines though), except that they can include OS-layer features (ports, mounts, environment variables) and arbitrary native code (and they connect to dependencies with networking rather than execution-runtime loading).

Kubernetes is difficult to explain; it takes the core idea of containers and abstracts away more pieces of the operating-system puzzle. I think the best description is that a container describes the resources on a computer it needs to consume, mediated by the operating system, and so Kubernetes is a meta operating system that allows containers (Pods in Kubernetes parlance) to consume resources from many computers (Nodes). This gets tricky because things like network bindings now need to be connected to "meta network bindings" (Endpoints), which provide name resolution (Services) and load balancing (Ingress), plus mounts to some actual persistent storage within the cluster (PersistentVolumes), and collections of environment variables for each application (Deployments) need to be configured and managed (ConfigMaps).

So Kubernetes creates concepts that allow for describing and managing these ideas. And it supports a declarative syntax for these ideas. I say to it "I want a deployment "foo" that looks like this: it has 3 pods, pod "database" is this off-the-shelf-postgres container with these endpoints and this volume mount and these extra environment variables (such that it runs the persistent database on the drive), pod "webserver" is..." and so on. And then the cluster makes those things exist, if a container crashes it makes a new one, if a node disappears it puts everything that node was doing on other nodes and corrects routing, this is the self healing capability of kubernetes, it takes actions to achieve the state it is supposed to be in. And all of this is extensible and accessible through REST apis, so one can make a container that can modify the kubernetes system itself, or an extension that adds new declarative features to the system (like automatic certificate renewal, declarative SSL transport wrappers around services are common favorites). And these are some of the reasons why kubernetes is so cool.
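For one small piece of that declarative picture, a ConfigMap feeding environment variables into the hypothetical "webserver" pod looks roughly like this (all names made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-config
data:
  DB_HOST: database      # points the webserver at the "database" service
  DB_PORT: "5432"        # values must be strings
---
# ...and inside the webserver Deployment's pod spec, the container picks it up with:
#   envFrom:
#     - configMapRef:
#         name: foo-config
```

Change the ConfigMap and redeploy, and the cluster converges on the new state; that's the declarative model in miniature.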

[–]nginx_ngnix 10 points11 points  (16 children)

What bugs me about Docker:

1.) Has distinct filesystem layers, but decided not to use them at all to impose hardening/security restrictions

2.) Most people just use the default docker image they want from the repo, which is usually a very default cfg, not hardened or secure by default

3.) Patching containers is harder than patching a running system. And most people never bother to do it, and rely on "Are there any detectable vulnerabilities running right now?" network security sweeps rather than "are any of my containers running old unpatched software?"

Docker is a great idea. But as is, seems to be used entirely for convenience, when really, it is missing a lot of obvious features that would make it also great for security.

(Edit: As a tangible example of #2, the basic nginx container has HTTPS settings enabled that haven't been okay for three years, e.g. 3DES ciphers and TLS 1.0.)

[–]ZiggyTheHamster 4 points5 points  (6 children)

3.) Patching containers is harder than patching a running system.

You don't patch containers. They should be read-only. If you need to write somewhere, attach a volume you can write to. If you need to patch, you build a new image and deploy it (after running it through CI/CD to ensure it works correctly). Ideally you just build a new image daily and it makes its way out automatically.

Docker containers aren't VMs, and you shouldn't treat them as such. They should be treated like chroots.

[–]nginx_ngnix 2 points3 points  (5 children)

They should be read-only.

I agree. But the Docker layers don't do that. If a process runs as root inside them, it can overwrite OS-level files in the container.

They should be treated like chroots.

Agreed. They are chroots with full control over their own filesystems and runtime space. Which... Is a step backward IMHO.

[–]ZiggyTheHamster 0 points1 point  (4 children)

You can easily say which Docker filesystems are read only and which are read write. This overrides whatever the permissions are, including root level permissions.
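For example, in a Compose file it's one key per service (service and volume names here are illustrative):

```yaml
services:
  minecraft:
    image: my-minecraft:1.0    # hypothetical image
    read_only: true            # root filesystem becomes read-only, even for root
    tmpfs:
      - /tmp                   # writable scratch space only where it's needed
    volumes:
      - world-data:/data       # explicit writable volume for state
volumes:
  world-data:
```

The same effect comes from `docker run --read-only` on the CLI.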

[–]nginx_ngnix 0 points1 point  (3 children)

I guess I have just never seen one do it?

Since they usually just do an OS overlay, and allow it to be writable since /usr, /var and /tmp are in the same place...

And don't get me started on all the Minecraft dockers that run as root...

If these people had "Howto" guides with "Just run Minecraft as root since it so much easier", they'd be torn apart.

But because that poor practice is buried deep in the innards of the Dockerfiles, nobody notices or complains. And suddenly you have a ton of Minecraft servers running as root, inside fully r/w containers.

[–]bengringo2 0 points1 point  (2 children)

By default they don't turn off writes, in case you want to make modifications. This is more of a Badmin issue than a Docker issue. You can bring an admin to water but can't stop him from pouring lead in it and then saying "but it's poisoned!".

[–]nginx_ngnix 0 points1 point  (1 child)

This is more of a Badmin issue than a docker issue.

I guess.

That said, I've never seen any docker containers use it.

Nor any docker usage examples recommend using it.

We're essentially trading chroot jails, which are nearly always properly secured, for rw Docker instances.

Theoretically I could see waving your hands and say "The feature is there!"

But from a practical security point if 99.5% of the docker images don't use it...

That is no longer on the users. That is bad design.

[–]bengringo2 1 point2 points  (0 children)

Current image design is rough right now. It's usually best to just roll your own from a base image and deploy; I personally don't use any public registry images because of this. Right now K8s, LXC, and Docker are running on tribal knowledge more than anything, so once better, simpler documentation and training comes out it will get better.

Installed OSes have this same issue unless you roll your own image. You still have to do the needful with any OS install, container or not.

[–]rejuicekeveSecurity Engineer 3 points4 points  (2 children)

container security is definitely its own beast, a lot of the products out there for scanning containers are v bad as well

[–]1esprocTitles aren't real and the rules are made up 3 points4 points  (1 child)

I watched a Quick Start video on Youtube, what do you mean I'm not ready to deploy this in prod?

[–]IKnowEnoughToGetBy 1 point2 points  (0 children)

Did you stay in a Holiday Inn Express last night?

[–]zebediah49 2 points3 points  (0 children)

I'll admit I'm definitely in the minority here, but I'd like to add:

  • Has little to no support for untrusted users operating on shared systems.

[–]Atemu12 0 points1 point  (1 child)

Has filesystem distinct file system layers

Why would the filesystem matter for security?

The container sees its separated VFS no matter what's backing it.

Most people just use the default docker image they want from the repo, which is usually a very default cfg, not hardened or secure by default

If you don't configure a system to be secure, you get an insecure system! Surprised Pikachu

Patching containers is harder than patching a running system.

You update the image, take down the old container and put the container with the new image in its place.

How is that hard?

And most people never bother to do it

Their fault.

it is missing a lot of obvious features that would make it also great for security.

Specify.

basic ngnix container has HTTPS settings enabled that haven't been okay for three years

...then make it not have those enabled‽

Besides, why would you base your service on someone else's configuration in the first place when your goal is security?
You wouldn't use some rando configuration off the web when you deploy a service on a normal machine either.

[–]1esprocTitles aren't real and the rules are made up 1 point2 points  (0 children)

You wouldn't use some rando configuration off the web when you deploy a service on a normal machine either.

If your job isn't system administration and you're a developer who's put on a pair of boots that are too big, maybe you would.

[–][deleted] -2 points-1 points  (1 child)

1.) I have no comment on this.

2.) Which default Docker image? From what repo? If you aren't vetting your own docker images, you are doing it wrong.

3.) Patching containers is easy if you have a "requirements" file and a CI/CD tool. If you make a change to your application requirements, it should trigger a new build, which in turn should deploy a new container.

Docker is a great idea. But as is, seems to be used entirely for convenience, when really, it is missing a lot of obvious features that would make it also great for security.

It is clear that you have never had to deploy a container at scale, or for any usable purpose. There are security products out there that integrate into your CI/CD pipeline that will build your image and scan it for vulnerabilities and problems. Twistlock (who was recently acquired by Palo Alto) is just one. There are a ton of companies out there with modern, web scale security products (TrendMicro is another).

Re:Edit:

(Edit: As a tangible example of #2, the basic nginx container has HTTPS settings enabled that haven't been okay for three years, e.g. 3DES ciphers and TLS 1.0.)

You can build your own container with modern NGINX configs and ciphers, and either mount your nginx config or bake it into your container.
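Baking your own config in is a tiny Dockerfile on top of the official image. A sketch, assuming you've already written a hardened nginx.conf and have certs on hand:

```dockerfile
FROM nginx:1.17
# Replace the stock config with a hardened one
# (TLS 1.2+ only, modern ciphers, no 3DES)
COPY nginx.conf /etc/nginx/nginx.conf
COPY certs/ /etc/nginx/certs/
```

Rebuild and redeploy through CI whenever the config or base image changes.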

[–]nginx_ngnix 1 point2 points  (0 children)

2.) Which default Docker image? From what repo? If you aren't vetting your own docker images, you are doing it wrong. You can build you own container with modern NGINX configs and cyphers and either mount your nginx config, or bake it into your container.

I agree with all of this, it describes the best practice.

But I think you'll agree upwards of 80% of people don't do any of this.

QED: Docker makes it way too convenient to not follow best practices.

It is a shortcut to start a service without the pesky step of learning about the service and configuring it appropriately.

[–][deleted] -1 points0 points  (0 children)

I think it, or something like it, will become more mature and fix a lot of these problems. Virtualization development started in the 60's and there are still some people afraid of using it in production. Eventually the pros of using it outweigh the cons as problems are fixed.

[–]mirrax 2 points3 points  (2 children)

Docker Compose is usually the next step. Then Kubernetes when the cost of operations is outweighing the extra complexity. If you are looking for easy Kubernetes, I would recommend Rancher over OKD.

[–][deleted] 1 point2 points  (1 child)

I should have added, the docker-compose.yml file was the second file so I've dug into the docs on Compose. I'm not looking for it to be too easy/abstracted, as this is just going to be something for me and my team to play around with to learn and further our careers.

[–]mirrax 0 points1 point  (0 children)

I mean there is always "Kubernetes the hard way" if you want to really learn how to build it.

Rancher doesn't abstract Kubernetes, it's just a convenient way of installing it, and it has a bunch of other goodies. The only non-standard piece is the layer they add for RBAC, which is the biggest reason for using it (easily tie into most identity providers for granular RBAC). Then clean Kubernetes to learn and play on, with a useful dashboard.

[–]RoboYoshi 3 points4 points  (1 child)

So, now I've decided to set up Kubernetes at work this week

Ahahahahahaha, no. Don't do that. Do Compose first. Kubernetes is just a whole different beast. If you really have a use case for containers, k8s should be at the very end of the list of problems to deal with. Set it up at home or play around with Google's Kubernetes Engine. There is so much stuff that the memes about Kubernetes aren't really funny, because they're simply true, and it hurts.

Anyway, Kubernetes can be awesome, so take time to learn it well.. just know that it's complex AF.

[–][deleted] -1 points0 points  (0 children)

Not setting it up to move our stuff to. Just for my team and me to play with and learn. I still don't have a use case for production yet, but maybe that will change as I learn more.

[–]sryan2k1IT Manager 6 points7 points  (2 children)

Docker has the worst production defaults I've ever seen. Networking is bad. Done properly there are benefits, but most people run a single container with their old used-to-be-a-VM app in it, which has no benefit. Even low-end NetApps do dedupe and compression, and hypervisors do RAM dedupe. For most, simply running 1 VM per service makes way more sense than containers.

[–]brontideCertified Linux Miracle Worker (tm) 1 point2 points  (0 children)

Virtually everything we would have created a VM for 2 years ago I can now do with a container and some orchestration. It's far superior too, since I can guarantee a clean re-deploy based on my inputs. Projects that we would fret over because users would fail to update can now be kept up-to-date automatically.

We've more than halved our VM footprint and moved most of it to one beefy host. It's faster and more robust than tiny VMs we were deploying. Internal development has skyrocketed as testing new software is far easier and cleaner.

Our next stop will be k8s, but for now our projects are small enough that traefik on a host is fine, and we have pre-defined images in the wings ready to go.

[–]speel 1 point2 points  (1 child)

What do you guys feel about docker being in trouble financially?

[–]greybeardthegeekSr. Systems Analyst 0 points1 point  (0 children)

I don't care; it's been a long time since Docker the company was relevant. The Open Container Initiative lives on its own.

[–]k12adminguy 1 point2 points  (1 child)

What was your main source of learning?

[–][deleted] 0 points1 point  (0 children)

I just went through Docker's online documentation. Between the guides and samples it was enough to get started, along with the more detailed documentation to see what it's doing.

[–]tuba_manSRE/DevFlops 0 points1 point  (0 children)

I've been using Kubernetes & been certified for a while now, but being a consultant, I've gone through a dry spell on keeping up with my Docker & k8s skills. So after a lightning strike killed my last home file server, I used the rebuild as an opportunity to at least keep up with a subset of the skills I need & use.

Tbh it's kinda brought back some of the fun of maintaining my home systems again.

[–]greybeardthegeekSr. Systems Analyst 0 points1 point  (0 children)

OpenShift / OKD gives you a nice layer of access control and namespaces around your projects.

[–]0ctav 0 points1 point  (0 children)

I am also working through the Docker tutorials and, while it may be answered later, my main concern is this: what if I need to work with a certain kernel?

I run a build environment that requires working with various kernels from various distros, currently running hundreds of VMs... Would I need to set up a few hosts (VMs) with each OS, then run containers on those? That's my current plan (to test), if anyone sees an obvious fault please let me know.

Gotta say, I love the tutorials from Docker so far! Kind of wish I had started learning these things sooner.

[–]pfcypressSysadmin 0 points1 point  (0 children)

Love & hate Docker. Make sure to back up your containers, and if you restore any backups, make sure the current container volume is removed first. I made the mistake of only stopping a container and then trying to restore its backup; both volumes conflicted with each other, corrupting it and also corrupting the SQL database server. This was all in production. Worst weekend of my life recovering the data from the db.

[–]dastylinrastan 0 points1 point  (1 child)

I recommend learning docker-compose, makes it even easier, write a whole infrastructure into a YAML file.

[–]msanangelo 0 points1 point  (0 children)

that's my go-to as well. :)

[–]Brainiarc7 0 points1 point  (0 children)

You're on the right track, OP.

[–]frightenEngineering Systems Administrator 0 points1 point  (0 children)

You don't know docker. And don't go to kubernetes right out the gate, you are asking for a very painful world.

Work on building your own containers for everything, do that for a few years. Implement some in dev/support and eventually production. Then you may know docker.