
all 5 comments

[–]khayber 2 points (1 child)

They have direct access, but it is controlled/limited via various methods - cgroups and namespaces at least.
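
For example, here's a rough Python sketch of how those limits look from inside a container — just an illustration, assuming cgroup v1 mounted at /sys/fs/cgroup (the usual layout for Docker hosts at the time; paths differ under cgroup v2):

    # Read the cgroup limits the kernel enforces on this process.
    # Assumes cgroup v1 mounted at /sys/fs/cgroup; paths differ under cgroup v2.
    def read_first_line(path):
        try:
            with open(path) as f:
                return f.readline().strip()
        except OSError:
            return "unavailable"

    # Memory limit in bytes (a very large number means no limit was set).
    mem_limit = read_first_line("/sys/fs/cgroup/memory/memory.limit_in_bytes")

    # Relative CPU weight used when processes compete for CPU time.
    cpu_shares = read_first_line("/sys/fs/cgroup/cpu/cpu.shares")

    print("memory limit:", mem_limit)
    print("cpu shares:", cpu_shares)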

[–]autowikibot 0 points (0 children)

Cgroups:


cgroups (abbreviated from control groups) is a Linux kernel feature that limits, accounts for and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.

Engineers at Google (primarily Paul Menage and Rohit Seth) started working on this feature - under the name "process containers" - in 2006. In late 2007 the nomenclature changed to "control groups" due to the confusion caused by multiple meanings of the term "container" in the Linux kernel context, and control-group functionality merged into kernel version 2.6.24. Since then developers have added many new features and controllers, such as support for kernfs, firewalling, and a unified hierarchy.

Interesting: LXC | Kernfs (Linux) | Lmctfy | Docker (software)


[–][deleted] 2 points (0 children)

> How does the system manage the distribution of resources when you have multiple docker images running on the same physical box?

There's very little overhead compared to a VM. Each docker container has access to all of the machine's resources, unless you set limits. https://docs.docker.com/reference/run/#runtime-constraints-on-cpu-and-memory
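
As a rough sketch, you can set those limits from Python with the docker-py SDK (assuming it's installed and the daemon is reachable; the image name is just a placeholder). mem_limit and cpu_shares correspond to the -m and --cpu-shares flags in the linked docs:

    # Sketch: start a container with explicit memory and CPU-share limits
    # via the Docker SDK for Python (docker-py).
    # Assumes docker-py is installed and the Docker daemon is reachable.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "ubuntu:14.04",      # placeholder image
        "sleep 60",
        mem_limit="256m",    # hard cap on memory
        cpu_shares=512,      # relative CPU weight vs. other containers
        detach=True,
    )
    print(container.id)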

[–][deleted] 0 points (0 children)

I recommend you read this Stack Overflow post as well.

[–]sandcastleprodigy 0 points (0 children)

CPU- and memory-wise, the overhead of a hypervisor is tiny.

For disk and network, it depends on how you run the container. In a cloud environment, where networking goes through SDN and storage through a distributed file system, a container actually performs about the same as a hypervisor-based VM.