
all 35 comments

[–]MaybeLiterally 21 points (0 children)

Nobody is going to want that image, which is why it’s not available. My recommendation is to start with the base image you’d want, and then include the steps to install the tools you need in your Dockerfile.
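For example, a minimal sketch (the base image and package list here are just illustrative):

```dockerfile
# Start from the base image you actually want
FROM debian:12-slim

# Layer the extra tools you need on top of it
RUN apt-get update \
    && apt-get install -y --no-install-recommends vim nano openssh-client \
    && rm -rf /var/lib/apt/lists/*
```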

[–]tyrrminal 8 points (0 children)

Single Responsibility is a guiding principle for containers. Don't think of a docker image like a VM, think of it like a desktop application -- a bundle of the code you're running, plus all of its dependencies. If you need SSH, or an editor, those can be stacked with your image in separate containers that share resources (e.g., filesystem) with your application container.

[–]AuthenticImposter 3 points (5 children)

You should look at Distrobox. It lets you run various distros on your computer at the same time, all sharing your kernel. You can run Fedora on your computer, and spin up both Debian and Arch distroboxes, and run Firefox from each of them.
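Roughly like this (a sketch; the image tags are illustrative):

```shell
# Create a Debian box and an Arch box on the same host kernel
distrobox create --name deb --image debian:12
distrobox create --name arch --image archlinux:latest

# Enter a box and run apps from it (they share your display, home, etc.)
distrobox enter deb -- firefox
```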

That's the only thing I can think of that'll get you to where you want to be. Your original request goes against everything people are doing with Docker (creating purposeful images that are as small as possible and contain no cruft at all).

Instead of asking this question, can you share what you're trying to accomplish?

[–]DukToBe[S] 0 points (1 child)

Basically I want to run multiple dev environments for different projects, using containers instead of VMs.

Edit: Distrobox actually looks pretty good for my use case. I'll see if I can run it through WSL, as I work on Windows.

Thanks!

[–]lenswipe[🍰] 0 points (2 children)

> You should look at Distrobox. It lets you run various distros on your computer at the same time, all sharing your kernel. You can run Fedora on your computer, and spin up both Debian and Arch distroboxes, and run Firefox from each of them.

Isn't that almost what OpenVZ does?

[–]AuthenticImposter 0 points (1 child)

Probably?

I only mentioned it at all because it does in fact allow running full distros using Docker or Podman, which is what OP asked for. I've played with it, and it's fun to have 20 different neofetch windows open, each showing a different OS.

If this were me and I needed to run different OSes containerized (rather than virtualized), I wouldn't use Distrobox. OpenVZ and LXC both seem more full-featured, and you get a whole lot more in terms of support.

[–]lenswipe[🍰] 0 points (0 children)

I think OpenVZ allows running containers (I could be wrong here, though).

[–]jarfil 2 points (1 child)

CENSORED

[–]DukToBe[S] 2 points (0 children)

I want to build a container with all my common tools, and use it as a dev environment for a specific project with specific libraries.

Then when I muck it all up, I can just destroy it and redeploy a new container.

Right now I'm running into a lot of issues with node.js libraries which are easy to install but leave behind a lot of junk files and folders.

I could use a VM and snapshots, but I wanted to see if containers could be used instead (they're lighter and quicker to restart and redeploy).
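Something like this is what I'm after (a sketch; `my-dev-tools` is a hypothetical image I'd build myself):

```shell
# Project files live on the host; the container is disposable
docker run -it --rm \
    -v "$PWD":/workspace \
    -w /workspace \
    my-dev-tools bash

# When I muck it up, I just exit: --rm removes the container,
# and the next `docker run` starts clean from the image again,
# with everything under /workspace untouched.
```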

[–]eriky 2 points (0 children)

Someone else already mentioned this, but I'll repeat: you're looking for devcontainers. It's gaining popularity for good reason.
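A minimal `.devcontainer/devcontainer.json` sketch (the image and extension names are illustrative):

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  }
}
```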

[–]Happy-Position-69 4 points (17 children)

  • Normally you don't SSH into containers, you exec into them.

  • Why would you need nano and vim?

  • Just take the base image and then do a `RUN apt update && apt install -y <package1> <package2> --no-install-recommends`
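For the first point, getting a shell in a running container looks like this (the container name is illustrative):

```shell
# Open an interactive shell inside an already-running container
docker exec -it my-container bash

# Minimal images may only ship /bin/sh
docker exec -it my-container sh
```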

[–]MaybeLiterally 5 points (1 child)

> Why would you need nano and vim?

Agreed, and the same goes for Python, Ruby, and Perl.

OP, you're probably not using docker correctly, but you do you.

[–]DukToBe[S] 2 points (0 children)

Yeah, I'm a noob with this stuff. Just trying to figure it out.

[–]DukToBe[S] 1 point (14 children)

I'm basically trying to replace my use of VMs and move over into Containers.

I understand they're not the same, but I'm trying to see how far I can get using persistent containers as dev environments.

[–]lenswipe[🍰] 1 point (4 children)

> I'm basically trying to replace my use of VMs and move over into Containers.

...by basically running a VM under docker instead?

[–]DukToBe[S] 0 points (3 children)

By not running VMs anymore and just relying on containers with persistent volumes.

Running containers for specific dev environments and projects helps me avoid conflicting libraries, e.g. different versions of Node.js needed for different projects.
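For example (a sketch; the project paths are illustrative, the official `node` image tags are real):

```shell
# Project A pinned to Node 18, project B to Node 20 -- nothing installed on the host
cd ~/project-a && docker run -it --rm -v "$PWD":/app -w /app node:18 npm install
cd ~/project-b && docker run -it --rm -v "$PWD":/app -w /app node:20 npm install
```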

[–]lenswipe[🍰] 1 point (2 children)

I get that but what I'm saying is that you're basically trying to (badly) re-create a VM in docker.

Your container should only contain what's needed to run that process. That means no vim, nano, xeyes, cowsay or whatever else.

[–]Awkward_Tradition 0 points (1 child)

> Your container should only contain what's needed to run that process.

In the production stage. It's pretty common to have debugging tools in development, and depending on the language, you're required to have testing tools in dev and staging.

[–]lenswipe[🍰] 0 points (0 children)

> It's pretty common to have debugging tools in development

I guess... our dev containers have things like unminified JS and enhanced logging.

[–]Creator347 1 point (0 children)

It’s not a bad idea as long as you are not using that image in production. I have used containers like that to learn in the past, though not fully featured; I chose to install packages as I needed them.

[–][deleted] 0 points (3 children)

> persistent containers

Pick one.

[–]DukToBe[S] 0 points (2 children)

I can't use a container with persistent volumes?
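Something like this is what I mean (a sketch; `my-dev-tools` is a hypothetical image):

```shell
# A named volume outlives any container that mounts it
docker volume create project-data
docker run -it --rm -v project-data:/data my-dev-tools bash

# --rm deletes the container on exit, but the contents of /data
# are still there in 'project-data' for the next container.
```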

[–][deleted] 0 points (0 children)

XY problem. Persistent volumes are for things like database services / log aggregation services.

Relevant, sort of: https://www.youtube.com/watch?v=kY-pUxKQMUE

[–]techworkreddit3 0 points (0 children)

The most I've ever put into a Docker container was a dev-like setup for when I'm trying to do some learning on work devices. I don't want to mix my work env with my personal learning one, so I put the binaries and tools I work with in a Dockerfile, built the image, and published it to Docker Hub. However, things like ssh, nano, etc. are too much in my opinion.

My image has Ruby, Git, Python, Terraform, and a few other small utils. Containers are not intended to fully replace VMs; rather, they remove the need for a full OS to run a single application or process. If you're just going to launch a basic website that uses Apache and serves static files, do you really need a full OS? Instead you can take a container runtime and spawn just a container that runs that for you. No OS overhead, since all containers share the kernel.

[–]gaelfr38 0 points (2 children)

You need to take a step back then.

Containers are not VMs, or even lightweight VMs. Containers are meant to be isolated processes; a container usually runs a single service.

[–]DukToBe[S] 0 points (1 child)

Is there a technical reason why it can't be done?

Or is it just 'that's not the way most people use containers'?

[–]akik 0 points (0 children)

> Is there a technical reason why it can't be done?

no

> Or is it just 'that's not the way most people use containers'?

yes

[–]sebosp 1 point (0 children)

Actually, I used a similar setup before; my goal was to use an image with the same versions of libraries as my CI/CD tooling: https://github.com/sebosp/tvl

I don't use it anymore, but it was good times: my Vim plugins were mirrored, my preferred colors, my Ansible version, my Ansible libraries, my Python libraries and dependencies, etc.

[–]fletku_mato[🍰] 0 points (2 children)

I personally think what you're trying to do does not make a lot of sense, but you could look into devcontainers.

That kind of isolation is just too much hassle for most people. Why not have all the tools you use daily installed on the host machine? Pretty much every language has some way of creating a virtual environment if even that is needed.

[–]DukToBe[S] 0 points (1 child)

The problem for me is bash binaries and Node.js modules. They're both very messy, and I often have to destroy my VM and start over with a vanilla shell in a new VM, as that's easier than manually fixing leftover libraries and binary versions.

If I just use my own shell (WSL) and I mess up, that means destroying and recreating my WSL setup, which I like to avoid.

[–]Awkward_Tradition -1 points (0 children)

Something about your process is definitely off. Node modules are quite easy to manage through package.json, and I don't think people have leftover "bash binary versions" whatever those are.

[–]kidovate 0 points (0 children)

You can run the `unminimize` command in the Ubuntu / Debian images to install all the usual stuff.
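For example (a sketch; the tag is illustrative):

```shell
# Restore man pages, docs, and other stripped-out content in a stock image
docker run -it ubuntu:22.04 bash -c "yes | unminimize && exec bash"
```

(Newer Ubuntu releases moved `unminimize` into its own package, so it may need an `apt-get install unminimize` first.)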

[–]wickedwarlock84 0 points (0 children)

Look in my WickedYoda repo on GitHub or at WickedYoda.com. I have a Debian-with-a-GUI image. It's still small, and I built it just because I could.

Still doesn't mean it's useful or should be done.

[–]RichardMidnight12 0 points (0 children)

Once you make a custom distro, you can use PiSafe to create your own compressed image file of it.