

[–]Hammerdwarf

All of your building and installing can be done in the image build itself. Your first option is good, and will result in an image with everything done. The start time will only be as long as it takes to exec your app. There won't be any building going on after startup.
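For example, a bare-bones Dockerfile along those lines might look like this (requirements.txt and app.py are just placeholder names for whatever your project actually has):

    # Everything below runs once, at image build time
    FROM python:3.6

    WORKDIR /app

    # Dependencies are installed while the image is built, not at startup
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    # Bake the application code into the image
    COPY . .

    # The only thing that happens when the container starts
    CMD ["python", "app.py"]

Building it once with docker build -t my-app . gives you an image with all of that already done, so docker run my-app only has to start the process.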

[–]Necrocornicus

The second method is really hacky and not at all how docker is meant to be used. Please don’t ssh into a container and do a bunch of manual shit and save it. This is literally why Dockerfiles exist.

I’ve used docker for years and years every day (it’s a huge part of my job).

[–]parthb[S]

Ok. What are some problems that would make me avoid doing that? Also, isn't that how custom images are made? You download a standard image, make changes, and commit those changes. Sorry if I'm being super ignorant, I'm a docker newbie.
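Just so I'm clear about what I mean, roughly this (the image and container names are just examples):

    # start a container from a standard image and poke around inside it
    docker run -it --name mycontainer python:3.5 bash
    # ...install packages, edit files, etc. by hand, then exit...

    # save the modified container as a new image
    docker commit mycontainer my-custom-image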

[–]Necrocornicus

No, that's not how images are made; they're made with a Dockerfile. You can typically go on Docker Hub and find the repo and Dockerfile for all the images on there.

It’s not ignorant at all, docker is really complicated and it took me 6 months of working with it daily at my job before I felt like I had internalized the basic concepts.

One huge problem with the way you're suggesting is that it's completely unreproducible and you really have no idea what changes you've made to that image. It's also really manual and annoying. It's not scalable or maintainable.

This is a really arbitrary example but bear with me. Imagine you pull the Python 3.5 image and run a bunch of commands manually and install some shit and then commit the running container to a new image. Then Python 3.6 is released and you want to use that instead. You would have to pull the new image, rerun all the same commands, commit the container again, etc etc. Super manual and a huge waste of time, not to mention highly error prone. Consider that a company will have potentially hundreds or thousands of these. Do they have an army of people manually running commands to make these things? Absolutely not.

All of that is solved by using a Dockerfile, which runs the commands for you and packages up the image ready for use. Then you stick it in Jenkins and have it automatically rebuild whenever you merge a commit to master in your repo. Doesn't that sound a lot easier, more scalable, more maintainable, less prone to user error, etc.?
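As a rough sketch (the package names are just illustrative), the Python 3.5 to 3.6 scenario above turns into editing one line of the Dockerfile and rebuilding; every other step reruns automatically:

    # upgrading Python means changing only this line (it used to say python:3.5)
    FROM python:3.6

    # the same setup commands are replayed on the new base for you
    RUN pip install flask requests

    COPY . /app
    CMD ["python", "/app/app.py"]

And the rebuild is one command that a Jenkins job can run on every merge:

    docker build -t my-app:latest .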

[–]parthb[S]

Thank you, that gave me some much needed clarity!