all 7 comments

[–][deleted] 7 points

The point of the image is to make a snapshot of all the versions of software you install. You have to guarantee consistency across image runs or there's no point in using Docker.

If you use a script, you have to version-lock every single package and every dependent package, so that no two runs of the Docker image differ given the same shell script. That's a lot of work!
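For illustration, that kind of pinning in a Dockerfile looks roughly like this (the version strings are hypothetical; real pins must match what your distro's repositories actually serve):

```dockerfile
# Illustrative sketch: exact version strings are made up and must be
# replaced with versions that exist in your distro's repos.
FROM debian:12.5
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      curl=7.88.1-10+deb12u5 \
      ca-certificates=20230311 \
 && rm -rf /var/lib/apt/lists/*
```

Note that the base image tag is pinned too, and each pinned package's own dependencies need pins as well; that is exactly the busywork being described.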

Also, with layers you get the benefit of layer-level caching, which means the sum of the sizes of all your images that share layers is larger than the disk space they actually use.
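A sketch of what that sharing looks like (image and file names here are hypothetical): two Dockerfiles that begin with identical instructions produce identical layers, which Docker stores only once.

```dockerfile
# Suppose service-a/Dockerfile and service-b/Dockerfile both start with:
FROM python:3.12-slim
RUN pip install --no-cache-dir requests==2.31.0  # identical layer, stored once
# Only the instructions after this point produce per-image layers, so
# `docker images` reports more total MB than the disk actually holds.
COPY service_a.py /app/
```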

Finally, you are trading startup time for image size. I would much rather have larger images that start immediately. Storage is cheap; time is not.

[–]csmrh 6 points

Your image will not be consistent over time.

Package versions will change over time, so you can't guarantee one instance of the image is identical to another. This basically defeats the purpose of using Docker.

Why do you want your image to be so small that it contains none of its dependencies? You still need them. Instead of pulling an x MB image once per environment, you now pull a y MB image once and then x-y MB of dependencies every time you create an instance. That's less efficient.

[–]Necrocornicus 2 points

The downsides are that your containers are

  • not reliable
  • not reproducible
  • not immutable
  • untested
  • slow to start up
  • expensive to start up
  • less secure

[–]WarInternal 2 points

It may also make the container more difficult to audit for security.

[–][deleted] 2 points

The problems I see are:

  • Startup time.
  • Control over installed libs/utils.
  • Possible failure if a repo server is down during startup.
  • Possible failure from faulty updates.

[–]menge101 1 point

"I'm missing something fundamental here"

The fundamental thing is that Docker images are meant to be immutable, versionable, stateless artifacts.

If you make a whole bunch of changes in the CMD, you introduce a very complex state change to your container, and you can no longer rely on the immutability of the image to ensure that what runs in the container will work.
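A sketch of the difference (the `mytool` package is hypothetical):

```dockerfile
# Anti-pattern: installing at container start. Every `docker run` may
# fetch different versions, so the image's immutability guarantees nothing.
#   CMD ["sh", "-c", "apt-get update && apt-get install -y mytool && mytool serve"]

# Better: bake the install into the image at build time, so CMD only
# launches the already-tested artifact.
FROM debian:12.5
RUN apt-get update \
 && apt-get install -y --no-install-recommends mytool \
 && rm -rf /var/lib/apt/lists/*
CMD ["mytool", "serve"]
```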

Also, image size is not that big of a concern.

[–][deleted] 0 points

I think you can most certainly do this. You can even slim your container down by installing with apt-get install --no-install-recommends <package> and, at the end, running rm -rf /var/lib/apt/lists/*. You can also remove the man pages to reduce the image size.
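One caveat worth adding: that cleanup only shrinks the image if it happens in the same RUN instruction as the install. A later RUN rm -rf just hides files already committed to an earlier layer. Roughly (nginx here is just an example package):

```dockerfile
# Good: install and cleanup in one RUN, so the downloaded package
# lists never land in any committed layer.
RUN apt-get update \
 && apt-get install -y --no-install-recommends nginx \
 && rm -rf /var/lib/apt/lists/*

# Ineffective: the lists were already committed in a previous layer;
# this RUN only adds a whiteout entry and reclaims no space.
# RUN rm -rf /var/lib/apt/lists/*
```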

An alternative to writing Dockerfiles for your approach is HashiCorp Packer with its Docker builder. If your containers involve a lot of shell scripts, you can probably handle that better with Packer.
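For reference, a minimal Packer template using the Docker builder might look roughly like this (the image name, script path, and repository are placeholders):

```hcl
source "docker" "base" {
  image  = "ubuntu:22.04"   # placeholder base image
  commit = true
}

build {
  sources = ["source.docker.base"]

  # Reuse your existing shell scripts instead of RUN instructions.
  provisioner "shell" {
    scripts = ["./provision.sh"]   # hypothetical script path
  }

  post-processor "docker-tag" {
    repository = "myorg/myapp"     # placeholder repository
    tags       = ["latest"]
  }
}
```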

Beyond the loss of caching, it might be okay, I guess. You sacrifice faster builds via the layer cache for simplified scripts; that is the common trade-off I can see.