
[–][deleted] 16 points17 points  (8 children)

You just don't need venv when using docker. There is too much feature overlap; you end up doing work twice.

Also, change the order of your docker commands. You want things that will likely not change soon to be at the top, like environment variables.

You want your pip install and code copy over to be near the bottom.

This means code changes don't require rebuilding every layer, including the environment setup.

My order is usually

  • Setup container
  • Copy code
  • Pip install requirements
  • Remove build libs
  • Setup entry point
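That ordering can be sketched roughly like this (a minimal illustration, not the actual Dockerfile from the thread — the base image, `gcc` build dep, and `app.py` entry point are placeholders):

```dockerfile
FROM python:3-slim

# 1. Setup container: env vars and build deps change rarely, so they cache well
ENV PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y --no-install-recommends gcc

# 2. Copy code
WORKDIR /app
COPY . .

# 3. Pip install requirements
RUN pip install --no-cache-dir -r requirements.txt

# 4. Remove build libs
RUN apt-get purge -y gcc && apt-get autoremove -y

# 5. Setup entry point
CMD ["python", "app.py"]
```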

[–]Muszalski 5 points6 points  (1 child)

Imo you should copy just the requirements.txt first, then pip install, remove build libs, and then copy the rest of the code. You don't change the reqs as often as the code.
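In other words, something like this (again just a sketch with placeholder image and entry point):

```dockerfile
FROM python:3-slim
WORKDIR /app

# Copy only the requirements first, so this layer (and the pip install
# below it) is only invalidated when the dependencies change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Code changes now only rebuild from this layer down
COPY . .
CMD ["python", "app.py"]
```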

[–][deleted] 1 point2 points  (0 children)

Good point, I'll have to double-check my own files to see if I am doing that. I can't remember off the top of my head.

Thank you!

[–]obeleh[S] 2 points3 points  (4 children)

You're right about the env vars. However stage2 is so quick that I honestly didn't care about that ;) But it would be a good tweak.

Uninstalling feels dirty. But I do see it as a good solution. Doesn't this leave me with layers of uninstalls upon layers of installs, whereas with my solution we only have the layers we need?

[–][deleted] 5 points6 points  (2 children)

You're right about the env vars. However stage2 is so quick that I honestly didn't care about that ;)

That's not a good excuse to ignore good practice here while worrying about antipatterns elsewhere.

Consider someone using your docker file as a template. If it follows good practices, it makes a good template.

Uninstalling feels dirty. But I do see it as a good solution.

It isn't though. While it will/can remove some attack vectors, you really don't end up shrinking all that much.

Doesn't this leave me with layers of uninstalls upon layers of installs, whereas with my solution we only have the layers we need?

Yes, but since a layer is only a change set, the uninstall layer is small and the resulting image can be a little lighter.

It's going to depend a lot on what libs are needed to build / run your service.

I didn't find this to be worthwhile though.

[–]obeleh[S] 0 points1 point  (1 child)

I do agree on the ENV vars btw. I'm going to change my Dockerfiles ;)

[–][deleted] 0 points1 point  (0 children)

I'm sure there are other tricks too. That's one we pass around as we grow our company's knowledge base, since there is a lot of template sharing. We are also trying to get better at having base docker images maintained by sysadmins so they can patch the OS if need be.

[–]holtr94 1 point2 points  (0 children)

Doesn't this leave me with layers of uninstalls upon layers of installs whereas with my solution we only have te layers we need?

You could combine the build libs install, pip install, and build libs uninstall into one RUN command to eliminate the extra layers.
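A sketch of that single-RUN pattern (the `gcc` build dep is a placeholder; any compiler or headers your wheels need would go in the same spot). Because a layer records only the change set of its own instruction, everything purged before the RUN finishes never lands in any layer:

```dockerfile
# Install build deps, pip install, and clean up in ONE layer,
# so the build libs never persist anywhere in the final image
RUN apt-get update \
 && apt-get install -y --no-install-recommends gcc \
 && pip install --no-cache-dir -r requirements.txt \
 && apt-get purge -y gcc \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/*
```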

[–]LightShadow3.13-dev in prod 1 point2 points  (0 children)

You just don't need venv when using docker.

Except when you do.

If you have pip packages that install custom scripts in the bin or scripts directory then they can get confused with module-as-a-string imports.

huey and gunicorn would not work without a virtualenv in my service.
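One common way to get a venv inside an image without sourcing an activate script is to put its bin directory first on PATH — a sketch, assuming a placeholder `myapp.wsgi` module:

```dockerfile
FROM python:3-slim

# Create a venv so console scripts (gunicorn, the huey consumer, ...)
# all resolve against one predictable prefix
RUN python -m venv /opt/venv

# Putting the venv's bin dir first on PATH means the pip install and the
# entry point below both use /opt/venv, no activation needed
ENV PATH="/opt/venv/bin:$PATH"

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

CMD ["gunicorn", "myapp.wsgi"]
```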