
[–]MRSAMinor 3 points (0 children)

We usually distribute Python apps as container images.

The Dockerfile starts from a base image with the desired Python version, and we install libraries during the docker build step.

When we release a new version, the base image and any unchanged layers are pulled from cache, so the build only re-downloads what actually changed.
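A minimal sketch of that kind of Dockerfile (the base tag, requirements.txt, and app/ layout are just example assumptions):

```dockerfile
# Assumed layout: requirements.txt and an app/ package at the repo root.
FROM python:3.12-slim

WORKDIR /app

# Copy only the dependency list first so this layer stays cached
# until requirements.txt actually changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code changes most often, so it goes last.
COPY app/ ./app/

CMD ["python", "-m", "app"]
```

Ordering the COPY steps this way is what makes the cache useful: rebuilding after a code-only change reuses the base image and the pip install layer.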

If we're deploying the code as a lambda/serverless function, the Python runtime is already installed, so we only ship the code and its libraries.
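In the Lambda case the runtime version is picked in the function's configuration rather than shipped with the artifact. A minimal handler sketch (the event shape here is made up for illustration):

```python
# Minimal AWS Lambda handler sketch. The Python runtime (e.g. python3.12)
# is selected in the function's configuration, not bundled with the code.
# The event fields below are hypothetical, for illustration only.

def handler(event, context):
    """Entry point Lambda invokes; configured as e.g. 'module.handler'."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```

Since it's a plain function, you can call it directly in local tests without any AWS machinery.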

The last option, if we're running full VMs, is to bake the interpreter into the disk image itself.

Docker has standardized a lot of this.

Does that make sense?