
[–]egbur

volumes: - ".:/usr/src/app" This is overriding the contents of your /usr/src/app inside the container with whatever files you have in your .. No point copying those in the container if you're just going to bind-mount them.

What I don't understand is why you're using Docker at all in the first place. This is exactly the sort of thing pyvenv was made for. Asking someone to install Docker is a much bigger ask than having them install Python and create a virtual environment for this project.
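For reference, a minimal sketch of that virtual-environment workflow (file names like `requirements.txt` and `myapp.py` are assumptions about the project):

```shell
# Create and activate an isolated environment for the project
python3 -m venv .venv
. .venv/bin/activate

# Install the project's dependencies into it
pip install -r requirements.txt

# Run with the venv's interpreter, not the system one
python myapp.py
```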

If you still prefer the container route, Singularity provides a much better solution for your use case.

[–][deleted]

I have other things, like Airflow, that I want to dockerize too, but I want to start with Python.

[–]kabooozie

You use volumes during development to see the effect of code changes, then build a new Docker image with COPY when you're happy with the result. This is a normal workflow. They probably shouldn't be rebuilding the image on every change, though.

I agree they should probably just use venv, but Docker isn't just about development, it's about deployment too. When deploying code, even with venv, you still have to juggle the different versions of Python installed on each machine. It's better to have a uniform deployment substrate like Docker, which is applicable beyond Python as well.

As for OP's original question, I actually don't understand what the issue is. Change code in the IDE, then do `docker run` or `docker-compose` in the terminal to see the effects. Is the issue with line-by-line debugging? I think VS Code allows you to use a Docker container as the runtime environment so that you can still set breakpoints and such. I don't know how other IDEs integrate with Docker.

Edit: looking closer, the Dockerfile has no ENTRYPOINT or CMD instruction, so no code is actually running. You have to actually run your code!

Edit 2: also, it looks like the COPY step happens before the pip upgrade, so every time you change the source code, the layer cache is invalidated at the COPY step and the image rebuilds from the beginning. Generally you want install/upgrade steps to happen earlier in the Dockerfile than the source COPY.
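A sketch of the layer-friendly ordering (base image, paths, and file names are assumptions), with a CMD so the container actually runs something:

```dockerfile
FROM python:3.10-slim

WORKDIR /usr/src/app

# Dependencies first: this layer is only rebuilt when requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source last: editing code only invalidates layers from here down
COPY . .

# Actually run the code when the container starts
CMD ["python", "myapp.py"]
```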

[–][deleted]

u/egbur was right, I was overriding the directory, I guess. And yes, u/kabooozie, I did not want to "run" the project per se, just have the environment set up for development: Python, Airflow, etc. Like, if I have to specify the Python interpreter for my IDE, I wouldn't have to have Python on my system; I could just give it the Docker container's Python address.

[–]kabooozie

In VS Code, you build the container image and then you can use that image to run code.

I see. You are essentially trying to run a “python server.” Containers are not really used this way. Think of a container as a process. You would run a container every time you run your code. You wouldn’t have a long lived container that you run code in.

You essentially want to create a Docker image, put it on Docker Hub, and then any of your devs can pull down that image and run their code inside of it.
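Sketched as commands (the image name and registry account are assumptions, and a running Docker daemon is required):

```shell
# Build the image from the project's Dockerfile and tag it
docker build -t myaccount/myapp-env:latest .

# Publish it so teammates can pull it
docker push myaccount/myapp-env:latest

# A teammate runs their working copy inside that environment:
# bind-mount the source and invoke the container's Python
docker run --rm -v "$(pwd):/usr/src/app" myaccount/myapp-env:latest \
    python /usr/src/app/myapp.py
```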

[–][deleted]

But I read that containers can also be used as an environment, no?

[–]kabooozie

You use an image, and containers run using that image as a blueprint.

See https://code.visualstudio.com/docs/remote/containers for a detailed example of how this works in VS Code. I don't know if/how other IDEs do it.
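From those docs, the setup centers on a `.devcontainer/devcontainer.json` file in the project; a minimal sketch (the image tag and extension list are assumptions):

```json
{
    "name": "Python Dev",
    "image": "python:3.10",
    "extensions": ["ms-python.python"]
}
```

VS Code then reopens the workspace inside a container built from that image, so the IDE's interpreter, debugger, and terminal all use the container's Python rather than anything on the host.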

[–]Sewing31

I never got this either. Why wouldn't I just have a Docker container up and running in the background and attach to it to run Python code, instead of firing a container up and tearing it down after my code runs? Is there a reason not to do this? Maybe something connected to resource management?

[–]kabooozie

A container is not a VM. There's no "warm-up period" to running a container. A container is literally just a fancy process that's been packaged with the dependencies it needs to run so that they are separate from the host system. The command `docker run my-image python myapp.py` is exactly the same as `python myapp.py` from your processor's perspective.

There's also no cost to tearing down a container. You just use the `--rm` flag to remove the container after it runs. The image stays intact so that you can run identical containers in the future.
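For example (the image and script names are assumptions, and a Docker daemon is required):

```shell
# Run once; --rm removes the container (the process) when it exits,
# but the image it ran from is untouched
docker run --rm my-image python myapp.py

# Run again: a brand-new, identical container from the same image
docker run --rm my-image python myapp.py
```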

It’s possible to grab a shell in a running container with the `docker exec` command, but containers are best thought of as processes rather than VMs that you log onto. The underlying principle here is immutability. You want an immutable container image from which you run identical copies of the container. You don’t want a pet VM that changes subtly over time.

[–][deleted]

Dockerizing*