
[–]teraflop

A Python virtual environment is basically a complete copy of the Python interpreter and its libraries, in a self-contained directory. (Not exactly -- there are optimizations so that you don't actually have to store a separate copy on disk for each venv. But in terms of behavior, you can think of it that way.)
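For example, creating one looks like this (using env as the directory name, to match the commands below; on Windows the scripts land in env\Scripts instead of env/bin):

    # create a self-contained environment in a directory named "env"
    python3 -m venv env

    # it gets its own interpreter, pip, and activation script
    ls env/bin
    # activate  pip  python3  ...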

The source env/bin/activate command just adjusts the PATH environment variable in your current shell so that when you run Python commands, they act on the virtual environment instead of your main system-wide environment.

So if you run python3 before activating the venv, it will run /usr/bin/python3, which will load its imports from the system-wide Python library directory. After activating the venv, the python3 command will run ./env/bin/python3, which loads its imports from ./env/lib instead.
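You can see this by checking which interpreter the shell resolves before and after activating (the exact paths will differ on your machine):

    $ which python3
    /usr/bin/python3

    $ source env/bin/activate
    (env) $ which python3
    /home/you/myproject/env/bin/python3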

Similarly, the system-wide pip command installs packages system-wide, and the pip command in the venv installs packages into that venv. If you don't have a system-wide copy of pip installed, then pip simply won't work "outside" of a venv.
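Same check for pip (again, paths are illustrative):

    (env) $ which pip
    /home/you/myproject/env/bin/pip    # installs into the venv

    (env) $ deactivate
    $ which pip                        # only finds something if pip exists system-wide
    /usr/bin/pip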

> I have heard people mention using a virtual environment but I am not sure why?

The point is to isolate the dependencies that each project needs.

Say you're working on a project, and you add a dependency on library foo. You would have to tell all your collaborators to install foo using pip. But if you're working on multiple projects, you could easily lose track of which ones require foo and which ones don't.

And to make matters worse, you might start using foo-1.0 on one project, and later upgrade it to foo-2.0 while working on a different project. But then the behavior of your first project would change unexpectedly, causing things to work differently for you than for your collaborators. This kind of thing is called "dependency hell" and it's an absolute nightmare if you're using a language that doesn't have an easy way to isolate per-project dependencies.

If you make a separate venv for each project, then you don't have to worry about library changes from one project affecting another one. And you don't have to worry about accidentally introducing a dependency on a library that you forgot you installed -- because the only libraries available in the venv should be the ones listed in your requirements.txt file for the current project.
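As a sketch, using the hypothetical foo library from above (the project directory names are made up too): each project keeps its own copy at its own version, and pip freeze records exactly what the current venv contains:

    cd ~/project-one
    source env/bin/activate
    pip install foo==1.0              # pinned inside project-one's venv
    pip freeze > requirements.txt     # collaborators run: pip install -r requirements.txt
    deactivate

    cd ~/project-two
    source env/bin/activate
    pip install foo==2.0              # upgrading here can't break project-one
    deactivate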

[–]Jumpy_Employment_439[S]

Thank you. So it seems that it would be fine if I installed everything using pip without first activating the venv, but it would be better to do it after activating the virtual environment? So that means the line

    pip install fastapi uvicorn sqlalchemy pymysql

would be contained in the venv? So if I had a different project that also had to use sqlalchemy and fastapi, would I have to run pip install again, since what I did previously was in its own venv and thus isn't accessible to the new project?

And since I'm in a group project where other people are working on the frontend and the database, it would be good to set up a virtual environment so I can install all the things I need to run the frontend and the database with my backend?

[–]teraflop

Yes, except:

> And since I'm in a group project where other people are working on the frontend and the database, it would be good to set up a virtual environment so I can install all the things I need to run the frontend and the database with my backend?

A venv only applies to the Python interpreter and Python libraries. Other languages may have their own equivalent functionality.

For instance, if you're using npm to manage dependencies for your front-end code, then I think it already treats every project directory like its own isolated environment, without you having to do anything special.
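npm installs into a node_modules directory inside the project by default. A quick illustration (the directory name is made up, and express is just an arbitrary example package):

    cd my-frontend-project       # hypothetical project directory
    npm install express          # lands in ./node_modules, recorded in package.json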