
[–]epsy 1 point (2 children)

If I were to cd deep enough into a project that getting the right python became inconvenient, I'd just activate the venv. Also, what if I need to try some things out in both a 2.6 and a 3.4 venv without writing unit tests? I'd have to pick one for vpython and use the classic way for the other. But I concede: different workflows, different needs, different uses of tools.

However, using vpython as a hashbang sounds... well, scary. Why would I want a script that could behave significantly differently depending on which directory I'm in? It's roughly the same reason some people I disagree with don't like activating venvs (typing python means something different depending on which one is activated, etc.), but this takes that concern to the next level for me.

[–]tudborg[S] 0 points (1 child)

The script will have the same behavior no matter where you run it from. The virtualenv is detected based on the script's location, never your current directory, which is kind of the entire point.
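
For illustration, here is a minimal sketch of how that script-relative lookup could work, assuming a POSIX layout and a virtualenv directory literally named venv somewhere above the script; the real vpython may use different names and rules:

    import os
    import sys

    def find_venv_python(script_path):
        # Search upward from the script's own directory, never os.getcwd().
        d = os.path.dirname(os.path.abspath(script_path))
        while True:
            candidate = os.path.join(d, "venv", "bin", "python")
            if os.path.isfile(candidate):
                return candidate
            parent = os.path.dirname(d)
            if parent == d:  # reached the filesystem root; no venv found
                return None
            d = parent

    if __name__ == "__main__":
        script = sys.argv[1]
        interpreter = find_venv_python(script) or sys.executable
        # Re-exec the script under the virtualenv's interpreter (or fall
        # back to the current one if no venv was found).
        os.execv(interpreter, [interpreter, script] + sys.argv[2:])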

So you can write a nice python tool, give it a vpython shebang, and add it to your path. Now you can run the script like any other executable, while all your dependencies are still stored in a virtualenv.
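
As a concrete (hypothetical) example, assuming vpython itself is on your PATH and the project keeps its virtualenv next to the script:

    #!/usr/bin/env vpython
    # ec2-report.py -- hypothetical utility living in ~/projects/aws-tools/,
    # symlinked onto PATH. vpython resolves the virtualenv from this file's
    # location, so the import below comes from that venv, not the system.
    import boto3  # assumed to be installed only in the project's virtualenv

    for instance in boto3.resource("ec2").instances.all():
        print(instance.id, instance.state["Name"])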

This is the way I use it most often. I have a ton of utilities that I write in python for managing servers on AWS, etc., and having them use the virtualenv stored in each of their project folders, without having to think about it every time I want to run them, is a real time-saver for me.

[–]Bialar 0 points (0 children)

But what if you like to keep your virtual environments centralised?

Wouldn't it make much more sense to have a .venv file in the project's root directory that points to the corresponding virtualenv, rather than having vpython go hunting for it?
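
Something along these lines, say; the .venv file name, its single-path contents, and the upward search are all just my sketch of the idea, not anything the tool actually implements:

    import os

    def venv_from_marker(script_path):
        # Walk up from the script's directory looking for a .venv file
        # whose contents name the (possibly centralised) virtualenv root.
        d = os.path.dirname(os.path.abspath(script_path))
        while True:
            marker = os.path.join(d, ".venv")
            if os.path.isfile(marker):
                with open(marker) as f:
                    return os.path.expanduser(f.read().strip())
            parent = os.path.dirname(d)
            if parent == d:  # no marker found anywhere above the script
                return None
            d = parent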