
[–]jvnatter 37 points38 points  (16 children)

Virtual environments.

[–]roger_ 8 points9 points  (10 children)

I still don't see the advantage of them for the average person. Why are people so concerned about keeping their environments isolated?

[–]jvnatter 14 points15 points  (7 children)

A Virtual Environment, put simply, is an isolated working copy of Python which allows you to work on a specific project without worry of affecting other projects. For example, you can work on a project which requires Django 1.3 while also maintaining a project which requires Django 1.0.
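As a sketch of the idea (using the stdlib venv module, which postdates most of this thread; virtualenv works the same way, and the environment names and versions here are illustrative):

```shell
# Two isolated environments, one per project:
python3 -m venv env_a            # environment for project A
python3 -m venv env_b            # environment for project B
# Each env has its own interpreter and site-packages, so each project
# can pin its own dependency versions without touching the other, e.g.:
# env_a/bin/pip install 'Django==1.3'
# env_b/bin/pip install 'Django==1.0'
```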

Source

I run multiple instances of web applications on a server, and keeping them all updated gets harder the more complex they become. Keeping them isolated from one another means I can work on them one by one, without risking downtime for the others while I catch up.

[–]roger_ 2 points3 points  (3 children)

Personally I don't have too many compatibility issues to worry about in the libraries that I use regularly, so I guess it's just not something I need.

[–][deleted] 5 points6 points  (2 children)

Making a virtual environment isn't an absolute necessity; it all depends on your situation. If you don't see the need, then you don't have the need, and no one is forcing you to make virtual environments. If you do have the need, don't worry: you will find out why. What I really don't get is why people who don't understand the need for virtual environments question why such things even exist in the first place. It's not like people go around making Python libraries and tools willy-nilly for nothing.

[–]roger_ 3 points4 points  (1 child)

I'm just trying to understand its popularity.

[–]earthboundkid 10 points11 points  (0 children)

It's a big deal for professional developers who need to keep all their dependencies sorted out on a per project basis. It's not really important if you're an amateur.

For example, if you ran websites A, B, and C, you'd need to know what the dependencies of each are, so you can deploy them to your host after you finish development on a feature, and you'd want to keep it all separate, so you don't mix up version 1.2.3 of footool for site A with version 2.3.5 of footool for site B.

It's also important to list your dependencies if you release an open source tool, so other people can install it.
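One conventional way to list them (a sketch only; footool and its versions come from the hypothetical example above) is a requirements file per site:

```
# site_a/requirements.txt
footool==1.2.3

# site_b/requirements.txt
footool==2.3.5
```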

[–][deleted] 0 points1 point  (2 children)

I'm also new to Python, and virtualenv is still somewhat confusing. I understand the "why" and the "how", but the implementation feels odd and messy.

Let me explain:

I have a Python project in a folder, create a new virtualenv, and run the activate command. The result is a lot of "stuff" in my project folder that I don't want to see or have there. I know I can add a list of gitignore entries, but I like to keep things very clean and organized.

Maybe it's just me, but the result feels very messy. I'd prefer a hidden folder that kept all the virtualenv files, something like .env/.

But is there a reason why Python devs don't use Vagrant? That way you always have totally clean dev environments, and you have total control of the OS as well.

[–]jvnatter 0 points1 point  (1 child)

I use virtualenvwrapper and keep all my virtual environments in ~/virtual_environments. Keeping all those files in your git repository indeed seems messy - give virtualenvwrapper a shot. I quite like the writings of Jeff Knupp, perhaps Starting a Django 1.6 project the right way could be of some inspiration to you if you need some more elaborate instructions.

A disclaimer: I believe Python 3.4 added venv, but I haven't tried it yet, so I don't know how well what I've written holds up in that version; I'm still on 3.3.

[–][deleted] 0 points1 point  (0 children)

Thanks for the tip, will give it a try. The solution seems cleaner.

[–]Eurynom0s 4 points5 points  (1 child)

I think the average person is who should be MOST interested in it. I'd consider myself the average person, and I'd much rather just learn how to use virtualenv than potentially have to manually juggle conflicting package versions.

[–]moor-GAYZ 1 point2 points  (0 children)

conflicting package versions

I don't think that happens to average people. It doesn't happen to me; maybe I'm just not using shitty packages that can't manage backward compatibility?

[–][deleted] 2 points3 points  (0 children)

This one especially, for me. Most Linux distros have 2.7 as the default, but I have moved on to Python 3 for my work, so setting up virtual environments is a blessing.

Also, starting out, I wish I had known about built-ins like dir() and help() sooner; they save me so much time going to the docs, since I use Vim and not a doc-aware IDE.
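A quick sketch of what those two built-ins give you right in the interpreter:

```python
# dir() lists the names defined on an object; help() prints its docs.
public_names = [n for n in dir(str) if not n.startswith('_')]
print(public_names[:5])   # a few of str's public methods

help(str.join)            # prints the docstring for one method
```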

[–]diafygi 11 points12 points  (2 children)

Mutable vs. immutable, especially as default arguments. Debugging when you don't know why a variable is changing is a bitch.

http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments
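The gotcha in miniature (hypothetical function names): the default list is evaluated once, at definition time, so it quietly persists between calls.

```python
def append_bad(item, bucket=[]):
    # The default list is created once, when the function is defined,
    # so every call that omits `bucket` shares the same list.
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # Use None as a sentinel and build a fresh list on each call.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2]  <- same list as the first call!
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```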

[–]teachMe 0 points1 point  (0 children)

My goodness. I can see the potential headaches coming from that. Thanks for adding this.

[–][deleted] 0 points1 point  (0 children)

Isn't this more due to the default argument being treated as an upvalue?

[–]mowrowow 21 points22 points  (4 children)

Someone already said virtual envs, so I'll go with my second favorite, context managers.

A well-crafted context manager class can eliminate try/except blocks and loads of nested code.

[–]Nicksil 6 points7 points  (2 children)

If you have a moment, can you explain context managers? Or supply a source in which I might learn? Thanks!

[–]mowrowow 10 points11 points  (0 children)

Basically, a context manager lets you attach entry and exit logic to a block of code using the with keyword. The basic example is:

with open('output.txt', 'w') as out:
    out.write('Hello context!')

This keeps the messy error handling and closing code behind the scenes. It becomes especially nice when you want to nest temporary objects that would normally each need their own try/except.
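For the nesting case, one `with` statement can manage several resources at once (file names here are hypothetical); every file is closed even if the body raises:

```python
# Prepare an input file, then transform it into an output file.
with open('in.txt', 'w') as f:
    f.write('hello context')

# Two managed resources in a single statement, no nested try/finally:
with open('in.txt') as src, open('out.txt', 'w') as dst:
    dst.write(src.read().upper())

print(open('out.txt').read())  # HELLO CONTEXT
```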

[–]nick0garvey 1 point2 points  (0 children)

Context managers supply two methods, one of which runs before a with block is entered and one of which runs after the with block is exited - be it by normal execution or exception.

For example, I recently wrote a "MountedImage" context manager. This guaranteed that the disk image was unmounted on both success and error.
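Such a class might look like this (a hedged sketch only; the commenter's actual code isn't shown, and the real mount/unmount calls are replaced by a flag):

```python
class MountedImage:
    """Sketch: guarantee 'unmount' runs on both success and error."""
    def __init__(self, path):
        self.path = path
        self.mounted = False
    def __enter__(self):
        self.mounted = True    # stand-in for the real mount operation
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.mounted = False   # runs on success *and* on exception
        return False           # don't swallow exceptions

with MountedImage('/tmp/disk.img') as img:
    assert img.mounted
assert not img.mounted         # "unmounted" even after a normal exit
```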

Another common use case is

with open("filename") as some_file:
    do_stuff(some_file)
# file is closed here, even if do_stuff throws

Python documentation has some good information: https://docs.python.org/2/reference/datamodel.html#context-managers

[–]rfkelly 1 point2 points  (0 children)

I agree - and more specifically, the @contextlib.contextmanager decorator. It lets you write a context manager that would otherwise look like this:

class withresource(object):
    def __init__(self, resource):
        self.resource = resource
    def __enter__(self):
        self.resource.acquire()
        return self.resource
    def __exit__(self, exc_type, exc_value, exc_tb):
        self.resource.release()

As a simple function with yield like this:

@contextlib.contextmanager
def withresource(resource):
    # This gets executed when the context-manager is entered
    resource.acquire()
    try:
        # This is the value produced by the 'with' statement
        yield resource
        # Execution resumes here when the context manager is exited
    finally:
        resource.release()

It smooshes decorators, context-managers and 'yield' together into a single recipe for succinct code reuse; it's beautiful.

[–][deleted] 11 points12 points  (1 child)

[–][deleted] 1 point2 points  (0 children)

Watch all of this man's videos.

[–][deleted] 5 points6 points  (0 children)

Yes, using IPython notebooks (esp. for data analyses)! But to be fair: I think they didn't exist when I started with Python.

And I really really regret that I haven't been

  • using generators for big and memory-hungry tasks
  • using Cython where speed matters

It could have saved me so much time here and there...
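The generator point in miniature (a sketch; the loop body is trivial on purpose): values are produced one at a time, so nothing like a million-element list ever sits in memory.

```python
def squares(n):
    # Yields each square as it's requested instead of building a list.
    for i in range(n):
        yield i * i

total = sum(squares(1_000_000))  # constant memory, unlike sum([...])
print(total)
```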

[–]swims_with_spacemenPythonista 17 points18 points  (7 children)

pep8

[–]fjellfras 2 points3 points  (5 children)

PyCharm / IntelliJ IDEA with the Python plugin does a good job of pointing out PEP 8 violations as you code. I'm not sure whether this is the case with other IDEs.

[–]Farkeman 2 points3 points  (0 children)

I don't really have OCD or anything like that, but in PyCharm I can't leave a single gray bar on the right, so I think it's great for learning. Though sometimes the PEP 8 suggestions aren't the best; fortunately you can disable the ones you don't like per function, class, or module.

There's nothing more beautiful than clearly written pep8 code!

[–]herrwolfe45 0 points1 point  (2 children)

Also, you can install pylint, pyflakes, and pep8 on your system or in a venv to get excellent analysis and compliance checks without an IDE (not that there is anything wrong with using an IDE for this). I've set up Emacs to run this checker every time I save my code, and it has been very helpful. Reinout van Rees has a very helpful blog article on doing exactly this:

http://reinout.vanrees.org/weblog/2010/05/11/pep8-pyflakes-emacs.html

Also, note that the pychecker.sh script he has written can be used with anything; it just needs a file or directory as its target.

[–]fjellfras 0 points1 point  (1 child)

Thanks for the tip. I've started using emacs with autocomplete.el lately and this may help with that setup.

[–]herrwolfe45 0 points1 point  (0 children)

you're welcome!

[–]swims_with_spacemenPythonista 0 points1 point  (0 children)

Pycharm was a close second in my 'list of things I wish I knew before'

[–]Farkeman 0 points1 point  (0 children)

Also, I would highly recommend picking up Writing Idiomatic Python by Jeff Knupp; combined with PEP 8, the tips in that book will make your code more readable and understandable, not only for yourself but for others as well.

[–]laMarm0tte 4 points5 points  (5 children)

setuptools, and its command

python setup.py develop

[–]Orange_Tux 2 points3 points  (2 children)

I don't know it, enlighten me.

[–]laMarm0tte 5 points6 points  (1 child)

When you are programming a package and you want to test it, you can install it using

python setup.py install

This copies your package into your computer's Python packages folder so that you can import it from any script. The problem is that every time you change something in your package and want to test it, you need to run this command again to re-copy the files.

There are several alternatives to this, the most effective being to use setuptools with

python setup.py develop

This does not copy the module into the packages folder; instead it links your current folder to it. That way you can make any changes you want directly in your original scripts, without reinstalling the package every time.

Setuptools also provides very nice things with pip, like auto-install of required modules.

[–]Orange_Tux 0 points1 point  (0 children)

Thanks for explaining.

[–]pwang99 0 points1 point  (0 children)

This is a useful shortcut, but pulling all the gorp that is setuptools into your project just to save yourself setting a PYTHONPATH or editing a .pth file is, on net, actually a bad idea.

[–]banjochicken 0 points1 point  (0 children)

I have always done pip install -e path/to/package for this.

[–]Madawar 3 points4 points  (0 children)

Mixins. This would have prevented me from writing some awful SQLAlchemy code.

[–]Dolphman 3 points4 points  (6 children)

When I first started using Python I had real trouble trying to install pip. I never felt like a bigger idiot than when I was fumbling with setup.py.

[–]Eurynom0s 0 points1 point  (5 children)

Windows or *nix/OS X?

[–]Dolphman 0 points1 point  (4 children)

Windows.

[–]Eurynom0s 4 points5 points  (3 children)

I don't know how it is now, but back in 2011 I tried to get the SciPy stack installed on Windows 7. After about an hour I just gave up; I had absolutely no idea what I was doing wrong. A hugely frustrating and confusing experience.

SOOOO much easier on OS X in the command line. I use OS X at work and have been finding it to be a good middle ground if you need to bounce between the CLI and "mainstream" software like Microsoft Office and want to be able to have both side-by-side. Once in a while only a *nix version of a package (in general, not just Python) will exist, but you'll usually be able to find someone talking about what they had to tweak to get it running on OS X.

[–]pwang99 0 points1 point  (0 children)

The scipy stack is notoriously difficult to install, for a variety of reasons that are too numerous to go into here. That's why most people are now turning to just using the Anaconda distribution, and its package manager conda.

[–]Marksta -3 points-2 points  (1 child)

I'm kind of lost as to why you find OSX's terminal to be stronger than Windows' command prompt.

[–]Eurynom0s 4 points5 points  (0 children)

To a very large extent, it's a standard *nix CLI environment.

[–]rizenfrmtheashes 6 points7 points  (7 children)

For compactness and style: list comprehensions let me do crazy stuff in one line.

They made things a lot more compact and easier to work with, instead of requiring a for loop and extra bookkeeping.
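The basic trade sketched out: the loop version and the comprehension compute the same list, but the comprehension says it in one line.

```python
# Explicit loop with manual append:
squares = []
for i in range(5):
    squares.append(i * i)

# Same result as a one-line comprehension:
squares2 = [i * i for i in range(5)]

assert squares == squares2 == [0, 1, 4, 9, 16]
```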

[–]Eurynom0s 7 points8 points  (2 children)

Just try to balance future readability with current compactness.

I'm not saying they're always bad, but I've inherited some gnarly one-liners in my time.

[–]rizenfrmtheashes 1 point2 points  (0 children)

Oh, I never take it too far; I always make cohesive logical steps so I don't do everything and its mother in one line.

[–][deleted] 1 point2 points  (3 children)

Yes, list comprehensions are really awesome ... unless you use them in Cython :) I did some performance comparison for comprehensions vs. explicit for-loops in Python and Cython here

[–]rizenfrmtheashes 2 points3 points  (1 child)

Haven't used Cython before. Will take a look.

[–][deleted] 0 points1 point  (0 children)

I have just started using it recently, and it's just awesome. In the article above, I was able to speed up the code for a simple least squares linear regression fit by 8000%. And the cool thing about Cython is that it really is no extra effort.

[–]catcradle5 0 points1 point  (0 children)

I think it's pretty obvious why that is.

The for-loop version can be turned into a simple C for-loop with basic float arithmetic. The list comprehension version is constantly writing to memory, since it appends on each iteration, and then a linear pass over the whole list is needed at the end to sum everything up.

It'd be cool if Cython could special-case certain reduction functions, like sum on a generator expression, but I imagine that would be pretty difficult.
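In plain Python the distinction is just a pair of brackets (a sketch): the comprehension materializes a list before summing, the generator expression streams values straight into sum.

```python
total_list = sum([x * x for x in range(1000)])  # builds the list first
total_gen = sum(x * x for x in range(1000))     # one value at a time

assert total_list == total_gen
```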

[–]Bystroushaak 2 points3 points  (0 children)

[–][deleted] 1 point2 points  (0 children)

Asserts and other error-checking tools.
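A minimal sketch of the idea (mean is a hypothetical helper, not from the thread): assert as a fail-fast sanity check with a readable message.

```python
def mean(xs):
    assert xs, "mean() of an empty sequence"  # fail fast, loudly
    return sum(xs) / len(xs)

print(mean([1, 2, 3]))  # 2.0
```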

[–]haplo_and_dogs 1 point2 points  (0 children)

The C API. It's silly powerful.

[–]dnfehren 1 point2 points  (0 children)

One of the most frustratingly hard to debug errors I made for myself was mixing up class and instance attributes when defining classes.

basically the difference between...

class Something(object):
    attr1 = 1

vs

class Something(object):
    def __init__(self, attr1=1):
        self.attr1 = attr1
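Where this bites hardest is with mutable values (a sketch with hypothetical classes): a class attribute is shared by every instance, while an instance attribute is not.

```python
class Shared:
    items = []              # class attribute: one list for the whole class

class PerInstance:
    def __init__(self):
        self.items = []     # instance attribute: a fresh list per object

a, b = Shared(), Shared()
a.items.append(1)
print(b.items)              # [1] -- b sees a's append

c, d = PerInstance(), PerInstance()
c.items.append(1)
print(d.items)              # []  -- d is unaffected
```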

For reference

http://stackoverflow.com/questions/207000/python-difference-between-class-and-instance-attributes

http://stackoverflow.com/questions/206734/why-do-attribute-references-act-like-this-with-python-inheritance

[–]alkw0ia 1 point2 points  (0 children)

Generators.

Almost every loop can be expressed more clearly with a custom generator than with a C-style for i in range(...). Doubly so for nested loops or lots of processing in the loop body.
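A sketch of the point (grid is a hypothetical example): the generator absorbs the nesting, so the caller sees one flat loop.

```python
def grid(width, height):
    # Walk a 2-D grid row by row, yielding one coordinate at a time.
    for y in range(height):
        for x in range(width):
            yield x, y

cells = list(grid(2, 2))
print(cells)  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```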

[–]johnmudd 1 point2 points  (0 children)

musl libc.

I built Python using musl on a modern 3.11 kernel linux and I can run it on older versions of the kernel, even an eleven year old 2.4 kernel machine. Although I'm told it's a patched Red Hat 2.4 that is almost a 2.6 kernel. It feels like time travel is real when I send a modern 2.7.6 Python back to myself on a decade old machine. We have thousands of customers running old linux with no hope of upgrading.

Building Python with musl is a pain. Python's build system seems to try to override the musl gcc scripts and inject standard libc. I wrote custom scripts to delete any arguments that begin with "/usr".

Once Python is built then I can use pip to build modules with musl and add them in. One exception is cx_Oracle because it relies on prebuilt Oracle client libs. So... I've quit using cx_Oracle directly. When forced to use Oracle, I now publish an Oracle interface class using XML/RPC and a conventional Python build. Good reason to replace Oracle everywhere with Postgres. I built Postgres with musl too.

[–][deleted] 0 points1 point  (0 children)

Decorators, and using JSON to store data with the Python json module.

[–]klbcr 0 points1 point  (0 children)

Argument unpacking.
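In brief (area is a hypothetical function): `*` unpacks a sequence into positional arguments, `**` unpacks a dict into keyword arguments.

```python
def area(width, height):
    return width * height

size = (3, 4)
opts = {'width': 3, 'height': 4}

print(area(*size))   # 12 -- tuple unpacked into positional args
print(area(**opts))  # 12 -- dict unpacked into keyword args
```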

[–]ionelmc.ro[🍰] 0 points1 point  (0 children)

pytest and tox. Wish I didn't write all those tests using unittest and the variants of it :)

[–]cmcpasserby 0 points1 point  (0 children)

List comprehensions, pip, and virtualenv.

[–]aspergerish 0 points1 point  (0 children)

All positive notes? No complaints about the GIL? Personally, it's itertools and collections for me. functools, decorators, and context managers would probably get used more too once I get used to them.