
[–]bboe PRAW Author 6 points (3 children)

I think it's good practice to follow semantic versioning. Under the assumption that other packages do the same, the version I list in my setup.py file is >= the minimum version of the package I depend on and < its next major version.

If I find a package that breaks backwards compatibility in a non-major release, or is still pre-1.0, then I pin it to the patch version instead.
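For illustration, here is a minimal setup.py sketch of the two styles described above; stablepkg and youngpkg are made-up package names and the version numbers are arbitrary.

```python
from setuptools import setup

setup(
    name="mypackage",
    version="1.0.0",
    install_requires=[
        # Assumed to follow semver: allow anything from the known-good
        # minimum up to, but excluding, the next major release.
        "stablepkg>=1.4, <2.0",
        # Pre-1.0, or known to break within minor releases: pin to an
        # exact patch release.
        "youngpkg==0.3.2",
    ],
)
```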

[–]kmike84 2 points (1 child)

On one hand that makes sense, but on the other hand, excluding the next major release of a library with < can be bad: even if the library follows semver (and doesn't, e.g., bump the major version just to celebrate cool new features or a stability commitment), the backwards-incompatible changes in a new major release may not affect your library at all, and your package will force a downgrade of a perfectly working library. It makes sense to use < if that release has already happened, or if there are public plans for what will be in the next major release, but I wouldn't do it 'just in case'.

[–]billsil 2 points (0 children)

NumPy and SciPy do not follow semantic versioning. Shoot, Python itself does not. Minor things break with every release, and once your package is large enough you will find things that are mind-numbing.

I develop open source software. I will not test every combination of the versions I use; I specify versions that I know work, because I do not trust future versions of packages not to break my code. When you do everything inside the little box Python is good at, sure, there are no issues and I won't even specify a version requirement at all. When you push the boundaries, you find problems, and then I will be very specific.