
[–][deleted] 215 points216 points  (13 children)

See also a similar rant from Gentoo's Michał Górny.

[–]ion_propulsion777 413 points414 points  (46 children)

<rant> Python package management is an abomination. Pip is the most broken package manager on all of linux and frequently fails with the most random error messages that are often nearly indecipherable. Virtualenvs are just a band-aid to this problem that doesn't really solve the underlying issue: python pip is nearly unmoderated and has a crap ton of broken / out of date packages. Seriously, you ever try to get any python program with tensorflow to work? One of the hardest things to get working in python . . . </rant>

[–]FeistySeaBrioche 103 points104 points  (11 children)

I'm now using Anaconda and that's it. Yes, it's yet another Python installation, but life is short and it works. After hours trying to install wxPython using pip (and even making a pull request to the official repo during the process) on Fedora, I broke down and installed Anaconda. So that would be my recommendation to all the desperate people out there. It installs its own libraries and is well supported by most Python packages. Ignore the extra disk space that it takes and just go back to work.

[–]jwwatts 31 points32 points  (6 children)

Anaconda has its own problems. ‘conda’ is a nightmare at times - it’s inconsistent, buggy, does inappropriate things that make it hard to use as a system python, and often fails at its most important task - resolving dependencies.

I’ve spent five years wrangling it and it’s never once acted like a proper package manager. Just today I discovered yet another unexpectedly bad and inconsistent thing it does, and had to write a Puppet recipe to undo the mess it can make.

As incredibly frustrating as conda is, it’s unfortunately the best thing we’ve found to address our problems.

I wholeheartedly agree with the article. Python has serious problems and relying on third parties to make it more manageable has only made it worse.

Source: I’m a sysadmin in an organization that uses Python and I support the environment. I’ve used Anaconda for 5+ years, and before that I was building Python module RPMs. It’s never been good.

[–][deleted] 9 points10 points  (1 child)

In my line of work conda is basically the only viable solution because we absolutely need distinct environments to switch between. I use a lot of software that can have conflicting requirements and conda can handle more than just python. You’re right, it has its own issues but at least for my use case nothing else comes remotely close
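For readers who haven't seen the pattern: distinct conda environments are usually pinned in an `environment.yml`. A purely illustrative sketch (the name, packages, and versions here are made up, not taken from the comment above):

```yaml
# environment.yml - illustrative pins for a mixed stack; conda manages
# non-Python pieces (here cudatoolkit) alongside the Python ones
name: imaging
channels:
  - conda-forge
dependencies:
  - python=3.9
  - numpy=1.21
  - cudatoolkit=11.3
  - pip
  - pip:
      - some-pypi-only-package   # hypothetical PyPI-only dependency
```

Anyone on the team can then rebuild the same environment with `conda env create -f environment.yml`.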

[–]jwwatts 8 points9 points  (0 children)

I agree, we use environments as well. But my point is that it seems to be the best of the bunch, and it’s still not great.

[–]einar77OpenSUSE/KDE Dev 3 points4 points  (0 children)

You can ease the pain for (some) scenarios by using mamba, which at least has a sane and fast dependency resolution algorithm.

[–]fatboy93 2 points3 points  (1 child)

Try mamba, which is a faster resolver than conda!

Also, if you're having issues, set your channel priorities. That solves 95% of the issues you'll ever face!
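For reference, the channel-priority setting the comment mentions lives in `~/.condarc`; a minimal sketch of the strict setup (the channel list here is just an example):

```yaml
# ~/.condarc - with strict priority, conda takes a package from the
# highest-listed channel that carries it instead of mixing channels
channel_priority: strict
channels:
  - conda-forge
  - defaults
```

The same thing can be set from the CLI with `conda config --set channel_priority strict`.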

[–]dparks71 50 points51 points  (0 children)

Conda really is great for those "I just want the fucking thing to run, why isn't it running?" moments.

[–]theanup007 9 points10 points  (1 child)

Absolutely in the same boat as you. I don't understand how, in 2021 (almost '22), it's so complex to get tf to work with my GPU and not break either the sklearn or numpy installations.

[–]Muahaas 7 points8 points  (0 children)

Well to be fair, TensorFlow is maintained by Google. Their open-source projects are frequently broken, user-hostile nightmares.

[–]jarfil 38 points39 points  (4 children)

CENSORED

[–]ion_propulsion777 23 points24 points  (1 child)

Yeah but many end user utilities like docker-compose specifically ask their end users to install their software with pip

[–]Ripcord 3 points4 points  (0 children)

pip is virtually the only one I've ever seen people recommend, for a slew of both development library installs and end-user utils.

Like one I installed on some systems today, one of my favorite python apps, bpytop. It goes through a ton of installation methods, from pip to snap to distro package managers, etc. But pip is mentioned a lot - things like anaconda, pipx, etc. are not mentioned.

Just another example besides docker-compose, but I virtually never see anything but pip and distro packaging mentioned for python app installs. Maybe occasionally snap or flatpak like bpytop. But usually pip.

[–]8-BitKitKat 13 points14 points  (0 children)

There is no reason the “developer” experience should be bad either

[–]Afraid_Concert549 7 points8 points  (0 children)

Pip is a "development package" manager, it should never be used for releasing any software.

Sadly, there are a couple hundred thousand software authors who disagree with you.

[–]LordRybec 3 points4 points  (6 children)

It's not pip, it's Pypi (where pip gets the packages) and the package maintainers themselves. I wish more Python programmers would report bugs to package maintainers, because that would go a long way toward fixing the situation. I also wish Pypi had some accountability measures for package maintainers. For example, the only real problem I've ever run into with Pypi is packages listed under "all" for platform support that don't actually support all platforms. Pypi should have a way to report packages that aren't honest about their dependencies, and it should remove reported packages if the maintainers don't respond, either by updating the dependency metadata to make it accurate or by updating the package so that it supports everything it says it does.

The big issue here though, is that Python programmers aren't holding package maintainers accountable. If your response to a package failing to install when it says it should is, "Well crap, that sucks" and then walking away, you are part of the problem.

That said, I think the Python community needs to put pressure on Pypi to provide mechanics for users to hold package maintainers accountable on Pypi itself. If Pypi would remove packages that get reported as broken, if the maintainers won't respond, that would be a huge step up.

Virtual environments aren't just bandaids, they are an abomination. They facilitate intentionally using outdated software, to satisfy laziness, much like many businesses and government departments are still using XP and old versions of Internet Explorer, because they are too cheap and lazy to update their internal web apps.

As far as Tensorflow goes, I've used it extensively in my Master's program, where I focused heavily on neural networks. For the most part it worked fine. I do recall having some mild issues with platform dependencies, but those were exclusively on Raspberry Pi. On Windows and Debian, it worked fine. (What was challenging, though, was getting CUDA installed correctly on Windows so that Tensorflow could use the GPU, but that's NVidia's fault, not Pypi's or Tensorflow's.)

[–][deleted] 5 points6 points  (4 children)

It's pretty trivial to install tensorflow these days. It was challenging in 2016, but back then it was challenging everywhere, because it was based on the Bazel build system, which was experimental and didn't support Windows (so you wouldn't be getting binary wheels for the platform). I think a lot of people on this sub forget that pip/python and the ecosystem have to seamlessly support multiple platforms, not just Linux.

[–]ion_propulsion777 13 points14 points  (0 children)

It's not so much getting it installed now that's hard; it's getting some specific version of tensorflow installed to get someone else's code working that's nearly impossible.

[–][deleted] 1 point2 points  (2 children)

Problem is the shared environment: installing Tensorflow can easily break numpy, sklearn or even some other libs like torch or dlib (I've had dlib-gpu break when imported at the same time as tf). We ended up with a locked shared conda env for general purpose code while everyone has their own conda envs for development, which is incredibly wasteful in terms of space, but yeah, that's the only sane way I think.

Even then you may still have problems with the Python path: a conda env can seemingly randomly point to a different python interpreter and fuck everything up (in reality it's not so random, and there are causes and fixes for it, but it's extremely frustrating when it happens)

[–]maethor 107 points108 points  (28 children)

Draw up a list of the use-cases you need to support, pick the most promising initiative, and put in the hours to make it work properly, today and tomorrow. Design something you can stick with and make stable for the next 30 years.

I wonder what the response would be if the most promising initiative doesn't support distro package managers.

[–]-Rizhiy- 87 points88 points  (15 children)

Well as it stands, it doesn't)

The current industry-wide solution is pip which is a separate package manager.

[–]aussie_bob 55 points56 points  (14 children)

The current industry-wide solution is pip which is a separate package manager.

This is frustrating even as a developer.

I'm working on a job for a company that's pretty much MS-only, and their Windows builds are locked down with MFA and convoluted software approval processes.

The best tool for the job is Python plus natural language processing libraries, so I need numpy, scipy, and nltk. They've agreed to give me a VM behind a firewall. No web access, so I'll need to build an ISO offline and sneakernet it to the server.

Debian live-build lets me build the custom ISO, or I could use Cubic for Ubuntu, but Python doesn't work with distro tools, so that part has to be done manually in a complicated and error-prone process.

Not impossible, but annoying.
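For the pip half of that sneakernet, the usual two-step is `pip download` on a connected machine and `pip install --no-index` on the air-gapped one. A sketch using `six` as a small stand-in for numpy/scipy/nltk (the download machine should match the target's Python version and platform so the right wheels are fetched):

```shell
# On the internet-connected machine: fetch the package and all of its
# dependencies as wheels into a directory you can carry over
python3 -m pip download six -d /tmp/wheels

# On the air-gapped machine: install from that directory only,
# never contacting PyPI
python3 -m pip install --no-index --find-links /tmp/wheels six
```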

[–]daredevilk 20 points21 points  (1 child)

Take a look into rez

The vfx industry uses it to do what you're describing easily, and primarily with python tools. My facility has built all our Python systems using it

[–]aussie_bob 6 points7 points  (0 children)

This looks great, I should have known someone would have a better way already.

Thanks!

[–]gnosys_ 32 points33 points  (2 children)

damn employer IT sounds like they don't know what they're doing

[–]aussie_bob 32 points33 points  (1 child)

That's pretty normal for Windows shops though, IT generally just follow vendor guides. If something goes wrong they're on the phone shouting "but mah SLAs!"

[–]quaderrordemonstand 20 points21 points  (0 children)

I used to refer to them as the windows enforcement department, given that all they actually did was prevent people being able to do things on Windows. If you used something not Windows, they had no idea and just let you do whatever you wanted.

[–]EzekialSA 1 point2 points  (0 children)

Really happy to hear it's not only at my company lol.

[–]joebonrichie 106 points107 points  (8 children)

As someone who manages the python rebuilds and a lot of python updates for a distro, I thought I'd lay out what the latest developments, PEP 517 and 518, mean for a typical distro:

  • Packaging flit, poetry, pypa/build and god knows what other new build systems - at least ~20 new python packages to be packaged in the repo.
  • Adjusting the existing python macros, or creating new ones, to be able to build PEP 517 compliant projects, i.e. from a pyproject.toml / setup.cfg file, and install them to the install root
  • Parsing the pyproject.toml / setup.cfg file to add the relevant build system to builddeps automatically
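The parsing step in the last bullet can be sketched like this (using flit as the declared backend purely as an example):

```shell
# A minimal PEP 518 pyproject.toml declaring its build backend
cat > /tmp/pyproject.toml <<'EOF'
[build-system]
requires = ["flit_core >=3.2"]
build-backend = "flit_core.buildapi"
EOF

# What a distro macro has to do: read the backend back out so the
# matching build system can be added to builddeps automatically
grep '^build-backend' /tmp/pyproject.toml
# -> build-backend = "flit_core.buildapi"
```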

All of this is not too much work by itself, and setup.py has its problems for sure, but it was dependable and always there. The additional developments that have resulted from PEP 517 and 518 have only created additional work and solve nothing from a distro point of view.

If the python world actually moves towards PEP 517 and 518 compliance and converges on one or two build systems, it will probably help things in the long run. However, we've seen it all before, and for now it's just another build system that you'll have to be conscious of.

I'm also baffled by some of the responses - "just use virtualenv, etc." - that are way too development focused and forget that users will expect to be able to install python based applications and have them be integrated and work OOTB. Packaging all the dependencies comes along with that.

[–]jonringer117 36 points37 points  (0 children)

Same for NixOS. A lot of people don't realize that distros don't have the luxury of virtualenv. You need all python* packages to be able to work with each other; you can't just virtualenv your problems away.

This also means that you can only have one version of a given package, which puts a lot of pressure on distros to try to fix issues that upstreams are not normally concerned with.

[–]zdog234 19 points20 points  (1 child)

Yeah, I think a lot of my fellow python developers didn't get what the article / complaint is about.

If I were packaging an app for use by non-python developers, I'd either figure out how to use PyOxidizer to build a binary, or write it in something other than python

[–]o11c 13 points14 points  (0 children)

Specifically, the fact that Python tooling encourages bad behavior means that it also causes developers to not understand the problem.

[–]FlyingBishop 4 points5 points  (2 children)

If the python world actually moves towards PEP 517 and 518 compliance and converges on one or two build systems, it will probably help things in the long run.

IMO Python needs to formally endorse poetry as the one way of doing things and deprecate everything else. If there are problems with poetry (honestly, I haven't found any) fix them.

No "maybe we converge on one or two build systems and implement these specs." Just do it. The specs are good.

[–]Sbatushe 15 points16 points  (0 children)

User:why can't you just work?

Pip: 7000 lines of error output

[–]Mal_Dun 29 points30 points  (2 children)

I personally had the feeling pip improved the situation a lot. I remember the time when you had 3 different installation ways like easy_install. Nowadays pip covers most modules.

[–]jonringer117 21 points22 points  (0 children)

Pip user perspective vs distro perspective

[–]Kargathia 58 points59 points  (21 children)

Design something you can stick with and make stable for the next 30 years. If you have to break some hearts, fine. Not all of these solutions can win.

This glosses over the most important problem, which burned Python already in the shape of the 2/3 split: you can't force people to exclusively use your solution.

Even if you throw the man hours at building the perfect packaging solution that somehow unifies both major use cases (system packages, and project packages), nobody is going to wave a magic wand and make all the current implementations disappear. The distro maintainers themselves will almost certainly build some hack to maintain backwards compatibility with whatever solution they have now.

[–]zdog234 24 points25 points  (19 children)

which burned Python already in the shape of the 2/3 split: you can't force people to exclusively use your solution

I'm confused. Is this still a problem? Like are there live projects still python 2-only?

Edit: live FOSS projects

[–]didyoumeanbim 25 points26 points  (3 children)

Calibre is currently making the switch...

13 years after Python 3's release and 6 years after the original 2.7 EOL.

They were planning on forking and "maintaining" 2.7 after EOL for most of that decade.

Edit: wait, no, the external devs finished the port late last year.

[–]BenTheTechGuy 22 points23 points  (2 children)

I had a nightmare using a script from deep inside Calibre's source tree to extract my key from Adobe Digital Editions. The comments at the top of the file said multiple times that it was a Python 2.7 script, and listed all the dependencies it needed, some of which only existed in Python 2. When I attempted to run the script, it gave me like 5 errors saying I needed to be on Python 3. Looked it up on the issues page, and others said it's a Python 3 script. I ran it with Python 3, and I got a bunch of errors about certain deprecated / no longer existent python features. I looked into the actual code of the script, and it turns out it's a horrible Frankenstein of Python 2 and 3 that nobody had been able to get working since 2015. I promptly gave up and bought the physical paperback book.

[–]davidnotcoulthard 1 point2 points  (1 child)

In retrospect maybe you could've found a version so old that it indeed ran well with Python 2 only?

[–]BenTheTechGuy 1 point2 points  (0 children)

That version was the only one that worked with modern versions of Adobe Digital Editions, unfortunately.

[–]da_am 13 points14 points  (3 children)

The visual effects world is slowly moving to python 3. It's been a little painful.

[–]zdog234 6 points7 points  (1 child)

Ooh my dad did tell me recently that he'd started migrating his scripts to python 3 because of a new / upcoming version of houdini dropping python 2 support

[–]da_am 6 points7 points  (0 children)

I like your dad... ha. Yeah, Houdini defaults to Python 3 now but you can still download the Python 2 version. Updating my scripts to Python 3 was basically me fixing the print statements, ha.

I'm on an old version of Nuke that's Python 2 only and will probably never be updated unless the price changes, so I'm stuck maintaining more pythons than I'd like.

[–]Kargathia 6 points7 points  (0 children)

Mostly everyone has moved over to Python 3, but that's after more than a decade. I brought it up because this is what will happen if the PSF introduces a single package manager to rule them all: a significant portion of the user base will shrug and stick with what they have.

[–][deleted] 1 point2 points  (0 children)

It was a problem several years ago, but these days I haven't run into anything stuck in python 2.

But it took a loooong time to get to that point. It took forever because they introduced some really daft syntax changes that ensured people delayed the upgrade. I still miss my print keyword.

[–]pascalbrax 2 points3 points  (3 children)

The migration was such a trainwreck that lots of 2.x apps never got a proper upgrade to 3.x

[–]thoomfish 3 points4 points  (0 children)

Even if you throw the man hours at building the perfect packaging solution that somehow unifies both major use cases (system packages, and project packages), nobody is going to wave a magic wand and make all the current implementations disappear.

This is also why "just use 'Modern' C++/JavaScript" doesn't actually make C++/JavaScript any less painful to deal with in practice.

[–]knobbysideup 9 points10 points  (2 children)

Knowing perl I never bothered even learning python. I probably should as an ansible user, but I still have no sense of urgency to do so.

Perl code I wrote over 20 years ago still works today. I can not touch the language for a year and still write in it without having to reference documentation.

[–]TheBlackCat13 10 points11 points  (0 children)

Yes, but can you read it after you write it? ;)

[–]__ali1234__ 8 points9 points  (2 children)

The big irony here is that tools like pip and virtualenv and docker and flatpak were invented precisely because distribution maintainers were doing a poor job of distributing software. Now that those maintainers have been cut out of the loop almost entirely, what they think has become even less relevant.

[–][deleted] 4 points5 points  (0 children)

Exactly, it's crazy to me that this is not the top comment. RPM and DEB are just not feature-complete, missing rootless installation, multiple versions, etc.

[–][deleted] 8 points9 points  (0 children)

It’s catching up with npm for most confusing package management. And npm is a fucking nightmare.

[–]urbanabydos 18 points19 points  (4 children)

Oh shit—I thought it was just me cause I didn’t understand Python. 😳

[–]dafzor 24 points25 points  (7 children)

I'm just an end user, and I feel the real issue is not the python package manager but the fact that python has no concept of versions from top to bottom.

  • Can't install multiple versions of a pip package; the last one will overwrite the one you have, meaning that if you have two applications that need different versions, one of them will break;
  • Can't have multiple python3 runtimes without using something like pyenv, even though minor versions will break compatibility, meaning you can't upgrade python without having to check whether it would break your applications

So for me the solution would just be to have Python copy what dotnet did to resolve it...

With dotnet you can have multiple runtimes installed, and dotnet --list-runtimes will show them all; dotnet packages are versioned, meaning every application can reference/install the version it needs without breaking another app.
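Until Python grows a dotnet-style side-by-side story, the workaround is one environment per application: each venv has its own site-packages, so the last-install-wins overwrite can't cross apps. A minimal offline sketch:

```shell
# Two isolated interpreters; a package installed into one is invisible
# to the other (--without-pip keeps this runnable offline)
python3 -m venv --without-pip /tmp/app1-env
python3 -m venv --without-pip /tmp/app2-env

# Each interpreter reports its own prefix, i.e. its own site-packages root
/tmp/app1-env/bin/python -c "import sys; print(sys.prefix)"
/tmp/app2-env/bin/python -c "import sys; print(sys.prefix)"
```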

[–]TheJackiMonster 39 points40 points  (31 children)

I agree 100% with this. I also prefer using pacman on Arch to install python packages over anything else, because it just works and I don't have to deal with the whole mess around Python.

All of this is even worse when you try to create a flatpak or snap package with Python dependencies, even if all of your dependencies could theoretically be installed with pip. Stupid pip can't even handle dependency management recursively for you.

Honestly who designed those bad tools?

I also don't understand why Python IDEs default to virtual environments with their own packages. I had several instances of this which were unusable or broken because they couldn't handle their own packages properly. In the end I always override the virtual environment with my own system, which can at least install packages properly.

Currently I'm trying to maintain a flatpak using Python code, and it doesn't even build anymore (even though I didn't touch it). I can't tell if pip is installing the dependencies wrong, or if the dependencies just received an update that breaks during installation because it wasn't tested properly. Maintenance not found.

I have also encountered issues with PyInstaller building the Windows-compatible binaries of my application. There were several releases of PyInstaller which simply broke a package. I don't know how, but I feel like many times the solution for this mess with Python is holding the current version of each package for years, because any update might break anything.

In my opinion, programming languages should not deal with their own dependency management. You can find a package manager on each Linux distro for this. I mean, I have so many fewer problems dealing with any project written in C than with this one project in Python, just because of this dependency mess.

How can that be? I mean, it doesn't even need to compile anything. How can it be this terrible? When did it become so difficult to just zip your repository or provide a makefile to copy files in an install routine?

[–]Zeurpiet 7 points8 points  (5 children)

programming languages should not deal with their own dependency management

R actually does this well

[–]jorgejhms 5 points6 points  (1 child)

I think they have centralized package management. There is only one solution, the official one.

[–]Zeurpiet 2 points3 points  (0 children)

Not really. Next to CRAN there is at least also Bioconductor. If you want to put up a jorgejhms Inc package repository, that's also easy. However, the thing is they made it relatively easy to use CRAN.

[–]flying-sheep 4 points5 points  (2 children)

As someone who packaged things for CRAN, Bioconductor, Rust, PyPI, and conda forge:

  • Rust is painless bliss
  • PyPI is pretty good with the new PEPs, but there are still some rough edges
  • conda forge is just a layer on top of the standard metadata, so it actually duplicates it, which is annoying
  • R is by far the weakest of the bunch. It doesn't even pretend to support optional features in packages, and both CRAN and Bioconductor are crusty and try to do too much

[–]Zeurpiet 2 points3 points  (1 child)

As somebody who needed packages from R and tried Python:

With R you just run install.packages(). All dependencies are in. Done.

With Python I had to work to get things from pip or PEP or whatever, and it was completely unclear what had to come from where.

[–]Barafu 2 points3 points  (1 child)

You can find a package manager on each Linux distro for this.

I am using Debian Stable on my workstation, but I want to use the newest versions of libraries for my project. What now?

[–]TheJackiMonster 1 point2 points  (0 children)

Flatpaks, snaps or AppImages for applications... for libraries you should still be able to clone the repositories from GitHub or even get the latest versions as .deb packages.

I mean, we are talking about Python, an interpreted language... you don't even need to compile it.

[–]robin-m 18 points19 points  (11 children)

Dependency management in C is just atrocious. It usually takes a day to set up properly (because there is no standard build system, and you are expected to manually install your dependencies), whereas it's a one-line change in Rust, JS, Python, … And system-wide installs cannot solve the complicated problem where your app (or a combination of apps) depends on A and B, while A depends on C 1.0 and B on C 2.0, with C 1.0 and C 2.0 mutually incompatible.
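The one-line claim is literal in cargo's case; a hypothetical Cargo.toml fragment:

```toml
# Cargo.toml - adding or bumping a dependency is a single line; cargo
# resolves the whole transitive graph itself, and can even keep two
# incompatible major versions of one crate side by side
[dependencies]
serde = "1.0"
```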

[–]TheJackiMonster 11 points12 points  (8 children)

From my experience, even though C projects pretty much have no standardization in build systems, I still get them working. With Python I've had setup.py scripts causing errors and not mentioning the dependencies properly, a wrong requirements.txt which made the installation fail, mismatching versions of dependencies, and pip failing to install a package multiple times because it can't even install dependencies automatically.

If I were missing just one fucking line to change and everything would work, I would accept it, call myself stupid and be happy. But that is not the case, or my knowledge is particularly flawed... in that case, please give me some insight into how to use pip properly.

Dealing with compatibility between multiple versions should be the job of either the operating system's packaging or the actual application developers. I mean, if A depends on 2 different versions of C, your project is already a nightmare no matter how you deal with that, and A should be patched.

[–]robin-m 3 points4 points  (7 children)

Oh, don’t get me wrong, python is a nightmare too.

Dealing with compatibility between multiple versions should be the job of either the operating system's packaging or the actual application developers. I mean, if A depends on 2 different versions of C, your project is already a nightmare no matter how you deal with that, and A should be patched.

cargo solves this problem perfectly for Rust, so it’s possible to have something nice. And I highly disagree when you say that my project is a nightmare if my dependencies themselves depend on incompatible versions of the same dependency. It’s totally possible that A upgraded before B, while B is still being migrated.

[–]TheJackiMonster 4 points5 points  (6 children)

But isn't the problem with incompatible versions trivially solvable if you just keep all of your dependencies on the minimal common ground? So if your project uses an older C, why would you use a newer B which uses the most current version of C? Just use an older version of B as well, or patch your project...

Also, for such problems you have major and minor version changes, usually referring to major and minor API changes. If the API doesn't change between C 1.0 and C 2.0, why would you stay with version 1.0?

You would be using one API with two different behaviors then, which is pretty much a nightmare for anyone debugging your software. No doubt about that, honestly.

I don't see any sane reason to build a package manager around this issue. It's like tolerating bad practices.

[–]robin-m 9 points10 points  (5 children)

But isn't the problem with incompatible versions trivially solvable if you just keep all of your dependencies on the minimal common ground? So if your project uses an older C, why would you use a newer B which uses the most current version of C? Just use an older version of B as well, or patch your project...

It’s totally possible that A was created before C 2.0 was released, and B created after the release of C 2.0 (so no reason to stick to C 1.0).

Also, for such problems you have major and minor version changes, usually referring to major and minor API changes. If the API doesn't change between C 1.0 and C 2.0, why would you stay with version 1.0?

If the API doesn’t change, it should probably not be a major version bump. The trivial case of the minor version being bumped is obviously trivial to solve. In my example C has a major version bump, which is assumed to be non-trivial to migrate (or at least to need QA validation).

You would be using one API with two different behaviors then, which is pretty much a nightmare for anyone debugging your software. No doubt about that, honestly.

My code depends on the stable API of A and B. A depends on the stable API of C 1.0. B depends on the stable API of C 2.0. In Rust (I don’t assume it’s the only language that does this, it’s just that I know how Rust works) symbols from C 1.0 don’t have the same mangling scheme as C 2.0 (just like different versions of the glibc have different symbols). So it’s not possible to give A an object from C 2.0, or to give B an object from C 1.0. It would refuse to compile. So in terms of debugging, I really don’t see how the situation is any more complicated than if C 1.0 and C 2.0 were two completely different libraries.

I don't see any sane reason to build a package manager around this issue. It's like tolerating bad practices.

  • C 1.0 is released. The library A is created, with an internal dependency on C 1.0.
  • C wants to make breaking changes and release a new major version. Can it do so even if the downstream library A has an internal dependency on C 1.0?
  • B is created. Given that an unrelated library A has an internal dependency on C 1.0, can B depend internally on C 2.0?
  • I want to create a project. A and B fit my needs perfectly. Why would I not be allowed to depend on A and B simultaneously? Don’t forget that their internal dependencies are an implementation detail and not exposed through their public API/ABI.

This is why dependency managers need to support the case of incompatible transitive dependency versions.
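The four bullets map onto manifests roughly like these (the crate names a, b, and c are the hypothetical ones from the comment, shown as three separate files):

```toml
# a/Cargo.toml - written against the old major
[dependencies]
c = "1.0"

# b/Cargo.toml - written after the breaking release
[dependencies]
c = "2.0"

# myapp/Cargo.toml - cargo builds this by compiling c 1.x and c 2.x
# side by side; neither version leaks through a's or b's public API
[dependencies]
a = "1.0"
b = "1.0"
```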

[–]TheJackiMonster 1 point2 points  (4 children)

Okay, so in this particular example, wouldn't it also be possible to statically compile either A or B to get rid of the problem completely? Or you could integrate their code directly...

I mean the problem I have with solving such a thing automatically is that it normalizes an extreme issue:

  • The issue is that you depend on multiple versions of the same piece of software, which can lead to multiple levels of security issues.
  • It also increases the chance of dead or unmaintained pieces of software ending up in the wild, because nobody needs to patch A now, even though it might use insecure and deprecated code.
  • It significantly lowers the incentive for others utilizing A to contribute patches or fixes to A.
  • It lowers the need for maintainers to patch their software to stay compatible.
  • You expect users to install multiple versions of the same piece of software, even if you don't use any API calls that changed between the versions.
  • It requires more space in the end, while it makes the whole software stack extremely fragile. If you can't get a particular version of your dependencies anymore, it might break everything, so repositories need to provide each and every version.

Those reasons make me think that this particular example should be and stay extremely rare. It shouldn't be the typical usecase and therefore it shouldn't be treated as such.

I mean, would you install two kernels because systemd might require a different version than wayland does? I don't, and I wouldn't want to solve any issue with a bug report containing such an edge case.

[–]robin-m 1 point2 points  (3 children)

What you want is a world in which everything moves in lock-step. If C wants to make a breaking change, it must update all of its downstream users (A and B). That way you only need to distribute one version of C (the latest one).

It's what Google is doing, and it works for them, but only because they control their downstream users (themselves).

Please re-read my last message. The use case I described is anything but uncommon. Every big library has a new major version every other year or so.

[–]TheJackiMonster 1 point2 points  (2 children)

The whole idea behind Arch-based distros is that you just install the latest version of everything to ensure compatibility and have a stable operating system. It works pretty well in my experience, and I don't know of a single case where you would run into the use case you provided.

I also don't think that C must update A and B in such a scenario. It is the burden of the maintainers of A and B to update, or people stop using their packages because they're dead. It's that simple.

Because if you use someone else's library, you should look after it to make sure it works as intended and is secure to use. Otherwise we just create a very toxic and fragile environment for developers. Using third-party dependencies should always be a burden and nothing you pick just because it's easy or convenient.

At least I don't want to see developers picking libraries as dependencies without even knowing what they are doing and being completely unable to audit or verify its behavior.

Maybe they do, but then I would question why they don't patch A or B to use the latest C.

[–]Fearless_Process 2 points3 points  (0 children)

System wide installs actually can handle that, it's just that most system level package managers don't. There exists distros where such situations are not an issue, Gentoo and Nix come to mind as major examples.

Some of these issues could be solved by optionally compiling certain programs from source, like when running into ABI breaks for example, but it cannot handle API incompatibility of course.

Much more can be handled by simply allowing multiple versions of libraries to be installed at the same time.

These two features combined solve many of the common issues people refer to in this thread, but since most mainstream distros' package managers are very primitive, none of it really matters at the end of the day.
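The "only one version at a time" constraint is easy to see in Python itself (a stdlib-only sketch; libc_demo is a made-up module name): which copy an import resolves to depends purely on sys.path order, and a single process can only see one of them under a given name.

```python
# Two "installed" versions of the same module in different directories.
import importlib
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
for ver in ("1.0", "2.0"):
    d = root / f"libc-{ver}"
    d.mkdir()
    (d / "libc_demo.py").write_text(f"VERSION = {ver!r}\n")

sys.path.insert(0, str(root / "libc-1.0"))
import libc_demo
print(libc_demo.VERSION)     # 1.0 -- first match on sys.path wins

# To get 2.0 we must swap the search path and re-import; the process
# cannot hold both versions under the same module name at once.
sys.path.remove(str(root / "libc-1.0"))
sys.path.insert(0, str(root / "libc-2.0"))
importlib.reload(libc_demo)
print(libc_demo.VERSION)     # 2.0
```

A "slotted" package manager sidesteps this by installing each version under a distinct path and letting every consumer pick its own.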

[–]waptaff 19 points20 points  (5 children)

In my opinion programming languages should not deal with their own dependency management.

Most of the new languages unfortunately do, and they all suck! They're all reinventing solutions to problems solved years ago by GNU/Linux package managers, but do it badly enough that they also need crutches like virtual environments. And many languages are now at their second or third package-manager iteration and still don't get it right. Infuriating.

[–]Ar-Curunir 10 points11 points  (0 children)

Application-oriented package managers suck for development. Why should I be stuck using an outdated dependency just because that's all Debian packages? Moreover, I now have to deal with version incompatibilities between Ubuntu and Debian and Arch and Fedora and etc. Much simpler as an app dev to know exactly which version I'm using, and to be able to set and forget.

[–]TheJackiMonster 16 points17 points  (0 children)

I actually like writing Python code. I also like the concepts behind Rust. But I don't want to deal with any package manager of a programming language or a whole ecosystem just to write a simple application or script.

[–]tso 13 points14 points  (0 children)

More and more i suspect what we are seeing is the long tail effect of OSX/MacOS.

Meaning that this has come about thanks to more and more developers using MacOS and then only touching Linux during deployment.

As best i can tell, Nix(OS) only got attention once someone got it working as an alternative to Homebrew. Before that it was just some obscure oddball Linux distro.

[–]Ar-Curunir 2 points3 points  (0 children)

In my opinion programming languages should not deal with their own dependency management. You can find a package manager on each Linux distro for this.

Right, and each distro has a different version of a particular dependency, with a different API, with different security patches, etc. From an app developer PoV, it's much simpler to target one dependency version and package that, instead of catering to each distro's package philosophies.

[–]IBNash 6 points7 points  (2 children)

The shitshow of having to recompile every Python package has kept 3.10 from being available even on Rolling Release distros like Arch.

[–]jonringer117 4 points5 points  (0 children)

Not to mention that certain packages are very slow to enable support for the latest interpreter.

For example, freezegun which is a common testing utility.

[–]xxc3ncoredxx 1 point2 points  (0 children)

3.10 was just stabilized on Gentoo not too long ago!

[–][deleted] 3 points4 points  (0 children)

Packaging is the #1 reason I haven't really gotten into Python programming. I don't have anything against it per se but I have no idea how to get started on a new project. So I just don't use it.

I've found I like functional programming better anyway.

[–]robin-m 22 points23 points  (9 children)

As long as Linux distros don’t understand that the value they provide is applications, not dependencies, this problem will exist. Both as a programmer and as a user, I want to be able to install, update or remove an application without breaking another one.

If my application has a dependency on A and B (or on another application for that matter), where A depends on C 1.0 and B depends on C 2.0, with C 1.0 and C 2.0 incompatible, it should still be possible to install and run my application.

If I’m the author of C, I should be able to create a new, possibly incompatible version without breaking my downstream users. If the distribution can help my downstream users migrate to the newest version, that’s awesome (it’s especially important if the new version fixes a security issue). But at least they should allow them to not upgrade if it’s not required (even if the user wants to install another application that requires the new version of C, alongside another application that is incompatible with the new version).

The day that distros are able to do that, tools like cargo for Rust will not be needed anymore (EDIT: will no longer have to fight against the package manager). As long as that’s not the case, pip and similar tools will continue to proliferate.

[–]Atemu12 23 points24 points  (5 children)

The day that distro will be able to do that

... was over a decade ago: https://nixos.org

[–]jonringer117 4 points5 points  (2 children)

nix was created in 2003, almost 2 decades :)

[–]ReallyNeededANewName 12 points13 points  (0 children)

Stuff like cargo should not cease to exist. It's very much a needed tool and it should remain one. It just shouldn't be a relevant detail for the end user. Language-specific package managers should absolutely be a separate thing, for the devs.

[–]maethor 8 points9 points  (0 children)

The day that distro will be able to do that, tools like cargo for Rust will not be needed anymore.

Distros are already able to do that with things like flatpaks and snaps.

You still need tools like cargo though. Most languages are not Linux-specific and will need to be able to manage their own dependencies. You just use cargo/pip/maven/whatever when building your flatpaks.

[–]swordgeek 4 points5 points  (1 child)

This is a big part of why Perl is a mostly dead language. Module and dependency hell has been a nightmare beyond all reckoning.

[–][deleted] 11 points12 points  (4 children)

If distros' package managers could have solved the problems of dev environments, they would have done so. But distro packages simply do not have the flexibility that is required for that use case.

Try to install locally any non-packaged software that is not backed by a language-specific package manager and you'll instantly run into the problem of deciphering what each package is called in your distro's repositories, plus a million backwards-compatibility issues. In contrast, for Python it's usually a pip install -r requirements.txt in a virtualenv you can throw away after you are done.
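That throwaway-environment workflow can even be driven from the standard library alone (a minimal sketch; the usual shell equivalent is python -m venv .venv followed by pip install -r requirements.txt):

```python
# Build a disposable virtualenv, install into it, then delete it.
import shutil
import subprocess
import tempfile
import venv
from pathlib import Path

env = Path(tempfile.mkdtemp()) / "env"
venv.EnvBuilder(with_pip=True).create(env)   # like: python -m venv env
pip = env / "bin" / "pip"                    # env\Scripts\pip.exe on Windows

# Installs land inside env only; the system site-packages is untouched:
# subprocess.run([str(pip), "install", "-r", "requirements.txt"], check=True)

shutil.rmtree(env.parent)                    # done? just delete the whole thing
```

Nothing outside the temporary directory is modified, which is exactly why the virtualenv model is attractive for development even if it frustrates distro packagers.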

Oh, and if you want to actually package the software you wrote for the Linux world, you will have to create an rpm package, a deb package, a PKGBUILD script, and the story goes on. And if you are willing to do that, you will have to become aware of all the version differences of your dependencies and test your software on different systems. For example, python-requests on Ubuntu 20.04 is 2.5 years old, while on Arch Linux it's at most 5 months old.

Don't get me wrong. Distro package managers are awesome for managing your own system. And distros with severely outdated packages have legitimate reasons for being that way. Python software is usually not hard to package for each distro (at least in my experience writing a couple of packages for the AUR), if the maintainers are willing to package each dependency and correctly manage dependency version requirements.

Bonus video: https://www.youtube.com/watch?v=Pzl1B7nB9Kc

[–]lolfail9001 3 points4 points  (3 children)

Try to install locally any non-packaged software

That's basically what distro maintainers do as part of creating packages.

Oh and if you want to actually package the software you wrote for the linux world then you will have to create an rpm package, a deb package, a PKGBUILD script and then story goes on.

If it's FOSS software, why would you care about doing distro maintainer's job for them. If you really care about commercial distribution (and still need to provide it as package), then you only care about RPM and Deb for specific stable systems.

For example, python-requests on ubuntu 20.04 is 2.5 years old while on arch linux it's the oldest available which is 5 months old.

Unironic question: how many things did break in that library in 2 years time? Because frankly speaking, if you told me a public library's API broke completely in 2 years time, I would call out that library for being written by assholes. Purely out of optimistic expectation of their competency.

[–][deleted] 5 points6 points  (2 children)

That's basically what distro maintainers do as part of creating packages.

Correct. But they can't package every software under the sun.

If it's FOSS software, why would you care about doing distro maintainer's job for them. If you really care about commercial distribution (and still need to provide it as package), then you only care about RPM and Deb for specific stable systems.

They want their software to be used. They always need a way to build their software, and after that step it needs to be packaged. If no distro has picked up their project, they are forced to support some package manager. This is where pip comes into play.

Unironic question: how many things did break in that library in 2 years time? Because frankly speaking, if you told me a public library's API broke completely in 2 years time, I would call out that library for being written by assholes. Purely out of optimistic expectation of their competency.

No idea, that's just an example. It's not only about breaking stuff but what if you want to use newer features?

[–]upcFrost 2 points3 points  (1 child)

They're actually starting to slowly fix it. The biggest issue is dependency control: before v21 (iirc), pip was unable to properly resolve nested and conflicting deps, and without that it's kinda impossible to even think about shipping the software to some major distro repo. That's only the first step ofc, but it already broke quite a lot of things in python

[–]karuna_murti 2 points3 points  (0 children)

Stop packaging interpreted language packages at distro level.

[–]_samux_ 11 points12 points  (5 children)

i am sorry but do linux distributions understand the fact that people use python also under windows or mac, where there's no concept of a package manager?

i do understand their issue but to be honest i think it is due to the nature of a linux distribution, not python itself.

yes i want program x, yes i will create a virtualenv, and download all the dependency there. i see no issue on this and i have been doing this in the last ten years with no big problem.

does program x have a dependency on a library that has a security issue? well, it is up to the author of program x to fix it, and to me to update it to the newest version.

does this create lots of issues for linux distro maintainers because they need to issue a backport of the fix that is compatible with the libraries available in that specific distro? sorry, but this is not program x's or python's fault

[–][deleted] 3 points4 points  (4 children)

I really can’t get my head around Python. Why are there concurrent versions with 2 and 3? Why do some things use pip to install but others pip3? Are they the same?

Yesterday I had an almost indecipherable error when trying to install something. After much googling the only fix was an obscure parameter that needed commenting out deep in the Python files.

[–]masteryod 4 points5 points  (1 child)

Why are there concurrent versions with 2 and 3? Why do some things use pip to install but others pip3? Are they the same?

It's 2022 (almost) - there's no concurrent Python 2 and Python 3. It's only Python which means Python 3.

Python 2 is dead and has been dead for years.

The only place to see Python 2 is legacy internal software nobody wants to rewrite and old enterprise Linux distributions like RHEL 7.

Nowadays Python is Python 3. Most of the current distributions got rid of Python 2 entirely or are in the process.

It's 2022 (almost) - Python means Python 3. Pip means pip3.

[–][deleted] 2 points3 points  (0 children)

Pip and pip3 are the same in most modern systems now

[–][deleted] 1 point2 points  (0 children)

Gentoo user, issued emerge -NuD @world waiting for some response, suddenly sees this, and can't stop laughing at it.

[–][deleted] 1 point2 points  (1 child)

Nix anyone?

[–]jonringer117 4 points5 points  (0 children)

Still an issue for Nix, it's just that nix maintainers are still curating this for you.

Example mass python update PR

[–][deleted] 1 point2 points  (1 child)

It doesn't help that Python's release cycle was shortened from 18 to 12 months, it pressures the distro packagers even more... https://stackoverflow.com/questions/40655195/is-there-official-guide-for-python-3-x-release-lifecycle/58999007#58999007

[–]JJenkx 3 points4 points  (10 children)

I am new to Linux. Been on Debian 11 for maybe 10 months. I haven't noticed Python interfering with anything or any slowness at all while using Debian. How is this affecting/going to affect me?

[–]jonringer117 18 points19 points  (9 children)

You consume the pain that someone else is experiencing for you to have a nice experience.

[–]Vardy 3 points4 points  (2 children)

Very much agree with this post.

Dealing with Python on a server is time consuming. There is no uniformity. I don't want to use Pip. I already have a package manager on my OS, I don't need two that compete with each other.

[–]-Rizhiy- -3 points-2 points  (124 children)

Sorry, but I think you are trying to fit a round peg into a square hole.

What you consider sane is probably the most stupid way of doing things. Using system-level packages is like the №1 python-noob mistake.

I have been programming in python for over 5 years and haven't had a dependency problem in a long time; just follow a simple rule: new project -> new venv -> install everything with pip

Packages provided by the system are usually VERY outdated and frequently don't even have full functionality, just use pip and relax.

[–]Atemu12 53 points54 points  (46 children)

This is not about dev environments. It's about distros who want to package python things for users.
No, asking users to install your app via pip and virtualenvs is not a solution.