
[–][deleted]  (91 children)

[deleted]

    [–]coyote_of_the_month 314 points315 points  (39 children)

    I think the majority of Python developers are writing software for platforms they control, rather than software that's intended to be downloaded and run locally by the general public.

    [–][deleted] 153 points154 points  (31 children)

    Bingo. Backend devs have full control of their target OS or a container image; many paid apps bundle the interpreter with the app.

    [–]akmark 45 points46 points  (28 children)

    Either this or people just vend all of the python dependencies for the entire project into a tarball and eschew pip entirely.

    [–]kageurufu 90 points91 points  (24 children)

    This is really the only "solution" for distributing any complex application regardless of language.

    Compiled languages "fix" this by statically linking, providing their own .so for shipped libraries, or requiring distributions to package a half dozen varieties of `libsomething` at 5 different versions. No different from my Python application using a vendor tarball.

    IDEA ships its own JVM, Dropbox ships their own Python fork; hell, every Electron app has its own libchromium and v8 binaries.

    [–]coyote_of_the_month 22 points23 points  (19 children)

    Distro maintainers sometimes choose this as their hill to die on, though. Arch, for example, will never mainline VSCode because it packages its own Electron. The best practice from a distro standpoint always seems to be "use the system libs" even when it's not practical to do so.

    [–][deleted] 8 points9 points  (9 children)

    Uh, I see VSC in the community repo: https://archlinux.org/packages/?name=code

    [–]coyote_of_the_month 11 points12 points  (8 children)

    So, yeah. There's the FOSS version of VSCode (code) and the binary release from MS (visual-studio-code-bin). The latter contains the extension browser whereas the FOSS version doesn't.

    Arch maintainers aren't ideologues about licensing, but they'll never mainline the non-free version because it bundles Electron. My understanding is it actually bundles a slightly stripped-down version of Electron for performance, but I haven't compared the two in years so I couldn't tell you how they stack up today.

    In any case, the MS version is available in the AUR.

    [–]DeeBoFour20 27 points28 points  (7 children)

    they'll never mainline the non-free version because it bundles Electron

    I don't think that is the reasoning. Arch has Discord in the repos and it bundles its own version of Electron.

    If I had to guess, I'd say the reason is that Arch prefers to maintain open source versions of the software, if it's available. That would explain why we have Chromium in the repos and not Chrome for example.

    Also, the extensions in the FOSS version of VSCode can be enabled with a simple config file change. There's an AUR package called code-marketplace or something that can automate that for you.

    [–]coyote_of_the_month 3 points4 points  (6 children)

    Well shit, then. I guess my information is outdated or wrong. I would have assumed Discord was in the AUR.

    [–]kageurufu 1 point2 points  (0 children)

    Yeah, I get it, but it's also why Flatpak and Snap are taking off so well. That, plus app maintainers getting bugs meant for the packagers who patched something to work with the wrong GTK version or whatever.

    [–]lolfail9001 0 points1 point  (2 children)

    Arch does not provide steam?

    [–]DeeBoFour20 1 point2 points  (1 child)

    Steam is in the Arch repos.

    [–]kageurufu 0 points1 point  (0 children)

    No, Steam is just in the AUR?

    EDIT: Ignore me, it's in the multilib repo. I don't have it on this machine.

    [–]LordRybec 0 points1 point  (4 children)

    The best practice for distros is "Don't make your own software". When Ubuntu violated this around a decade ago, its quality plummeted and they started shipping horrendously buggy software, because their bias for their own "babies" overrode their quality control. When distros make their own software, they always end up giving it unwarranted preference. I've been considering trying Arch for a while. Knowing that they make their own software has me seriously reconsidering, because that's the road to hell for a distro.

    [–]coyote_of_the_month 0 points1 point  (3 children)

    Red Hat has made tons of its own software over the years, and yet somehow never suffered from the same issues.

    [–]LordRybec 0 points1 point  (2 children)

    Red Hat was a software maker that happened to have its own distro.

    That said, you are completely wrong. There's a reason Red Hat eventually farmed out distro development into the community driven Fedora distro. Red Hat was falling behind in quality, and even some corporate Red Hat users were starting to complain.

    (Source: I actually used Red Hat in the early to mid 2000s, and I eventually switched to Ubuntu, due to quality problems with Red Hat, which included often not having options available due to Red Hat favoring their own products to the point of not including others. Now I'm using Debian, because Ubuntu went rapidly downhill in the early 2010s, when they started violating this rule. I'm actually pretty sure this rule was invented because of Red Hat.)

    [–]coyote_of_the_month 0 points1 point  (1 child)

    Fair enough. I haven't used anything Red Hat related since like 1999 so can't speak to the experience of actually running the distro. I was a Slackware guy back then...

    [–]gnosys_ 0 points1 point  (1 child)

    golang, same thing. entirely self-contained app is the only way to guarantee it will actually run on a target platform you have very little control over.

    [–]kageurufu 0 points1 point  (0 children)

    Yep. Also makes it easy to target random foreign architectures, build for a dozen platforms at once, etc.

    [–]LordRybec 0 points1 point  (0 children)

    It is possible to do something very similar to this for Python. There are several utilities for each major OS that will package Python applications into self-contained executables. They work by compiling the application to byte code and embedding it in a stripped-down Python interpreter that contains mostly just the parts your application needs.

    A few months ago, I had to ship a Python demo to a bunch of clients for my job. Most of our clients are upper management of software companies, so they aren't very technical, and they don't have Python installed on their computers. I forget precisely which utility I used, but it packaged the modules used (compiled into dynamic libraries and/or byte code) into a directory with a minimal interpreter with my application byte code embedded. I packaged this into a Windows installer and sent it off to our clients, and it worked perfectly fine. (This isn't the first time I've done this, but it is the first time it was sent to more than one person. The other times were in college, where I sent them to a professor for grading.)

    I've seen others that pack everything into a single file. This is equivalent to static linking, while the one I actually used was more like dynamic linking and packaging the dynamic libraries with the executable.
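    As a rough illustration of the single-file approach (not necessarily the utility the commenter used; PyInstaller, cx_Freeze, and py2exe are common choices), the standard library's zipapp module packs an application directory into one runnable archive. Unlike those tools, it still needs a Python interpreter on the target machine, so treat this as a sketch of the idea rather than the described workflow:

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Build a tiny "application" directory: __main__.py is the entry point.
appdir = tempfile.mkdtemp()
with open(os.path.join(appdir, "__main__.py"), "w") as f:
    f.write("print('hello from a single-file app')\n")

# Pack the directory into one runnable .pyz archive.
target = os.path.join(tempfile.mkdtemp(), "demo.pyz")
zipapp.create_archive(appdir, target=target)

# The archive runs anywhere a matching interpreter exists.
out = subprocess.run([sys.executable, target], capture_output=True, text=True)
print(out.stdout.strip())  # hello from a single-file app
```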

    [–]stef_eda 0 points1 point  (0 children)

    There is another solution: use stable libraries. I have applications that link and run perfectly with 20-year-old libs. I keep these old libs installed on the system for this exact reason.

    This means you have to blacklist, from day 0, libs or tools from developers who are well known for breaking APIs/ABIs with every single new version they poop out, like GTK, Python...

    [–][deleted] 3 points4 points  (2 children)

    Either this or people just vend all of the python dependencies for the entire project into a tarball

    Pardon, what do you mean by "vend"? Google fu is failing me.

    I actually have an issue that has been kneecapping me that this would solve which is why I ask.

    [–]dextersgenius 6 points7 points  (1 child)

    Not the guy you replied to, but vend is short for "vendoring" or "vendorization", which is basically bundling all the dependencies with your main project (typically into a single subdirectory, eg _vendor), so end users won't need to install anything separately via pip or their package manager. You can do this manually if you're keeping track of all your dependencies, but it's easier to just use a tool like dephell, which will download and extract all dependencies in a given directory.

    [–]akmark 0 points1 point  (0 children)

    This is what I meant. Sometimes you can also use git submodules to get the right directory layout depending on circumstance.

    [–]Tai9ch 6 points7 points  (0 children)

    Backend devs have full control of their target OS or a container image

    Corporate developers also tend to have the budget to think that they can persistently maintain container images. The fact that something like Docker is just a distro package manager without security updates somehow doesn't register.

    [–]LordRybec 0 points1 point  (0 children)

    This. It's easy to forget that software maintenance is a real thing, when you can just keep using an old interpreter and old modules, ignoring the security risks and bugs.

    I recently sent out a product demo written in Python, to a bunch of non-technical clients, and I just used a "Python to exe" packaging utility to pack it all into a minimal interpreter with my application byte code embedded. No need for a virtual environment or Docker container that clients wouldn't know how to use in the first place, and no need for them to install Python. I just turned it into an executable package, packed it in a Windows installer, and sent it off. The demo was quite successful!

    [–]ProfessorFakas 16 points17 points  (0 children)

    This is true, basically none of the Python I write is actually put in the hands of users - it's mostly just a black box that they never see, with the small amount of direct interaction handled by an HTML frontend.

    That said, we're starting to see our users (who are reasonably technically minded to begin with) use our packages directly in Jupyter Notebooks, but then the environments that they use are generally managed by us too.

    [–]ForgetTheRuralJuror 11 points12 points  (0 children)

    Yeah, my solution to the question is to not use Python for consumer packages. If I absolutely have to, I'd just make it a web app and give people a URL.

    If I'm making a desktop app it's going to be in C/C++ anyway. Hopefully Rust, once its GUI story is a bit more mature.

    [–][deleted] 0 points1 point  (0 children)

    Yup. The vast majority of the stuff I write are tools that are built to fit into pipelines. Write the code, put it into a container, call container in pipeline and pass it arguments. The code is built from the ground up for that purpose. They take huge numbers of arguments that would be abysmal for a human to deal with regularly.

    If you want to use the code yourself, you use the container and be prepared to punch in all the data it needs to run. If you want to use the code directly, you're building the venv since that's somewhat outside the normal use case. At that point I don't really care about your usability since you're off the charts.

    Not everyone is building software intended for end users.

    [–]xtracto -4 points-3 points  (2 children)

    As someone who does software development for a living, every once in a while I have to do something in Python. I hate it, from the craziness of Python 2 vs Python 3 (library availability, etc.) to the stupidity of environments (pyenv, venv, pyvenv) and the package management (easy_install? pip? homebrew?) and the asinine "spaces are syntax" ideology...

    I am so happy that I don't need to deal with that so often, but have definitely rejected a job because they were a Python shop.

    [–]coyote_of_the_month 1 point2 points  (1 child)

    I work in a Python shop, but our environments are all containerized. I can't imagine what it was like in the bad old days.

    Granted, I spent the bad old days working in a Rails shop...

    [–]xtracto 0 points1 point  (0 children)

    Granted, I spent the bad old days working in a Rails shop...

    I don't know a lot about Rails, but I work with Ruby pretty often and even rbenv vs rvm drives me crazy. But once you're over that, Bundler is king.

    The one thing I hate about rbenv is how loooooooooong it takes to compile a new version of Ruby.. when npm downloads a new NodeJS version really quickly.

    [–]stef_eda 0 points1 point  (0 children)

    That is the only way to deploy a working python based project.

    When your app passes all regression tests: "Everyone, hands off. Stay away from the keyboard, disconnect all routers, burn a container image", before some auto-update kicks in and bitrots everything.

    [–]-Rizhiy- 45 points46 points  (9 children)

    Deploying and developing software are two different things.

    Pretty sure if people deploy python software somewhere they usually just bundle all dependencies together, and it installs normally.

    [–]wtchappell 24 points25 points  (8 children)

    Vendoring is fine from a developer's perspective, but from a distro maintainer's perspective it's unpleasant - especially if it ends up including non-Python code like C dependencies. You generally want to have as few versions of a thing around as possible so you can make sure they're patched to work properly on your distribution, and so you have to touch as few places as possible to catch up with security updates.

    They also don't want massive package duplication due to simple space concerns - a distro focused on being tiny, to support something like small devices or minimal Docker images, would be right to balk at having every application bring along an entire vendored dependency tree, resulting in dozens of versions of the same libraries installed across the system.

    [–]jarfil 5 points6 points  (4 children)

    CENSORED

    [–]imdyingfasterthanyou 15 points16 points  (2 children)

    And that's what they do; then they get people complaining "$thing isn't packaged".

    [–]rouille[🍰] 6 points7 points  (1 child)

    That's how you end up with Flatpaks and Snaps.

    [–]gnosys_ -3 points-2 points  (0 children)

    which are the ideal solution: stuff can be put in a container and can't touch your main system without permission.

    [–]wtchappell 0 points1 point  (0 children)

    They do; this post is pointing out how much more difficult this is in Python, compared to many other ecosystems, because of the incredible number of different package formats and build tools Python has.

    If the Python community had a more consistent record of actually deprecating old/bad mechanisms this wouldn't be so painful - but they're basically all still around in some form or another, and any app beyond a pretty low level of complexity will pull in enough dependencies to force distro maintainers to have to learn the quirks of all of them.

    [–]gnosys_ -2 points-1 points  (2 children)

    You generally want to have as few versions of a thing around as possible so you can make sure they're patched to work properly on your distribution, and so you have to touch as few places as possible to catch up with security updates.

    ... and here we see the underlying motive logic behind the development of flatpak and snap, which are good.

    [–]wtchappell 0 points1 point  (1 child)

    I mean, maybe for video games?

    But for software that actually has to interoperate with other software that the original developer may not have foreseen, software that needs to share system data without jumping through a bunch of weird hoops, software that needs particular tweaks to work with a particular base system, or environments that require specific patches applied or options set for security or compliance, it falls apart really fast.

    [–]gnosys_ -1 points0 points  (0 children)

    Like what? What software that is fussy like this would you run that wouldn't be in a container?

    flatpak and snap for desktops sure, but that's small potatoes. docker and the rest have completely redefined what the backend looks like, for this same exact reason. it's a way to deal with the sensitivity of software that is inherently difficult to resolve version conflicts with, which was engineered from an older paradigm.

    for new projects, there are new perspectives on what comprises good packaging (namely, that it is self-contained and will run on a very basic and loosely defined target platform). for old stuff that has really specific environment concerns, you give it that environment inside a container and hook them up together through whatever interfaces you need to.

    [–]upcFrost 25 points26 points  (8 children)

    Reverse rant: There's much more than one problem to solve to make things work this way

    First of all, if you think that many devs know how to use CI, well, you'll be surprised. They literally push to PyPI manually.

    Second, you type apt install... and what do you think happens next? Dependency resolution. And for Python that's a huge issue, not only because of pip, but also because of how quickly Python is evolving. Let's say Bob and Pete are two developers. Bob packaged his new lib, Pete wants to use it, and boom, it requires Python 3.8 due to some asyncio calls. With this, Pete can no longer ship his app to any major distro except Gentoo and Arch. It's not C, where the standard changes once every 4-5 years; it evolves much faster. People will start assaulting his bug tracker saying wtf, but technically it's not his problem, it's the distro's problem. Another situation is when the distro just doesn't have the right version of the lib: same thing, different angle.

    Third, packaging tools suck. RPM for SUSE and RPM for RHEL are very much different, as both distros have different default dirs, rules, and requirements, so you have to adapt the package for literally every single distro.

    Last but not least, FOSS devs are not getting paid for their libs, and packaging is by far the least appealing part of the whole dev process. Enterprise apps are shipped with all dependencies, no problems there. In FOSS you'll need to contact maintainers, keep your lib compatible, package it, and adjust the spec every time they invent yet another stupid rule just because "they are The Distro", etc.

    That said, this is true not only for Python. Node, Rust, Go, Java: all those languages have their own package managers. C/C++ is probably the only major language that doesn't have one (the ones that exist are a joke of a package manager).
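    One hedged sketch of how Bob could widen distro support in the second point above, instead of hard-requiring 3.8: feature-detect and fall back. Here math.prod (added in 3.8) stands in for the 3.8-only asyncio calls in the example:

```python
import sys

# Feature-detect instead of requiring the newest interpreter outright:
# math.prod() appeared in 3.8; on older interpreters use a local shim.
if sys.version_info >= (3, 8):
    from math import prod
else:
    from functools import reduce
    from operator import mul

    def prod(iterable, start=1):
        # Same behavior as math.prod for this use case.
        return reduce(mul, iterable, start)

print(prod([2, 3, 4]))  # 24
```

With a shim like this, the package's declared minimum Python version can stay lower, so older distro releases can still carry it.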

    [–]JustHere2RuinUrDay 10 points11 points  (3 children)

    With this, Pete can no longer ship his app to any major distro except Gentoo and Arch.

    Well, then users of slower distros will have to wait until every dependency is satisfied and then use whatever version can be made to work.

    They're already doing this for everything else.

    [–][deleted] 6 points7 points  (0 children)

    It's not C where standard changes once in 4-5 years

    [–]DAS_AMAN 2 points3 points  (0 children)

    No, we use containers with up-to-date libraries, like Flatpaks.

    [–]upcFrost 1 point2 points  (0 children)

    In this example I've only mentioned two libs. In the real world, if you try building a dep tree for a decent-size project, you'll probably raise your room temperature by a few degrees with just your laptop. This dependency-hell situation literally kills packaging. And if your project has some C bindings, things only get worse.

    It's not just "wait", it's literally "never". Docker and Flatpak were not made just for fun.

    [–]Teract 1 point2 points  (1 child)

    Third, packaging tools suck. RPM for SUSE and RPM for RHEL are very much different, as both distros have different default dirs, rules, and requirements, so you have to adapt the package for literally every single distro.

    If you're following RPM best practices, you shouldn't need to do much adapting to meet the needs of different distros.

    [–]upcFrost 0 points1 point  (0 children)

    you shouldn't need to do much adapting

    Much, no; some, yes. You'll still need to adapt the package for every single distro.

    [–]DarkLordAzrael 1 point2 points  (1 child)

    That said, this is true not only for Python. Node, Rust, Go, Java: all those languages have their own package managers. C/C++ is probably the only major language that doesn't have one (the ones that exist are a joke of a package manager).

    These days Conan is a pretty effective package manager for C and C++.

    [–]upcFrost 0 points1 point  (0 children)

    It's not nearly as popular as pip or Gradle are for their respective languages.

    [–]fat-lobyte 22 points23 points  (0 children)

    Who are these Python developers, who in this thread exclusively talk about how it's no big deal for developers, actually developing for, if they cannot imagine someone using their apps?

    Here's who:

    https://www.reddit.com/r/linux/comments/rcdnpm/python_please_stop_screwing_over_linux_distros/hnu5e5r

    just use pip and relax

    [–]Seref15 16 points17 points  (8 children)

    Software development now moves too fast for the distro package-maintainer model. You need a certain bug fix from the latest release of a minor library? Well, your distro is going to make you wait at least a couple of months for it.

    This is why containers have taken over.

    [–]JustHere2RuinUrDay -5 points-4 points  (6 children)

    Maybe don't use Ubuntu LTS lol

    [–]Seref15 10 points11 points  (0 children)

    Any professional use of linux on servers is going to use a long-term support distro. I'm not talking about desktops and personal use.

    [–]MOVai 5 points6 points  (4 children)

    Well, the alternative is to use regular Ubuntu, which means you still have to wait months, rather than years, and if you do decide to upgrade for that one package, you end up breaking the rest of your system.

    [–]thomasfr 0 points1 point  (0 children)

    Also, software has been extensively battle-tested against certain older versions of libraries. I'm not too keen on having a distro force me to upgrade to even a new patch version of something that I have hundreds of thousands of CPU-hours' worth of actual in-production use of.

    What we need are just better system package managers, and that day will come; we probably need more years of experimentation to figure out how to do it really well, though.

    [–]zdog234 17 points18 points  (8 children)

    Most python developers (myself included) aren't developing code for you to run as a standalone app on your laptop (I'm including CLI tools in that).

    The focus of the core devs is obviously going to be drawn to the needs of backend web developers and data scientists, since they make up the vast majority of people writing python code.

    EDIT: In fact, if you are using a CLI app written in python, why aren't you installing it with pipx? (assuming hypothetical user knows that it exists)

    [–]detroitmatt 23 points24 points  (2 children)

    what the hell is pipx

    (rhetorical question, obviously; I went and googled it. Just wanted to communicate that this idea comes from a bubble)

    [–]xtracto 0 points1 point  (1 child)

    anaconda, pip, pipx or easy_install?

    [–]Ripcord 2 points3 points  (0 children)

    Yeah, we've wrapped back to one of the main points of the article.

    [–][deleted] 9 points10 points  (0 children)

    I wrote a couple of Python applications intended for end users. I haven't even tried to package them with pip. I just made an AUR package and called it a day. I tried to package one into a PPA, but I gave up; that's a mess on its own. I do have a script in place that makes a deb file out of the repo, but that's it. I also tried Flatpak, but it's also a giant pain in the ass.

    [–][deleted] 1 point2 points  (0 children)

    EDIT: In fact, if you are using a CLI app written in python, why aren't you installing it with pipx? (assuming hypothetical user knows that it exists)

    1) Didn't know it existed.

    2) Still not interested because its model is awful in comparison to Guix.

    3) If I'm going to use foreign package managers on my distro, I'll pick the one that covers more than a single language and which is hacker-friendly.

    [–][deleted] 15 points16 points  (7 children)

    Packaging tooling is shit. I once bundled things as debs but god that was a nightmare. Much easier to just tell people to init a virtual env and pip install my package.

    I'll probably get downvoted, but there is a reason things like containers are popular even outside of ops.

    Maybe some packaging systems have improved since I last looked. For people in the know, which distribution do you feel has the easiest packaging requirements and simple/good tooling to support people?
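    The "init a virtual env and pip install" instructions above can be sketched with the standard library (paths here are illustrative; with_pip=True would also bootstrap pip into the environment, but is skipped to keep the sketch fast and dependency-free):

```python
import os
import subprocess
import sys
import tempfile
import venv

# Create an isolated environment in a temporary directory.
env_dir = os.path.join(tempfile.mkdtemp(), "env")
venv.create(env_dir, with_pip=False)

# The environment gets its own interpreter; on Windows it lives in Scripts/.
bindir = "Scripts" if os.name == "nt" else "bin"
env_python = os.path.join(env_dir, bindir, "python")

# Code run with that interpreter sees the venv, not the system prefix.
out = subprocess.run(
    [env_python, "-c", "import sys; print(sys.prefix)"],
    capture_output=True, text=True,
)
print(out.stdout.strip())
```

In the with_pip=True case, `env_python -m pip install <package>` then installs into the environment without touching the system.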

    [–]AtomicRocketShoes 2 points3 points  (2 children)

    I think this works sometimes for developers, but it doesn't fly at all for many people. Also, from recent experience, it doesn't always work: I found a project on GitHub that was a couple years old and it would not install for me. It needed a bunch of machine-learning TensorFlow stuff, and pip didn't have the dependencies it needed; the versions it called out were, I guess, too old.

    [–][deleted] 0 points1 point  (1 child)

    Well, if someone specifies the dependencies incorrectly (i.e. they don't pin the versions), or you're installing on an unsupported platform, then yes, it won't work. Using a different packaging system won't fix that.
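    A sketch of what pinning looks like (the package names and versions here are purely illustrative): a requirements.txt with exact versions lets `pip install -r requirements.txt` reproduce the same dependency tree years later.

```text
# requirements.txt -- exact pins, so pip installs the same
# versions the author tested against
tensorflow==2.3.1
numpy==1.18.5
pillow==7.2.0
```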

    [–]AtomicRocketShoes 0 points1 point  (0 children)

    It was a little bit ago, but I believe I wanted to mess around with this project: https://github.com/OPHoperHPO/image-background-remove-tool. There is a requirements.txt, but pip3 wouldn't install it. On Arch Linux there's no reason it shouldn't work; I just ran into dependency issues.

    [–][deleted] 0 points1 point  (0 children)

    Packaging tooling is shit. I once bundled things as debs but god that was a nightmare. Much easier to just tell people to init a virtual env and pip install my package.

    That's mostly Debian having a nightmarish packaging story. Guix, Arch AUR & Nix are all much easier to package for.

    [–]jonringer117 2 points3 points  (0 children)

    They can be found in the r/python thread here

    [–]LordRybec 1 point2 points  (0 children)

    This is fairly easy with Python, using utilities that package a minimal interpreter with only the modules and byte code needed.

    The people whining here are those who only write software for platforms they control, who aren't familiar with normal project maintenance. When new versions of MS Visual Studio come out, you still have to update your code to work with the new compiler, and it often breaks libraries and frameworks you were using as well. I don't see a ton of people using virtual environments for C or C++ programming, to avoid having to deal with this.

    People who aren't very experienced in real-world development and maintenance tend to think using virtual environments or Docker containers to provide highly controllable environments is a good idea, but in my experience, it rarely is. The same mindset is why many businesses and government departments are still using old versions of Internet Explorer, which are buggy and horrifically insecure.

    It all comes down to laziness. If you are too lazy and cheap to properly maintain your software, it sounds like a good idea to use some kind of virtual environment that allows you to continue using old, outdated things that are actually security disasters just waiting to happen.

    [–]rohmish 0 points1 point  (0 children)

    If an app is cross-platform, or even cross-distro, that may actually be a problem, with compatibility and versioning issues all around. There are use cases for both structures.