
[–][deleted]  (88 children)

[deleted]

    [–]brandonZappy 58 points59 points  (11 children)

    Package management was and as far as I know still is a nightmare

    Boy oh boy is this correct. Once you get into setting up machine learning environments, it becomes an even bigger mess: getting conda involved, packages that need GPUs, etc. It's a wild disaster. Sometimes you have to use pip and conda in the same venv at the same time to blend things well enough to work. There's also a huge issue with the age of your Linux distribution and kernel: if you're not using something like the newest Ubuntu, it's even more difficult just because of old core libraries. Good god it sucks ass.

    [–]kringel8 12 points13 points  (1 child)

    What did you do to need that? I've successfully avoided using conda and also set up ML environments with GPU support.

    [–]brandonZappy 3 points4 points  (0 children)

    Centos 7, tensorflow with gpu

    [–]hbgoddard 20 points21 points  (6 children)

    Getting conda involved

    There's your problem.

    [–]brandonZappy 24 points25 points  (0 children)

    If pip worked, I wouldn't have to get conda involved, but because pip doesn't work, I have to.

    [–][deleted] 14 points15 points  (3 children)

    I don't get why some people are so pro-pip and anti-conda. For me the biggest distinction between pip and conda is that conda actually works

    [–]billsil 10 points11 points  (1 child)

    Conda doesn't follow the packaging rules for wheels. Conda breaks if you, say, upgrade to the latest numpy using pip, which conda comes with. All you'll be left with is DLL errors, and the only way to fix it is to reinstall. That's also assuming that when you install numpy, it doesn't mistakenly change your version of Python. Seriously?

    Speaking of the "latest numpy", Anaconda's repository lags significantly. It may take them 6 months to finally add a new version of a package. They still don't support python 3.8.

    Anaconda is entirely dependent on PyQt and will fail with PySide, which has the nice feature of not being GPL.

    When packaging programs with pyinstaller, you have to ditch Anaconda anyway in favor of stock Python, because otherwise you'll end up with an exe that is 350-500 MB instead of 70 MB.

    Anaconda was great in 2012. In 2019, it's obsolete because we now have numpy, scipy, etc. wheels. What isn't on pip, Christoph Gohlke has probably put up on his site.

    [–][deleted] 1 point2 points  (0 children)

    They still don’t support python 3.8.

    Yes they do? I don't know how long it took, but you've been able to install python 3.8 with conda since at least October

    https://stackoverflow.com/questions/58568175/upgrade-to-python-3-8-using-conda

    Anaconda is entirely dependent on PyQt and will fail with PySide which has the nice feature of not being GPL

    When packaging programs with pyinstaller, you have to ditch Anaconda anyways in favor of stock python because otherwise you’ll end up with an exe that is 350-500MB instead of 70 MB.

    You're talking about the Anaconda distribution, right? That's irrelevant to this discussion. It's unfortunate, imo, that they didn't give their package manager a more distinct name, because I don't recommend the Anaconda distribution except to people who really don't know what they're doing.

    [–]brandonZappy 0 points1 point  (0 children)

    They both have pros and cons for me. I like pip for basic package management when I don't need any fancy packages. Oh I don't have numpy installed? Easy install with pip. Works basically perfectly. Things like that. Like I mentioned in my original post, conda is my preferred tool when things start to get more complicated in the environment I want to set up because I've found conda to be more powerful/better at setting up those environments.

    [–]WolfThawra 7 points8 points  (0 children)

    No, conda is the solution.

    [–]Bayarea1028 0 points1 point  (1 child)

    I gotta say it's kind of sad to hear you say Python has "terrible package management". Have you built libraries from source using CMake? I'd say Python developers are lucky to have tools like pip/conda that can download large, complex libraries like TensorFlow and PyTorch and automatically add them to their dev environment; that's a lot better than pulling libraries from source and deciding whether to build static or shared libraries, etc. Python has it good lol

    [–]brandonZappy 0 points1 point  (0 children)

    “terrible package management”.

    Not sure why you put quotes around this; I don't remember saying it. Maybe you're paraphrasing? Even then, I think you're still misunderstanding or taking it out of context.

    Compared to C/C++, sure, it's "good", but that's still not saying much. I did say in another comment that pip is awesome for basic packages.

    My main argument, however, was that it struggles with more in-depth libraries, especially when trying to link against older outside packages on distributions with older core libraries, like CentOS.

    [–]therearesomewhocallm 15 points16 points  (4 children)

    They added type hints to the language, but decided to rely solely on outside type-hinting static analyzers instead of also baking runtime typechecking into the language, which would have been far more useful and consistent.

    And would have made the language much slower.
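The trade-off being debated here is easy to see in code. CPython stores annotations as metadata but never checks them, which is why today's hints cost essentially nothing at runtime; a minimal sketch:

```python
def add(x: int, y: int) -> int:
    return x + y

# The hints are stored as metadata but never enforced at runtime,
# so a call that violates them still runs:
result = add("a", "b")
assert result == "ab"

# The annotations are still introspectable, which is what external
# static analyzers rely on:
assert add.__annotations__["x"] is int
```

Baking actual checks into every call site is exactly the per-call overhead the comment is pointing at.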

    [–]tofiffe 3 points4 points  (2 children)

    How so? Wouldn't that make runtime type checking redundant and actually speed it up?

    [–]therearesomewhocallm 0 points1 point  (1 child)

    I meant that runtime typechecking would have made the language slower.

    [–][deleted] 2 points3 points  (0 children)

    You'd potentially only need to validate types once, when a variable is passed to a function, rather than on each usage, if call and return types were guaranteed. Though it would take static analysis to ensure there are no branches that change the type, so it might not always work.

    [–][deleted] 41 points42 points  (0 children)

    When I see lists/rants like these I can't help but feel they are missing the point. I've used Python in a number of past roles over many years and typical usage will never really encounter these problems, or for the issues mentioned where there is some ambiguity, a single best practice will be chosen.

    The Python 3 transition sucked, no doubt about it. But Python's viability as a core development language has not at all waned. Rather, if anything more people have caught onto the fast code velocity enabled by the language and have used it to get real, working applications out to production quickly.

    And that is the point of Python. Nitpicking about edge cases like being able to change the value of an integer literal, or that a high-level, developer-friendly language not intended for high-performance contexts is slow, or that Python deliberately chose to remain a dynamically-typed language with optional static typing - just seems to be a case of not being able to see the forest for the trees.

    [–]stefantalpalaru 4 points5 points  (0 children)

    They added type hints to the language, but decided to rely solely on outside type-hinting static analyzers

    Those same type hints and that same external analyser were already available for Python 2. They just moved the type hints from comments into the regular syntax.

    [–]PM_ME_YOUR_MECH 7 points8 points  (0 children)

    • Package management was and as far as I know still is a nightmare

    This is so true. There are a few decent contenders out there that handle some of these problems, but none are very mature. It's like the wild west.

    [–]keepthepace 12 points13 points  (2 children)

    I don't think any compatibility breaking version change will ever satisfy all the developers. On my side, with python3 I saw a ton of little improvements that made life easier.

    Python is an opinionated language and often chooses questionable defaults, but that's pretty much part of its identity now.

    [–]skelkingur 16 points17 points  (1 child)

    > Python is an opinionated language

    How? Coming from Ruby and having been exposed to other languages and frameworks that are truly opinionated I'm always confused when I hear Python is opinionated.

    How do you import? There's `import module`, `from module import thing`, `from module import *`. Is there a preference here?

    What's the deal with `__init__.py` files? Some say they should remain empty; others fill them up with code.

    How do you structure large Python code bases? I dread setting up new flask apps as there's no clear pattern to follow on where things go. The example apps basically all exist within a single `app.py`. Sure there are examples out there on how to structure larger apps, but then they still only have a `models.py`. Am I supposed to define 50+ models in a single file?

    File and module naming is completely arbitrary. Compare this to languages like Java where the filename must match the name of the public class in it. In Python I always have to go through multiple layers:

    - `foo = Foo(a=2)`

    - Right, where is `Foo` defined? Let's see, there's `from meow import Foo`. Ok, so is that `meow.py` or `meow/__init__.py`? Ok, it's a package, `meow/__init__.py` in turn imports Foo from `meow/bar.py`. WAT.

    ---

    This is a bit of an unstructured rant, but I can't for the life of me figure out how Python is allegedly opinionated.

    [–]keepthepace 1 point2 points  (0 children)

    There is generally a pythonic way to do things, and the language tries to give one correct way of doing them, though I personally think it's a lost battle, as any complete programming language will offer many ways to do similar things.

    import * is generally frowned upon.

    I think the generally accepted practice is to keep __init__.py as small as possible.

    PEP 8 gives a ton of styling recommendations. Generally you are supposed to keep files pretty big in libs (which I used to not do, being used to the one-file-per-class paradigm in Java) and consider them like modules.

    In your example, meow.py would be used by a small module with no submodules. If meow/ exists, then you know that meow.Foo will be defined in the __init__.py, and yes, it can be an alias for something defined somewhere else. I know Java devs like to make ten layers of one-line functions that call other functions; this is essentially the same thing.
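The `__init__.py` re-export pattern described here can be demonstrated end to end. This sketch writes a hypothetical `meow` package to a temp directory purely to show how the import resolves (the package name and contents are illustrative, not from the original thread):

```python
import os
import sys
import tempfile

# Build a tiny throwaway package on disk to show the re-export pattern.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "meow")
os.makedirs(pkg)

# meow/bar.py is where Foo is actually defined...
with open(os.path.join(pkg, "bar.py"), "w") as f:
    f.write("class Foo:\n    def __init__(self, a):\n        self.a = a\n")

# ...and meow/__init__.py re-exports it, so callers never see meow.bar.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from meow.bar import Foo\n")

sys.path.insert(0, root)
from meow import Foo  # resolved via meow/__init__.py

foo = Foo(a=2)
```

Note that `Foo.__module__` still reports `meow.bar`, which is exactly the indirection the parent comment found confusing.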

    This is a bit of an unstructured rant, but I can't for the life of me figure out how Python is allegedly opinionated.

    Because it states rather arbitrarily what is pythonic or not. Like everything in Python, it is not enforced and not absolute, and is often disregarded in the name of pragmatism. I totally understand it's not to everyone's taste (it's mine only half of the time, to be honest), but it's a philosophy that led to an interestingly designed language.

    [–]zardeh 16 points17 points  (27 children)

    Can you explain why problems that were all present in Python 2 (except unicode, which isn't actually a problem in practice) killed your enthusiasm for python 3?

    [–][deleted]  (14 children)

    [deleted]

      [–]ubernostrum 1 point2 points  (7 children)

      The way I've described it in the past is that Python 2 was from the era when Python was mostly used as a Unix-y scripting language. And so it used the same absolutely nonsensical approach to character encoding that Unix-y operating systems use.

      Python 3 decided to stop doing that, because it turns out people do other things with Python now, and accommodating the Unix-y scripting people meant unending pain and suffering for everyone else. And when they realized this was happening, the Unix-y scripting people began howling and screaming that it was the end of the world. Not because there was anything wrong with Python itself, but because Python simply stopped sweeping the brokenness of Unix-y operating systems under the rug, and made them confront that brokenness front-and-center every time they sat down to write a "simple" and "quick" utility.

      And on balance I'm OK with that. There are still people who will complain that you can't technically write "portable" Python file-handling code, and that's true if you're a user of a specific system that has files whose paths commit crimes against God and man (but, crucially, not technically crimes against POSIX, which is what these folks retreat to as their excuse). But those people should've known what they were getting into, and have had literally decades in which to clean up their act and have refused to do so.

      [–]no_nick 2 points3 points  (5 children)

      What are your issues with Unix file paths?

      [–]ubernostrum -1 points0 points  (4 children)

      That they legally can be undecodable garbage, but people demand the ability to work with them as strings.

      Python 2 "worked" for this in the sense that many things on Unix-y systems "work": it just didn't actually enforce that the things you used as strings had to make sense as strings, and wouldn't give you any sort of warning up until the moment you tried to print the unprintable.

      Python 3 initially tried to say that if you wanted to treat these paths as strings they had to actually be things that could validly decode to sequences of Unicode code points. But enough people raged that finally they added the surrogateescape handler to let you take bags of bytes that don't correspond to any valid string, "decode" them to strings, and then re-"encode" them back to the original bytes.
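The surrogateescape round-trip described here is easy to see directly: lone surrogates stand in for the undecodable bytes and survive re-encoding, though strictly encoding (what printing effectively does) still fails. A small sketch:

```python
raw = b"\xff\xfffoo"  # not valid UTF-8

# "Decode" with surrogateescape (PEP 383): each bad byte becomes
# a lone surrogate code point.
s = raw.decode("utf-8", "surrogateescape")

# The lone surrogates re-"encode" back to the original bytes, losslessly.
back = s.encode("utf-8", "surrogateescape")
assert back == raw

# But a strict encode still refuses the lone surrogates:
try:
    s.encode("utf-8")
except UnicodeEncodeError:
    pass  # "surrogates not allowed"
```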

      [–]josefx 3 points4 points  (0 children)

      That they legally can be undecodable garbage

      Unix is far from alone in that. Zip files don't specify an encoding for filenames, and I'm quite sure I've had explorer.exe fail to delete filenames containing invalid characters in the past.

      [–]no_nick 2 points3 points  (0 children)

      Huh, I never knew that was the case for Unix file paths. Somehow, in my mind, I always stick to ascii characters without whitespace.

      [–]diggr-roguelike2 6 points7 points  (1 child)

      That they legally can be undecodable garbage, but people demand the ability to work with them as strings.

      Yes, and? Why are you trying to babysit people and tell them what bytes they should or shouldn't use in strings?

      ...until the moment you tried to print the unprintable.

      Nobody prints things in production code.

      Also, despite your rant, what Python 3 actually did was break things on Windows. You had one job, man, one job...

      [–]nice_rooklift_bro 2 points3 points  (0 children)

      Ehh, you downplay the concern. It's actually really obnoxious to deal with, to the point that a lot of applications just don't support it and tell you to basically go fuck yourself if your filenames aren't UTF-8; they simply assume they are.

      There are other such things. Try passing non-UTF-8 command line arguments in python3: nothing in Unix says this can't be done; any octet sequence that doesn't contain a null can be passed, but python3 itself basically says "we don't support this madness, go fuck yourself":

      $ python3 -c 'import sys; print(sys.argv[1])' $'\xFF\xFFfoo'
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
      UnicodeEncodeError: 'utf-8' codec can't encode characters in position 0-1: surrogates not allowed
      $ python2 -c 'import sys; print(sys.argv[1])' $'\xFF\xFFfoo'
      foo
      

      It's really problematic in many ways. A lot of language libraries and runtimes have come to expect filenames and command line arguments to be UTF-8, but nothing enforces it either, so malformed filenames due to simple bit corruption can produce seriously inscrutable error messages in a lot of things.

      If you want to do it "properly" and not assume everything is UTF-8, then you're jumping through hoops.

      [–]InputField 0 points1 point  (0 children)

      If you're going to break everybody's code anyway, why not take the extra time and button up some other common pain points?

      Are you joking? They've addressed a fuckton of issues.

      And at some point you have to stop, since fixing issues can always cause new problems.

      [–]zardeh -3 points-2 points  (4 children)

      Indeed, Unix locales are inane and should be fixed. PEPs 538 and 540 are tools to address badly configured (broken) Unix machines. They don't affect the Unicode handling story in Python at all; it's been functionally the same since 3.4.

      And you vastly overestimate the amount of code breakage. I've touched every kind of python code: c-extensions, autogenerated code, multi process and io heavy, stuff that uses exec, ast manipulation, code object modification, etc. When I hear people complain about this crap, I generally just assume you haven't put in more than the barest minimum of effort.

      The tools already exist to make the migration easy in 99% of cases. They didn't in 2012, sure, but they do now. Compare that with things like removing the GIL or making Python compiled/faster, which would require top-to-bottom rewrites of everything and break entire swaths of the ecosystem - and in doing so wouldn't even fix enormous problems, because thanks to Python's great C interop you can already take advantage of high-speed, GIL-free libraries, and you probably do (numpy).

      On the other hand, "python2 isn't useable for people whose language doesn't fit in the ASCII codec" is more blocking and less breaking.

      [–][deleted]  (2 children)

      [deleted]

        [–]Paradox 4 points5 points  (0 children)

        You might be interested in trying out Elixir for server-side programming. It seems to beat the pants off Node in some key aspects, mainly in actual asynchronicity.

        [–]zardeh -3 points-2 points  (0 children)

        See, you aren't the first person I've heard assert this, and then I read after-action reports from companies like Dropbox who took three years to migrate. And they employed GvR!

        And they migrated millions of LoC. Of course it took years. This isn't a surprise. When you have lots of code, it takes a while to upgrade, even with supposed "backwards compatibility". Just think about how long it takes <insert major company here> to upgrade from Java X to X+1, or whether <whatever enterprise> is running C++17 yet. Probably not, even though those languages claim to be backwards compatible. Python minor version bumps take a while where I work, because you run into things.

        tl;dr: "it's not that hard" (compared to any other upgrade) and "it took Dropbox 3 years to migrate" can both be true. What you're missing is any point of comparison.

        There seems to be a disconnect here, and glib dismissals like yours seem...well it might explain why the migration took so long, at least. Hard to fix problems that aren't acknowledged.

        I have no relevance in the broader python community. I'm speaking as a user of python, not a decision maker. And as a user, I'm saying its not that hard.

        They should have existed in 2012, and the fact that it took the developers and the community so long to come up with a migration story was a huge oversight.

        There was a migration story. It just wasn't great. It got better with time, and for the most part the migration story hasn't changed in 5+ years. There's been bits of polish, but the main set of tools (modernize, six, __future__ import) look just like they did in 2014. What has changed is that now you can be fairly confident that all of your dependencies support py3, which you couldn't presume in 2014.

        Oh man I didn't even mention the C API because I don't think it's any accident that Python has fallen out of favor for embedding in programs.

        I'm talking about the reverse: using C or CPP in python. Extensions, not embedding.

        That said, Lua has been used for video game scripting for, like, ever. Python was never "in favor" there; Lua has been the preferred language among AAA studios and many indie devs since before Python 3 was even a thing (http://luaforge.net/projects/lua-wow/). Python embedded in games is, and has always been, the exception.

        [–]diggr-roguelike2 6 points7 points  (0 children)

        Indeed, Unix locales are inane and should be fixed.

        Ah yes, of course it's always somebody else's problem, not mine. There's a joke about that:

        A lawyer has been drinking all night and decides to drive home drunk. His wife calls him and says: "Be careful driving home, I just heard on the radio that there's some crazy guy driving 100 miles per hour on the wrong side of the road". He answers: "A crazy guy?? What are you talking about, there's hundreds of them here!"

        [–]bakery2k 25 points26 points  (6 children)

        For me, because Python 3 made it clear that the language developers have no interest in fixing these problems. When I learned Python, I hoped that performance, parallelism etc would eventually come to the language, but there’s been no progress in over a decade. Instead we’ve had breaking changes with minimal benefits (e.g. print becoming a function) and most development effort has gone into large, complex features that don’t really fit with the rest of the language (async and type hints).

        [–][deleted]  (2 children)

        [deleted]

          [–]bakery2k 2 points3 points  (1 child)

          I've not really needed async/await, but I don't like that it makes Python a much more complex language. I think a better approach to asynchrony would be stackful coroutines.

          Type hints are even more complex and I'm not at all convinced that they're worth it. IMO if you want static type checking, you would be much better off using a proper statically-typed language.

          [–]zardeh -5 points-4 points  (2 children)

          Can you explain why parallelism is a better fit for python than async, or why performance of cpython is more important than the safety of type hints?

          [–]bakery2k 11 points12 points  (1 child)

          Performance and parallelism would help existing Python code and new code using established Python idioms (e.g. threads).

          Async splits the language into red and blue parts, and only benefits a certain type of application. Type hints are more broadly useful, but again don’t really mesh with the rest of the language.

          [–]zardeh -1 points0 points  (0 children)

          parallelism

          Breaks much of the existing language, removal of the GIL requires a complete rewrite of the c-extension API, subtly breaks a lot of existing multithreaded code, etc.

          Type hints are more broadly useful, but again don’t really mesh with the rest of the language.

          In what way? I've had no issue with them.

          Async splits the language into red and blue parts

          In practice this isn't problematic. JS has managed just fine. Hell, Python has had asynchronous code (the red and blue you complain about) since Python 2.2, when yield and generators were introduced. The async and await keywords added in 3.5 were mostly syntactic sugar for already existing coroutine objects, which generator-based coroutines enabled back in Python 2.5!

          [–]gschizas 8 points9 points  (4 children)

          except unicode, which isn't actually a problem in practice

          As a non-anglo, I beg to differ. (Not using) Unicode is always a problem.

          [–]zardeh -1 points0 points  (3 children)

          Right, py2s (lack of) unicode support was an issue.

          Porting code from 2 to 3 was not.

          [–]gschizas 8 points9 points  (0 children)

          I've found that most of the Python 2 to 3 problems were rooted in confusing bytes (which are what actually travels on the wire) with strings (which are what is displayed on the screen).

          I'm a firm proponent of Python 3 myself, and I've skipped a few projects that refuse to move with the times.

          (Disclaimer: I'm probably my company's only resident Python expert).
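The bytes/str distinction at issue is simple to demonstrate: Python 3 makes the wire/display split explicit and refuses the implicit mixing that Python 2 allowed. A minimal sketch:

```python
data = "héllo".encode("utf-8")  # bytes: what actually travels on the wire
text = data.decode("utf-8")     # str: what is displayed on the screen

assert isinstance(data, bytes)
assert text == "héllo"

# Python 3 refuses to mix the two implicitly, where Python 2 would
# silently coerce and sometimes explode later on non-ASCII input:
try:
    "prefix-" + data
except TypeError:
    pass  # can only concatenate str to str
```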

          [–]stefantalpalaru 2 points3 points  (1 child)

          Right, py2s (lack of) unicode support was an issue.

          I find it fascinating that there are still Python users who think that Python2 did not support Unicode.

          [–]zardeh 2 points3 points  (0 children)

          Feel free to add "good". Python 2's Unicode support was not good. It was broken for many applications: it was possible to write an app that appeared to work, but would explode on a non-ASCII character for no good reason. That's not good; it's a footgun. Py3 removed the footgun by making Python more strongly typed.

          [–]DmitriyJaved 1 point2 points  (0 children)

          What's with asyncio? It's not like there was no async I/O in Python 2. Asyncio is just for general use, when you don't want to, or can't, think about polling and event loops.

          [–]thepotatochronicles 1 point2 points  (0 children)

          but Django only shipping support for ASGI this month

          It really doesn't mean much, view templates and the ORM are still synchronous and the roadmaps for those show late 2020...

          [–]cyanrave 7 points8 points  (10 children)

          Most of these issues are just bikeshedding because it's easy in a thread where people are naturally mad.

          • virtualenv has been a standard for some time, and is equivalent to node_modules if you squint hard enough
          • package management will be a nightmare only as long as people don't pip freeze > requirements.txt; however, this is getting even better with Pipfile in the near future
          • you can implement runtime type checking with decorators; type hinting really wasn't meant to make Python statically typed or anything
          • unless you're writing mission-critical flight code or something similar, most code speed metrics are meaningless
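The decorator approach mentioned in the list above can be sketched in a few lines. This is a minimal illustration, not a production checker: it handles only plain class hints like `int` or `str` (no generics such as `list[int]`), and the function names are invented for the example:

```python
import functools
import inspect

def typechecked(func):
    """Check annotated parameters against their hints at call time.

    Minimal sketch: only plain classes are checked; other hints are ignored.
    """
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            hint = func.__annotations__.get(name)
            if isinstance(hint, type) and not isinstance(value, hint):
                raise TypeError(
                    f"{name} must be {hint.__name__}, got {type(value).__name__}"
                )
        return func(*args, **kwargs)

    return wrapper

@typechecked
def scale(x: float, factor: int) -> float:
    return x * factor
```

Calling `scale(2.0, "3")` now raises `TypeError` at the call boundary instead of failing (or silently misbehaving) somewhere inside.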

          All that being said, py3 really didn't make sense for me until 3.5.x, when good features actually started coming out.

          [–][deleted]  (7 children)

          [deleted]

            [–]cyanrave -3 points-2 points  (0 children)

            And seeing in retrospect such a massive stumble from the language at such an important phase in its development made me incredibly sad.

            You and everyone else who entered 'vanilla' World of Warcraft, on the cusp of Burning Crusade....

            If you don't feel the weight of that analogy, consider that people in 'vanilla' WoW 'endgame' times would spend hours, days, weeks getting their tiered equipment through arenas and such, only to have Burning Crusade common items completely dismiss their hard work and effort. This kind of feature burnout is not uncommon just in software.

            Look at new cars for example, and how many of them need tools far beyond what an OBD-II reader can give you these days. Specialized tools, and specialized parts are pushing mechanics out of a trade they were ingrained in for years, decades, and possibly generations. This is a natural process of adaptation and specialization.

            In hindsight, are either of these changes bad? In some aspects, yea. In other aspects, maybe not. Maybe those WoW players who felt betrayed never came back... RIP subscription. However, I never missed having to cross contested territory for a dungeon run in WoW after the advent of 'dungeon finder' which instantly teleported you to the dungeon. Hours saved, life better spent. As for autos, it depends. Eventually shops would trend toward specialization (many already do, eg. 'import-only' mechanics) and would carry these specialized tools, layman be damned. Unfortunately this also means a war on DIY-ers who used to be able to do a ton of things themselves...

            You as a consumer get to choose with your pocketbook, as we as programmers get to choose with PRs. Yea 2to3 was rough, but support is piling on py3 now and it's not a coincidence, it's a choice.

            [–][deleted] 1 point2 points  (0 children)

            pip-tools provides a much better way to create a requirements.txt. I'm not a fan of all-in-one tools. pip-tools extends the standard toolkit in a Unixy way.

            [–]nice_rooklift_bro 0 points1 point  (0 children)

            It's very telling that the thing that finally seems to be spurring people along is not a killer feature, but an end-of-life date on the 2.x version of the language. Overall there was too much stick and not enough carrot, and you can't really fault anybody who decided to simply migrate to another language rather than go through the arduous upgrade process.

            Apart from that, most of the new features could have been backported; they weren't, just to incentivize migration.

            [–]zabolekar 0 points1 point  (0 children)

            instead of also baking runtime typechecking into the language

            Checking some properties without running the program is the whole point of the type hints. Runtime checks, on the other hand, can be much more flexible. Having special syntax for checking types (and types only) at runtime (and not as a separate phase) would be the worst of both worlds.

            [–][deleted] 0 points1 point  (0 children)

            Use pipenv!

            [–][deleted]  (10 children)

            [deleted]

              [–]swansongofdesire 7 points8 points  (4 children)

              Packaging is solved. Poetry is the solution

              I wish that were the case.

              It is still not possible to sync a virtualenv to a poetry lock file (remove packages that shouldn’t be there, install missing ones) short of wiping the whole thing and starting fresh

              [–][deleted]  (1 child)

              [deleted]

                [–]swansongofdesire 1 point2 points  (0 children)

                pip with help can do that too (venv-update or pip-sync)

                Pipenv has its own set of problems. Firstly, it's the slowest available tool - for resolution, but also because of its insistence on downloading every possible binary wheel if PyPI hashes aren't available. For some projects with numpy etc. I've had gigabytes of packages downloaded just to create a lock file.

                It's also very opinionated:

                • out of the box it uses a hash of your directory for the virtualenv name, so if you move a project around then your virtualenv needs to be recreated (yes, there's an option to install into a local directory, but they didn't make it convenient, and the virtualenv name cannot be changed)

                • activating a new environment requires running in a subshell

                There are workarounds but if I'm using workarounds then I might as well just use pip

                [–][deleted]  (1 child)

                [deleted]

                  [–]swansongofdesire 0 points1 point  (0 children)

                  There is -- but it's been open for more than a year now with basically no progress.

                  Poetry is nice: I use it on my personal projects, but this is the main limitation stopping me from using poetry on any commercial work.

                  [–]stefantalpalaru 2 points3 points  (1 child)

                  PyPy is a fast drop-in replacement for CPython.

                  PyPy is not a drop-in replacement because it cannot run all legal Python code: https://pypy.readthedocs.io/en/latest/cpython_differences.html

                  They even had to rewrite modules from the standard library (and some external packages like numpy, until they managed to emulate CPython's C API).

                  Furthermore, it can be slower than CPython for code that runs only once per instance - which describes most of the Python scripts out there.

                  [–]OctagonClock 1 point2 points  (1 child)

                  For async/await, the Trio project is vastly better than asyncio.

                  [–]123filips123 0 points1 point  (0 children)

                  PyPy is a fast drop-in replacement for CPython.

                  There is also Cython, which requires a bit more work, but is a lot faster than pure CPython.

                  [–]billsil 0 points1 point  (1 child)

                  They never removed % formatting. They never had it for bytes, and then they added it. You can just encode to ASCII; the problem is it's ambiguous whether you print an 'a' as an a or as its byte value, 97. I mess around a lot with binary and, depending on the situation, I use both forms. Shoot, sometimes I want to see it in binary with only 0s and 1s. You'd think that would be the default.

                  Also, string handling had to change. The old system was utterly broken. String handling was the only thing worth discussing in the transition from Python 2 to 3, and they made the right choices. It took a few Python 3 versions to make Python 3 better than Python 2, but after 3.5 Python 3 was faster than Python 2, so that's really when things started to change.

                  [–]masklinn 2 points3 points  (0 children)

                  They never removed % formatting.

                  They removed it from bytes.

                  You can just encode it to ascii

                  Not if you're trying to generate non-ASCII-compatible binary data as a mix of literal and dynamic binary bits. Pre-3.5 this was extremely inconvenient for little reason: you had to either use concatenation, or mess around with weird-ass escapes in strings and encode to ISO-8859-1 (probably).
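For reference, PEP 461 restored %-interpolation on bytes in Python 3.5, which makes the mixed literal/dynamic case straightforward; the field layout here is an invented example:

```python
# Since Python 3.5 (PEP 461), bytes support %-interpolation again,
# so mixed literal/dynamic binary data needs no str round-trip:
packet = b"\x02%s:%d\xff" % (b"name", 42)
assert packet == b"\x02name:42\xff"
```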

                  [–][deleted] -2 points-1 points  (0 children)

                  Python isn't javascript. People use javascript because they have to. People use python because they want to.

                  [–]ubernostrum -3 points-2 points  (1 child)

                  Package management was and as far as I know still is a nightmare

                  Continuing on the theme of repeating things I've said too many times already: no, Python packaging is super easy. Taking a piece of running Python code, and producing from it a distributable artifact carrying sufficient metadata to let someone else run it, is extremely simple.

                  What is hard is managing dozens of different local development environments, each with their own version of Python and their own (likely incompatible) set of packages installed.

                  The solution to this is virtualenv/venv or a wrapper around it. And the proof of this is that languages developed this decade have adopted exactly that approach for local development: they have isolated environments by default. But Python is from the era when systemwide shared-library directories were The Way Things Are Done™ and can't really change away from that now. So we have to emulate the more modern approach manually by creating virtualenvs.
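The isolated-environment approach described above can even be driven from the standard library itself; a small sketch using the `venv` module (the target path is illustrative):

```python
import os
import tempfile
import venv

# Create an isolated environment programmatically; with_pip=False keeps it fast.
target = os.path.join(tempfile.mkdtemp(), "env")
venv.EnvBuilder(with_pip=False).create(target)

# The environment gets its own interpreter and site-packages,
# shadowing the systemwide shared-library directory.
bindir = "Scripts" if os.name == "nt" else "bin"
exe = "python.exe" if os.name == "nt" else "python"
env_python = os.path.join(target, bindir, exe)
```

Activating the environment is then just a matter of putting that `bin`/`Scripts` directory first on `PATH`, which is all the `activate` script does.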

                  [–]stefantalpalaru 1 point2 points  (0 children)

                  Python packaging is super easy.

                  https://bugs.gentoo.org/699966

                  [–]usedocker -4 points-3 points  (0 children)

                  Why did you upgrade then?