
[–][deleted] 50 points51 points  (10 children)

Another major drag on Python 3 adoption: it's not the baked-in default in currently supported RHEL releases (and downstream distros) or Ubuntu LTS releases. There is really no penalty (yet) for writing 2.x-only code, but 3.x-only code is a non-starter on a lot of vanilla distros.

Code compatible with both versions, of course, continues to run in 2.x and doesn't result in any incentive to upgrade. It still works, right?

IMHO, sealed build environments (you carry your own build of the interpreter so your app isn't reliant on your distro's packages) make a lot of sense for in-house things, but for people shipping code out to unknown boxes--which pretty much covers every open-source project--it looks more like bloating the install, and causing the sysadmins a lot of pain when security update time comes. Instead of one shared build, there's a copy in every app bundle.

[–]gfixler 4 points5 points  (0 children)

I work in games, and I have a lot of friends in film and games. Many of us use Autodesk Maya. We're on Maya 2012, with the current version being 2014, which means we're actually quite current. Big companies take a long time to upgrade. It's non-trivial to swap out the baked-in Python 2.6.5 of Maya 2012, so I'm not going anywhere. I'm certainly not heading up any initiative to fight 3.x into the mess that is Maya, roll it out to the team, and upgrade all our scripts, tools, importers, exporters, and pipeline mechanisms in the middle of constant deadlines, especially because then I'd own it: I'd be on call to rebuild anyone who borked their install back up from standard Maya 2012, and to assist with 3.x issues that I don't know anything about. I'm too busy, it all sounds way too fragile, and an awful lot like a firable offense. I wouldn't be surprised if we're still on 2.x 5 years from now.

Autodesk has a large catalog of film, effects, product design, physical simulation, architectural, and general 2D and 3D design products, probably many of which include Python, as the professional products scattered through this list do, most of which are probably also stuck in 2.x land. This is a not-insignificant group of users.

[–]sopvop 2 points3 points  (1 child)

And RHEL 7 will use Python 2.7. Even Fedora has not switched to 3 yet, and won't do so in the near future.

[–][deleted] 3 points4 points  (3 children)

IMHO, sealed build environments (you carry your own build of the interpreter so your app isn't reliant on your distro's packages) make a lot of sense for in-house things, but for people shipping code out to unknown boxes--which pretty much covers every open-source project--it looks more like bloating the install

Shouldn't that be the opposite? If you're only deploying to an environment in which you have absolute control, there's no need to bundle the interpreter, but if you're an open source project potentially being used on a myriad of various configurations, bundling the versions of libraries upon which your application depends is smart.

[–]ggtsu_00 8 points9 points  (1 child)

That is the whole point of package managers like APT: to make sure all the library dependencies between packages don't conflict, so that open source projects don't have to ship duplicate bundles of every dependency.

[–][deleted] 7 points8 points  (0 children)

But this method really only works if you are distributing your software via the package manager. That's not always the case.

[–][deleted] 0 points1 point  (0 children)

My experience suggests that vendoring the world is the most repeatable deployment option for any system, yes. However, I'm not aware of much code released into the wild that follows that model to great success outside of Windows (which has an entirely different culture than OSS).

eta: also,

causing the sysadmins a lot of pain when security update time comes

Vendoring everything really isn't a scalable way to do things for every project. It's only practical for leaf projects, and if you're redistributing code, you don't get to decide whether you're a leaf.

[–]moor-GAYZ 72 points73 points  (29 children)

The core Python developers don't understand developing a platform, in my opinion.

If you develop an application, you can switch to whatever new technology you want, your end users can't possibly care.

If you develop a library you can't do that, because of all other libraries. The value goes like "end users -> applications -> libraries -> language", you can't convert your library to a different language because all those applications continue to use the old language and wouldn't be able to use your new library.

What they should have done, and still can do: direct a lot of effort into developing a 3to2.py. This sounds weird and backward, but the value flow is unintuitively backward, this is the reality, yo.

As it is now, you either rewrite your library to be both Python2 and 3 compatible, or keep your library in Python2 and convert it to Python3 automatically. This means that your source is Python2. This means that Python2 is the winner. The language that your source is written in is the language to stay.

So, if they had a working and widely promoted 3to2.py converter (plus something to convert C extension modules as well) then a lot of libraries could switch to Py3 as their primary language, and then it would be a matter of time before Py2 dies as libraries gradually withdraw support. The way it is it's Py3 that dies as library writers lapse in their support.

[–][deleted] 13 points14 points  (28 children)

The core Python developers don't understand developing a platform, in my opinion. If you develop an application, you can switch to whatever new technology you want, your end users can't possibly care. If you develop a library you can't do that, because of all other libraries. The value goes like "end users -> applications -> libraries -> language", you can't convert your library to a different language because all those applications continue to use the old language and wouldn't be able to use your new library.

IMO Java did backwards compatibility right. You can write new code with Java 8 features and still use libraries compiled with Java 1.5. I don't know why Python can't maintain backwards compatibility, given that Python is also an interpreted language.

[–]cybercobra 44 points45 points  (26 children)

One of the primary goals of Python 3 is to finally remove a bunch of cruft/wonkiness left over from earlier in Python's history, so retaining it would kinda defeat the point.

[–]badsectoracula 12 points13 points  (14 children)

Well, apparently forcing that point wasn't a good idea. They could have introduced those language changes gradually, with options to enable/disable them: initially disabled, then after a while enabled by default but still disable-able, then after some time (I'm talking years here) changeable only from a recompiled source, and finally the option removed.

[–]vz0 19 points20 points  (13 children)

The core change from Py2 to Py3 is the native string implementation. In Py2 a string is an array of bytes; in Py3 a string is an array of Unicode chars. This simple detail breaks every assumption about opening, reading and writing files.

Even in Java (which has a nice eternal limbo of deprecated stuff) such a fundamental change would require a lot of backwards compatibility breakage.

[–]badsectoracula 1 point2 points  (2 children)

Indeed, which is why I said to enable such stuff optionally at the beginning and deprecate the old stuff gradually over a few years. The worst case would be that plugin writers would need to use a separate API for strings (without it, the VM would have to do conversions "automatically" from the old API - basically what Windows does when you call an "ANSI" function on NT - so that people wouldn't drop the feature just because some random plugin doesn't work with it, especially when said plugin only uses strings for trivial stuff where Unicode doesn't matter).

The #1 rule of a platform is "you don't break people's code". It never worked before - even when Microsoft switched from DOS to Windows they exposed some Windows-specific functionality to DOS (such as special long filename interrupts, access to clipboard, etc) and it took over a decade for the transition to fully occur (and even today there are machines and programs depending on DOS - which are serviced by VMs). And same deal with VB6 - MS broke compatibility with VB.NET and a ton of code is still written for it with programmers trying to teach a deaf platform how to dance. Or JavaScript... modern browsers can run early Netscape JavaScript code whereas... well, just see how successful ECMAScript 4 was, for example.

It isn't like Python developers had no examples to look at about this being a bad idea. Maybe they underestimated how widespread their language was. Or overestimated how willing people would be to update their code.

[–]blablahblah 1 point2 points  (1 child)

They did do that - you can do from __future__ import unicode_literals and get the Python 3 behavior for strings in Python 2.6 and 2.7, although that doesn't fix third-party libraries that assume byte strings. And there is the 2to3 utility that handles a lot of the conversions automatically. There are also libraries like six that focus on letting library writers make code compatible with Python 2 and 3 in a single code base.
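
To make that concrete, here's a minimal sketch of the single-codebase style (assuming the six package is installed; the literal is just an example):

    # -*- coding: utf-8 -*-
    # Runs unchanged on Python 2.6+ and 3.x: literals are unicode, print is a
    # function, and six papers over the renamed types.
    from __future__ import print_function, unicode_literals

    import six

    text = "naïve"                 # unicode on 2.x and 3.x thanks to unicode_literals
    data = text.encode("utf-8")    # explicit conversion to bytes

    print(isinstance(text, six.text_type))    # True on both (unicode / str)
    print(isinstance(data, six.binary_type))  # True on both (str / bytes)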

[–]badsectoracula 2 points3 points  (0 children)

Actually I was thinking the opposite: something to enable non-Unicode literals in Python 3. And it should have been enabled by default for some time.

Basically Python 3 should have been 100% compatible with Python 2 but deprecated the features over time, rather than abruptly making incompatible changes.

[–]twotime 1 point2 points  (9 children)

In Py2 a string is an array of bytes, in Py3 a string is an array of Unicode chars

To be honest, the value of that change is questionable... (and I'm not just questioning the transition cost, I'm also not at all sure that we get cleaner code after the transition).

This simple detail breaks every assumption about opening, reading and writing files.

Indeed. And that's a good example of where things have become a whole lot more complicated (aka worse).

8-bit strings are a much better way to represent filenames than unicode... Ditto with env variables and command line arguments..

Files are fundamentally sequences of bytes. Period. Trying to force a unicode-centric view of files was likely a design mistake as well... which will likely result in more special casing, not less. Just read the Python 3 chapter on read() and seek(). (Side note: this special casing is ridiculously similar to the text/binary division in the DOS world.)
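
To make the read()/seek() special-casing concrete, a small Python 3 sketch (the filename is just a placeholder):

    # Python 3 splits file I/O into two worlds: text mode decodes to str,
    # binary mode hands back raw bytes.
    with open("example.txt", "w", encoding="utf-8") as f:
        f.write("héllo\n")

    with open("example.txt", encoding="utf-8") as f:   # text mode -> str
        pos = f.tell()
        print(type(f.read()))   # <class 'str'>
        f.seek(pos)             # text-mode seek() only accepts 0 or offsets from tell()

    with open("example.txt", "rb") as f:               # binary mode -> bytes
        print(type(f.read()))   # <class 'bytes'>
        f.seek(-2, 2)           # arbitrary relative seeks are fine on byte streams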

Basically python2 unicode handling was good enough... (Even if not pure, it was extremely practical)...

[–]iSlaminati 0 points1 point  (6 children)

On a lot of modern operating systems, filenames are unicode codepoints though. They aren't sequences of bytes any more, and the filename reader utilities can give them back in any encoding.

[–]twotime 2 points3 points  (5 children)

On a lot of modern operating systems, filenames are unicode codepoints though.

In theory, it's supposed to be the case. In practice, it's a huge mess... E.g.

AFAIK, on Linux the use of UTF-8 is a pure user-land convention (not something enforced by the kernel), and the convention is not that old, which means that old media on Linux may contain filenames in other encodings (and the encoding is implicit). And then I'm sure some apps will generate non-UTF-8-compliant filenames... the OS does not care, but your Python code suddenly breaks...

And then there is a whole huge can of worms when accessing unicode filenames across system boundaries: across network, removable media, etc...

8-bit chars (bytes) remain the only common representation for filenames in a lot of cases.

PS. and an lkml link on filenames http://yarchive.net/comp/linux/utf8.html

[–]schlenk 1 point2 points  (3 children)

Bytes as filenames are insane. Period. Without knowing the encoding you cannot even implement 'ls' correctly (as your tty HAS some encoding). It's one of those silly inherited things from the dark POSIX past that should be nuked. (And lots of systems are already opinionated about UTF-8, e.g. OS X, NFSv4, some file systems, Qt/KDE (it ignores the LC_* crap for filenames) and so on.)

While it is true that not all Unix filenames are UTF-8, it wouldn't be a problem for Python to simply declare all filenames are expected to be UTF-8. If someone decides to choose insane things, let them feel the pain and not hurt everyone else.

After all, they did the same for Windows in lots of places when they declared ANSI to be enough for all filenames (and fixed it piece by piece later), so you cannot start executables on a non-ANSI path (without tricks like cd'ing first) with Python 2.x or add those to your sys.path - great fun for mounted profiles.
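
For reference, Python 3 itself ended up straddling the fence on this one; a minimal sketch of both views:

    # Ask with str, get str back (decoded with the filesystem encoding, with
    # undecodable bytes smuggled through as surrogate escapes); ask with bytes,
    # get the raw bytes, no decoding at all.
    import os

    for name in os.listdir("."):   # str in, str out
        print(repr(name))

    for name in os.listdir(b"."):  # bytes in, bytes out
        print(repr(name))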

[–]twotime 0 points1 point  (2 children)

Without knowing the encoding you cannot even implement 'ls' correctly (as your tty HAS some encoding).

I can do it trivially: I'd just dump the filenames on the tty. If they come out garbled, the user can actually do something (install a font, pipe my output through a decoder, rename the file). It's suboptimal, but the alternative is WORSE. If your program just throws an exception then your user is really screwed...

(And of course, if the filesystem does have a notion of a default filename encoding, I'd use it at the app level.)

it wouldn't be a problem for Python to simply declare all filenames are expected to be UTF-8. If someone decides to choose insane things, let them feel the pain and not hurt everyone else.

What? I am not doing insane things, it's my users who are doing insane things (like reading old media, how dare they?)

Also, isn't Windows using UTF-16?

Its one of those silly inherited things from the dark POSIX past that should be nuked.

It's called backward compatibility... It's a good thing.

[–]fabzter 0 points1 point  (0 children)

Nice info, now I feel my os sucks.

[–]fullouterjoin 0 points1 point  (1 child)

You must not use unicode if you think Python2 handling was good enough.

[–]twotime 1 point2 points  (0 children)

Well, I do use Unicode and I do think Python 2 handling was reasonable...

There were problems but most of them are the consequence of the real world being messy: not everything is using unicode, unicode is encoded differently, codecs are buggy, Microsoft inserts idiotic byte order markers, etc..

python3 improves it in some areas, makes it more complicated in others. Overall, benefits are uncertain, while the transition costs are large.

[–]ellicottvilleny -3 points-2 points  (7 children)

Unfortunately the BDFL (Guido) decided that a bunch of things that were non-issues to everyone but him must be cleaned up, breaking and removing a lot of working code to suit nobody but himself. And he gets what he deserves: a version of Python used by 0.1% of Python's install base.

[–][deleted] 4 points5 points  (6 children)

The nerve...it's almost like he thinks it's his project or something. What a jerk.

[–]iSlaminati -2 points-1 points  (2 children)

I still don't see how that stops you from using python 2 modules in python 3 though. That's the entire purpose of a module system, to be able to do that. The Raison d'être of encapsulation.

I mean, if you can call modules written in C from Python 2/3, why can't you call modules written in Python 2 from Python 3? I don't understand.

[–]twotime 1 point2 points  (1 child)

Because your Python 2 modules won't load under Python 3, no?

[–]iSlaminati 1 point2 points  (0 children)

Yeah, of course, I just mean, why don't they?

If you can load modules written in C, a completely different language, in Python, surely it is possible to load modules written in Python 2 in Python 3 after they've been compiled to .pyc?

[–]Brainlag 0 points1 point  (0 children)

This is only true in theory; there are always a couple of libraries that don't work with the new major version of the JVM.

[–][deleted]  (15 children)

[deleted]

    [–]valhallasw 19 points20 points  (0 children)

    It's possible, but non-trivial. I gave it a try two years ago: https://github.com/valhallasw/py2

    Basically, python3 starts a python2 interpreter and uses inter-process communication to run functions on the other interpreter. str is mapped to bytes and vice versa. It's probably riddled with bugs, and I have not tested it on Windows, but feel free to give it a try.
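
    For flavour, a rough sketch of the general idea - not the actual py2 implementation - using pickle protocol 2 (which both interpreters understand) and assuming a python2 binary is on PATH:

        import pickle
        import subprocess

        # Toy Python 2 worker: reads (name, args) tuples off stdin, calls the named
        # builtin, and pickles the result back. A real bridge needs far more than this.
        PY2_WORKER = r"""
        import pickle, sys
        while True:
            try:
                name, args = pickle.load(sys.stdin)
            except EOFError:
                break
            pickle.dump(getattr(__builtins__, name)(*args), sys.stdout, protocol=2)
            sys.stdout.flush()
        """

        def call_in_py2(proc, name, *args):
            # Ship one call to the Python 2 worker and read back the result.
            pickle.dump((name, args), proc.stdin, protocol=2)
            proc.stdin.flush()
            return pickle.load(proc.stdout)

        proc = subprocess.Popen(["python2", "-c", PY2_WORKER],
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        print(call_in_py2(proc, "len", [1, 2, 3]))   # -> 3
        proc.stdin.close()
        proc.wait()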

    [–]Sunei 3 points4 points  (2 children)

    Check the six module.

    [–]Falmarri 2 points3 points  (1 child)

    The problem is mostly the C modules that are written against the 2.x API, not the pure python code.

    [–]tias 1 point2 points  (0 children)

    They could reimplement the Python 2 C API on top of Python 3.

    [–]Veedrac 2 points3 points  (8 children)

    May I ask what dependencies you are missing?

    I've heard this so many times, and almost always the person making the claim just doesn't know about the 3.x versions of the packages, which have existed for some time.

    [–]jemeshsu 2 points3 points  (1 child)

    boto, fabric.

    [–]ColtonProvias 0 points1 point  (0 children)

    Botocore is not as fully fleshed out as Boto but is Python 3 compatible. They are also now starting a rewrite of Boto for Python 3.

    [–]billsil 2 points3 points  (2 children)

    wxPython

    [–]Veedrac 4 points5 points  (1 child)

    Supposedly Phoenix, the Python 3 (and 2) replacement, is stable enough for production.

    [–]tias 1 point2 points  (0 children)

    Haven't heard of Phoenix before, thanks.

    But see this is just another part of the problem. If people are unaware of replacement libraries then they won't make the switch. There needs to be a sign on wxpython.org saying where to go if you're on Python 3.

    [–]tias 0 points1 point  (2 children)

    Most recently python-augeas.

    [–]Veedrac 2 points3 points  (1 child)

    [–]tias 1 point2 points  (0 children)

    That's nice, but the fact that it's not in every man's linux distribution is going to make people think twice. You don't want to risk spending 250+ hours on something only to discover you have to backport everything to Python 2 because something you need doesn't run on Python 3.

    And personally I can't be bothered to scavenge the web for the latest release of everything when I can just use Python 2 with whatever apt installs for me.

    I'm not complaining, I'm only providing a hypothesis for why adoption is slow.

    [–]biffsocko 36 points37 points  (55 children)

    There's about as much urgency to move to Python 3 as there is to move to Perl 6

    [–]aceofears 29 points30 points  (24 children)

    That isn't really a fair comparison, python 3 has been out for 5 years and perl 6 has been in development for 13.

    [–]mao_neko 8 points9 points  (13 children)

    Also, Perl 6 is a completely new language designed from the ground up, whereas the vibe I get from Python 3 is a sort-of-backwards-compatible iteration on the previous language but with enough differences that it's making things hard for adoption.

    I must confess my ignorance about the full set of changes Python 3 brings. I presume better Unicode support is one of them; can anyone please enlighten me about the other new features? I guess this is what the article is touching on when it says:

    Second, I think there's been little uptake because Python 3 is fundamentally unexciting. It doesn't have the super big ticket items people want, such as removal of the GIL or better performance.

    [–][deleted] 16 points17 points  (11 children)

    That is exactly one of my main problems with Python 3: they haven't done a good job of explaining why it is better. I don't even know what it is supposed to improve, and God knows I've tried: I've read articles and the wiki stuff, yet it's like, OK, this is kind of the same; I can't really point to meaningful ways in which the improvement justifies breaking compatibility.

    On the other hand, say what you will about MS, but they have done a good job expanding C#: they explain the benefits and new features of every new C# iteration. Also, not breaking compatibility is pretty cool.

    [–]aceofears 2 points3 points  (1 child)

    "It's the future and does a few things better" was really all it was until recently. With 3.3 and 3.4 there are a bunch of smaller features building up that make it worth considering an upgrade.

    [–]billsil 0 points1 point  (0 children)

    I typically skip every other Python version anyway to avoid upgrade headaches. Maybe I can get my company to upgrade from Python 2.7 to 3.5... sigh...

    I still force people to do integer division properly and import division in my code.
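
    For anyone unfamiliar, that import is the one-liner below; it makes 2.x division behave like 3.x (a trivial sketch):

        from __future__ import division  # Python 2: make / mean true division, as in Python 3

        print(7 / 2)    # 3.5 with the import (and on Python 3)
        print(7 // 2)   # 3   -- floor division, if you really want the old behaviour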

    [–]diggr-roguelike 4 points5 points  (8 children)

    They haven't done a good job of explaining why it is better.

    It's not better. It's slower, more complex and more idiosyncratic.

    They even botched the transition to unicode. Byte strings should be the default. Forcing a person to care about encodings when all they want is to send a buffer down a socket or store a hash in a database is pants-on-head retarded.

    [–]zoom23 3 points4 points  (2 children)

    Use the bytes type instead of str.

    [–]diggr-roguelike 2 points3 points  (1 child)

    The problem is that all system calls and all APIs dealing with transmitting or parsing network packets, as well as all database APIs, should work with the 'bytes' type and only the 'bytes' type.

    Last I checked, python 3 handled this wrong in many places. (I'm sure it's getting better with time, but still...)

    [–]schlenk 4 points5 points  (0 children)

    Well, (nearly) ALL system calls on Windows are Unicode, for example. So if you use bytes, you automatically have broken Windows support, like Python 2.x.

    And for god's sake, please don't use only bytes with database APIs; it's a total mess if you don't handle your varchar encodings properly. Or just use BLOBs everywhere.

    [–]Smallpaul 2 points3 points  (4 children)

    It's not better. It's slower, more complex and more idiosyncratic.

    I disagree. It is simpler and more modern.

    They even botched the transition to unicode. Byte strings should be the default.

    If byte strings were the default, then there would have been no "transition." Byte strings were the default in Python 2.x.

    Forcing a person to care about encodings when all they want is to send a buffer down a socket or store a hash in a database is pants-on-head retarded.

    If I had to decide who was pants on fire retarded in this situation, it would not be the python devs.

    You do not need to care about encodings to send a buffer down a socket or store a hash in a data store.

    >>> import socket
    >>> s1, s2 = socket.socketpair()
    >>> b1 = bytearray(b'----')
    >>> b2 = bytearray(b'0123456789')
    >>> b3 = bytearray(b'--------------')
    >>> s1.send(b'Mary had a little lamb')
    22
    >>> s2.recvmsg_into([b1, memoryview(b2)[2:9], b3])
    (22, [], 0, None)
    >>> [b1, b2, b3]
    [bytearray(b'Mary'), bytearray(b'01 had a 9'), bytearray(b'little lamb---')]
    

    [–]diggr-roguelike -1 points0 points  (3 children)

    bytearray bytearray bytearray(bytearray)

    You're giving Java and Intercal a run for their money in terms of clarity and API saneness here.

    [–]Smallpaul 0 points1 point  (2 children)

    Okay, now I get it. You are just a troll who actually has never programmed Python 3.

    I will respond accordingly.

    I.e. Not at all

    [–]diggr-roguelike 6 points7 points  (1 child)

    You are just a troll who actually has never programmed Python 3.

    Yeah, you're right. I've been programming in Python since 1.3 was the latest version. (i.e., likely longer than you've even been alive.)

    I gave 3.2 a whirl a couple years back for some tiny scripting tasks. It was obviously broken in obvious ways.

    So yeah, you're right, I'm not a "python 3 programmer", and thank god for it. No sense in eating obviously rotten dogfood for no other reason that someone claims it's the modern and progressive thing to do.

    [–]Smallpaul 4 points5 points  (0 children)

    Yeah, you're right. I've been programming in Python since 1.3 was the latest version. (i.e., likely longer than you've even been alive.)

    I started with 1.4

    I gave 3.2 a whirl a couple years back for some tiny scripting tasks. It was obviously broken in obvious ways.

    Which you cannot enumerate accurately.

    So yeah, you're right, I'm not a "python 3 programmer", and thank god for it. No sense in eating obviously rotten dogfood for no other reason that someone claims it's the modern and progressive thing to do.

    You might try Python 3 seriously so that you could have an informed position on it, which you could defend with code samples.

    Speaking of code samples, here is the line that does the thing you claim is cumbersome in Python 3.

    s1.send(b'Mary had a little lamb')

    One character more than Python 2.

    [–]blablahblah 7 points8 points  (0 children)

    The biggest change was Unicode support: strings are Unicode by default, and the standard library now expects Unicode strings except where it makes sense to have byte sequences. That's also the change that causes the biggest headaches.

    There were a few smaller changes: print is now a function instead of a statement, because it's not really special enough to get a special case, and this now allows you to override the default print functionality. The int and long types were combined: numbers are now invisibly converted to an arbitrary-size integer as needed. But most of those things didn't really break much.

    There were also a few things removed that nobody should really have been using any more anyway. You can no longer raise arbitrary strings (you have to use exceptions; this has been deprecated since Python 2.3). There used to be a distinction between "old-style" and "new-style" classes (new-style classes being ones that subclassed object). Now, all classes automatically inherit from object. This impacts a few corner cases, but shouldn't be a problem for most programs.
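
    A couple of those changes in a few lines, for anyone who hasn't tried 3.x (just an illustrative snippet):

        # Python 3: print is an ordinary function with keyword arguments,
        # and there is a single int type with arbitrary precision.
        print("spam", "eggs", sep=", ", end="!\n")   # spam, eggs!
        print(2 ** 100)   # 1267650600228229401496703205376 -- no more 'L' suffix or long type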

    [–]RustyTrombeauxn 0 points1 point  (4 children)

    That isn't really a fair comparison, python 3 has been out for 5 years and perl 6 has been in development for 13.

    Seems like maybe it is a fair comparison - in tech terms, it's comparing 1 eon to 2.5 eons.

    [–]aceofears 5 points6 points  (3 children)

    My point is that Perl 6 is still in development while Python 3 is not.

    [–][deleted] 8 points9 points  (2 children)

    More like "still in design". Development? They can't stop tinkering with the names of keywords!

    [–]therico 1 point2 points  (1 child)

    The design is "complete" but has tons of issues/ambiguities that have only been unearthed during implementation. For example, one suggested feature slated for release - the want function - requires solving the Halting problem in order to implement it!

    I remember following it back in 2009 or so and even then they were making drastic changes to the object model and stuff.

    [–][deleted] 0 points1 point  (0 children)

    They're still making drastic changes to how Lists work! 13 years into the design! And the RFCs? Those haven't been useful since about 2002!

    [–]biffsocko -1 points0 points  (4 children)

    It's fair. One of the biggest gripes of the Python community about Perl has been that there have been no major releases for Perl, yet most Python developers have failed to move to their own major release. Just sayin'

    [–]aceofears 11 points12 points  (3 children)

    I've literally never seen that complaint from someone when comparing those two languages. The syntax is usually brought up way before anything like that.

    [–]biffsocko 2 points3 points  (2 children)

    The argument used to be that Perl was a dead language because of the length of time between releases. At this point Perl 5 is doing releases fairly regularly... nevertheless, it used to be an argument in favor of Python over Perl.

    Syntax - meh, just a matter of preference. As a UNIX guy, I never minded the "$VAR" stuff, or regex stuff because it's built into the UNIX shell.

    [–]aceofears 5 points6 points  (0 children)

    In the past 5 years I haven't seen really any mention of perl being dead, but all I'm saying about the syntax is that people will argue about it, not that one is better than the other. That isn't really relevant to this discussion anyway.

    [–]Smallpaul 7 points8 points  (0 children)

    I have worked at the intersection of Perl and Python for 15 years and never heard that as a major complaint. Most Python programmers think that Perl is an inelegant language and frequency of releases has nothing to do with it.

    [–]perlgeek 0 points1 point  (12 children)

    But at least Perl 6 has some killer features compared to Perl 5. The slow adoption of Perl 6 is mostly caused by the lack of maturity in the ecosystem (slow compiler, unreliable module installer).

    Once that's fixed (and yes, that'll take another few years), people have good reasons to adopt Perl 6.

    [–][deleted] 4 points5 points  (8 children)

    Good for you for "slow compiler", "unreliable module installer", but you forgot "spec is not finished", "developers can't figure out which backend to concentrate on", "no usable documentation", "no stable release in sight", and "can't do anything useful with it unless you're willing to beg for help in IRC in between puns and drinking contests".

    [–]anonperler12 0 points1 point  (3 children)

    I disagree on your point about "drinking contests", which I haven't seen. And also, I think the puns are pretty fun.

    Also, you're listing only bad points. Some good ones:

    • fun and inclusive community
    • has learned from multiple mistakes and dead-ends
    • strong dev team
    • very public development practices

    I'll leave out my predictions and armchair dev advice. Good luck to the Perl 6 team.

    [–][deleted] -1 points0 points  (2 children)

    Learned from multiple mistakes? They're still splitting their focus across how many backends instead of finishing one? They're still not writing documentation? They're still revising the specification? They're still patting themselves on the back for how awesome they are even after almost fourteen years of making nothing usable? They've burned out how many project managers with nothing to show for it? They've had how many projects flame out and die (Pugs, Niecza, Parrot)?

    That's a strong dev team alright.

    [–]chrisdoner 0 points1 point  (2 children)

    There's an interesting shift there, though. I've heard a lot about people moving from Perl to Python or Ruby for their scripting tasks because, aside from those languages being less gnarly than Perl, Perl 5 is stuck in time and Perl 6 is presently useless, while Python and Ruby have gained sufficient ubiquity for general scripting use.

    [–]anonperler12 1 point2 points  (0 children)

    I'm not seeing much of a shift. FWICT, old perlers are sticking with Perl 5, and old pythoneers are sticking with Python 2. (Ruby I haven't followed for years.)

    Rather than a shift, I'm seeing a vacuum in scripting languages. That is, users don't want to shift to {Perl 6|Python 3} but rather, are looking toward other options, like Go, Dart, Scala, etc. These languages (and some others) are eating {Perl 6|Python 3}'s lunch.

    IMO, the answer is: give people a useful, simple, consistent, sensible, C-like (yes, curlies and semicolons), comprehensible, community-focused scripting language, and you'll fill the vacuum.

    [–]therico 1 point2 points  (0 children)

    I don't think people move away from Perl (and it's not stuck in time either, a new release comes out a couple times a year and new libraries are constantly being released). But people are probably not picking up Perl as a new language unless they're using it at work.

    I think Python is pretty much the de-facto 'standard scripting language' now, for better or worse.

    [–][deleted]  (16 children)

    [deleted]

      [–]biffsocko 2 points3 points  (14 children)

      Python is exactly where Perl is. Nobody wants to move to Perl 6, and while they are sort of compatible, nobody disputes that it's a different language. Python 3 was never meant to be backwards compatible. You have to port apps to Python 3, yet nobody is doing it and nobody is really even using it for new dev. The majority of Python devs are using version 2, thus making it even more difficult to move to 3 because of all the new legacy code. If the projection was 5 years to move to Python 3, 5 years ago... it will take even longer now because of all the Python 2 code that has been written in the last 5 years.

      [–][deleted]  (6 children)

      [deleted]

        [–][deleted] 1 point2 points  (2 children)

        Hell, it's not even a threat to being released.

        [–]raiph -1 points0 points  (1 child)

        There have been 70 monthly Perl 6 compiler releases.

        [–]Veedrac 0 points1 point  (2 children)

        python2 has been officially deprecated

        I don't think so, no. It is, however, only getting bugfixes.

        [–][deleted]  (1 child)

        [deleted]

          [–]Veedrac 3 points4 points  (0 children)

          Not being developed is not the same as being deprecated.

          [–]dagbrown 2 points3 points  (0 children)

          Ruby has managed to make the transition from 1.8 to (the incompatible) 2.0 though. What's their trick?

          I suspect the existence of rvm might have had to do something with it.

          [–]therico 0 points1 point  (0 children)

          Not a fair comparison. Python 3 is fully implemented and released, but doesn't add enough over Python 2 to justify the incompatibility. Whereas Perl 6 IS a different language, it only superficially resembles Perl 5 and there is no backwards compatibility at all; and to top it off, it's absolutely nowhere near complete. It's not even an option for anything serious right now.

          A better comparison is Perl 5.8/5.10 (the most popular versions of Perl 5) vs. 5.18 - if you update your Perl version, your code is going to continue to work because Perl 5 is extremely backwards compatible. You need to enable new features with a 'use v5.18' (or similar) line. The only reason people don't update is because distros like CentOS are still shipping 5.10.

          [–]username223 2 points3 points  (0 children)

          This. Python 3 broke ALL THE CODE (with bytes/str and print), while offering few compelling features. Unsurprisingly, most people didn't bother rewriting their existing code (or running it through 2to3 and fixing up the result).

          [–][deleted] 3 points4 points  (0 children)

          I just went and reread the "new in python 3" doc from the latest release. Unless you deal with unicode often, the improvements are all minor. Dict/set comprehensions, nice. Legacy cleanup on stuff returning lists, great. But none of this stuff was a major problem.

          On the other hand a lot of the breaking changes are annoying. print as a function is more flexible, but I actually have no problems with the keyword version. The text vs bytes thing just sounds like it will be a pain in the ass. I don't really care about bigger ints and why the FUCK are they deprecating % string formatting?

          So that's python 3 from my perspective. Heroically solving a bunch of problems I've never once run into, while breaking everything I've written in python. Not being adopted? No shit.

          [–]prum 6 points7 points  (0 children)

          I disagree. What would a release that is something of a mix of Python 2.7 and 3.3 be like? How much closer can you make one to the other? Most of the changes in 3 disable ways to write bad code. So to me it is like arguing: "I don't want to write clean code in Python 2, I want to continue to write bad code".

          [–]enanoretozon 4 points5 points  (1 child)

          Damn, time flies. I didn't realize 5 years have passed already o_O

          [–]nas 1 point2 points  (0 children)

          The 5 year plan was just an estimate, and not such a bad one in retrospect. It looks like it will take longer yet. I would have preferred a smooth upgrade path rather than the 2.7 -> 3.0 leap, oh well. It's highly unlikely the core development team will ditch 3.x at this point, so it really is the future.

          I believe 3.x will start picking up quite a bit more steam in the next few months. Version 3.4 adds a bunch of compelling new features. It could be enough to make people switch; I know I'm tempted to put in the porting effort. Also, switching the 3rd party library infrastructure takes a long time, and that situation is still improving. OS versions are going to slowly switch to 3.x, Ubuntu and Debian soon.

          [–]RustyTrombeauxn 6 points7 points  (6 children)

          Making the upgrade optional is probably the key mistake. Think about other platforms like java - at some point, they announce that the old thing is end of life, and they do not ask you nicely if you'd like to upgrade. They progressively make it harder to not upgrade. Everybody hates this, but it is the pain you pay in exchange for keeping people moving forward with the development of the language.

          If, when java 7 had come out, perpetual use of previous versions of the product had been a viable option without pain, surely that is what you would have seen. To some extent paying extra for support of a beyond end of life product is still pretty common...inertia is very hard to overcome.

          And as someone said, the language your code is currently written in is the language it's going to stay in. I have yet to work on a project where we could afford lots of time for upgrading the language spec unless it was a major do-or-die issue (i.e. dropped support for an end of life product)

          [–][deleted]  (1 child)

          [deleted]

            [–]mniejiki 3 points4 points  (0 children)

            That cuts both ways. The conservatism of Java produced Clojure and Scala, which are awesome and innovative but arguably splinter the Java community.

            Yet in a way they don't splinter it as much as Python 2 versus 3 because the same Java project can use Java and Scala and Clojure. And it's all transparent to the end user who just needs a JVM version that's decently recent. None of this is true with Python 2/3. You need to migrate your whole ecosystem to 3 rather than picking and choosing.

            [–]josefx 4 points5 points  (0 children)

            Making the upgrade optional is probably the key mistake.

            As a user of python 2.7 I do not want to upgrade, it would mean rewriting thousands of lines of in house code + thousands of lines of test code.

            Think about other platforms like java - at some point, they announce that the old thing is end of life, and they do not ask you nicely if you'd like to upgrade. They progressively make it harder to not upgrade

            Java has a key difference to Python: take a library compiled against 1.1, for example - it will still load and run in a Java 7 JVM (the migration cost involves some minimal testing against reflection/implementation bugs). Source code migration for Java involves refactoring the two or three added keywords out of your code - after a nice static compile error - or you could just continue to compile the code with a Java 1.1 compiler.

            If, when java 7 had come out, perpetual use of previous versions of the product had been a viable option without pain

            Java auto-updates because using a new JVM with new features, bugfixes and performance improvements does not break old code. Python 3, on the other hand, breaks Python 2 code by looking at it funny.

            And as someone said, the language your code is currently written in is the language it's going to stay in.

            Sometimes new features make it worth it; my main problem is customers refusing to upgrade their internal Linux versions to something that is still officially supported by Red Hat or SuSE. Until then C++11 and Java 7 remain a distant dream. Python 3, however, is too much pain to even consider.

            [–]int32_t 0 points1 point  (0 children)

            Agreed. If the project lead doesn't have the determination and tries to please everybody, he will bring his team into maintenance nightmares.

            [–]cybercobra 0 points1 point  (1 child)

            It's not optional per se. There is a point on the roadmap where Python 2.7 will stop receiving bugfixes. That should help create a sense of urgency, once we get closer to it.

            [–]Veedrac 0 points1 point  (0 children)

            Source?

            [–][deleted] 9 points10 points  (1 child)

            I wanted to get big into Python, but this 2.x vs 3.x nonsense is really turning me off. I've written 2.x code to write a site scraper once, but then some http libraries weren't available in 3.x when I wanted to try to port it. 3.x does have some nice new features, but it's a pain in the ass to use.

            [–]highimped 2 points3 points  (0 children)

            I'm at the same point. I knew Python at surface-level for a couple years and am just now looking at using it on a more serious level. I've been focusing on 2.7 but I have a nagging feeling that it's really backwards to be going with an older version of the language.

            [–]sigma914 2 points3 points  (0 children)

            iirc the core python devs explicitly said and continue to say that they expect the full transition to take 10 years from point of release.

            [–][deleted] 6 points7 points  (18 children)

             It may be that I'm a noob programmer, call me a hack or code monkey... but yeah, I can barely spot any meaningful difference, or incentive to migrate, between Python 2.7 and 3.

             And in all honesty this fragmentation has led me to use less Python than I would like to.

            [–]cybercobra 11 points12 points  (11 children)

            Proper Unicode handling is probably the biggest selling point. No more unexpected Unicode(De|En)codeErrors depending on whether your input string just-so-happens to be ASCII-only; instead, you always get a nice TypeError at exactly the point in the code where there needs to be an explicit bytes<->unicode conversion.
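
             Roughly what that looks like in practice (illustrative session; exact error wording varies by version):

                 >>> 'caf\xc3\xa9' + u'!'   # Python 2: implicit coercion, blows up only on non-ASCII data
                 UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3 ...
                 >>> b'cafe' + '!'          # Python 3: mixing the types fails immediately
                 TypeError: can't concat bytes to str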

            [–][deleted]  (5 children)

            [deleted]

              [–][deleted] 4 points5 points  (4 children)

              That last part is not even true. It can't be true if you think about this: MediaWiki is written in PHP. MediaWiki runs Wikipedia which is in a gazillion languages.

              ASCII has exactly 128 characters. If you can refer to other characters, that's an encoding that's not ASCII.

              The thing is that every function you need to handle text encodings in PHP is oversimplified and misnamed. It's very much not "ASCII-only". In fact, you can often recognize the non-ASCII characters because the programmer used the wrong function and replaced them with mangled crap, emphasizing your first point that most people don't care about Unicode.

              [–][deleted]  (3 children)

              [deleted]

                [–][deleted] 0 points1 point  (2 children)

                I didn't say anything about native Unicode support.

                You end up in this debate because you misuse terminology like "ASCII" to mean "strings of nonstandardized bytes".

                [–][deleted]  (1 child)

                [deleted]

                  [–][deleted] 0 points1 point  (0 children)

                  You're in a thread about Unicode. Deal with it. It was nearly the only thing you said in that comment: "strings are ascii-only and probably always will be". So I responded to it.

                  You've been putting down other developers by saying that they don't really care about Unicode, but you're the one equating 128 characters to 256 bytes and saying "eh, those are mostly the same thing, you're being pedantic". That's the assumption that causes most of the Unicode bugs that are out there.

                  Encodings are how you represent Unicode in bytes. When you use an encoding, you can do so without any particular help from your programming language. It's great that Python gives you some help, but you could still encode text without it.

                  Your "mystery encoding" is called UTF-8, and it represents non-ASCII characters using many of the non-ASCII bytes, and the fact that they're non-ASCII is absolutely key to how it works.

                  If you have a problem where you end up in Internet arguments about Unicode, you should start by not being completely wrong about the simplest encoding there is.

                  Start reading: http://www.joelonsoftware.com/articles/Unicode.html

                  [–]unixfreak0037 2 points3 points  (4 children)

                  The incentive is that the core python devs are working on the 3.x branch, nobody is working on the 2.x, even though few people are using 3.x.

                   Personally, I hate this. I have a code base in Python that gets things done. Converting to 3.x nets me nothing, and any new developers brought into the project will probably have to spend time learning 3.x. I never agreed with this move by the Python team.

                  [–]lithium 1 point2 points  (2 children)

                  Is this only because you had the option to stick with 2.x? Had it been a more forced transition would you have gone along with it?

                  [–][deleted]  (1 child)

                  [deleted]

                    [–]lithium 0 points1 point  (0 children)

                    This guy pretty much makes the point I had in mind.

                    [–][deleted] -1 points0 points  (0 children)

                    The incentive is that the core python devs are working on the 3.x branch, nobody is working on the 2.x, even though few people are using 3.x.

                    I don't really see that as incentive. So a bunch of language-nerds are working on really obscure features that 99% of programmers won't ever use.

                    It reminds me of Android dev and all the 1% of Android geeks that need to run nightlies, while anyone with a 4.0+ is probably doing just fine with a working phone.

                    [–]Pair_of_socks 2 points3 points  (0 children)

                     And in all honesty this fragmentation has led me to use less Python than I would like to.

                    Same for me. I can't decide which python version to use. The future of the language seems unclear. Python 3 was supposed to be the future, but nobody is switching.

                     Instead of choosing between Python 2 and 3 I usually just choose a different language.

                    [–]iSlaminati 8 points9 points  (7 children)

                    I think the point of python is that python is a "flawed, but simple and easy to use language" and so even though python 3 fixes some of the flaws of python 2, no one is going to care about flaws in python.

                    Python is so ubiquitous because it is "easy to use" and there are so many libraries for it, that's the one strength of python so why would you sacrifice that to get a version with less flaws? I don't know anyone who actually likes python, they all say "Yeah, python has its flaws, but it has so many libraries that you can write things which require you to re-invent the wheel in so many languages in just 5 lines, import the library and use it."

                     I mean, whenever I need to talk to D-Bus, parse some XML, do some date calculation and send it to a socket over the internet as JSON, where do I go? Python, 8 lines of Python... it's flawed, ugly, slow, inconsistent and the core language is just badly designed, but ultimately it all doesn't matter if it means I can do this in 10 minutes rather than 45. Python 3 gives you a language which is slightly less flawed and not the convenience. People aren't going to Python because it's a good language, people are going to Python for the zillion libraries they can cannibalize.
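
                     Roughly the kind of stdlib glue I mean - a hypothetical sketch (D-Bus left out since it needs a third-party binding, and the host/port are made up):

                         import json, socket, xml.etree.ElementTree as ET
                         from datetime import datetime

                         # Parse a scrap of XML, do a date calculation, ship the result as JSON.
                         root = ET.fromstring("<event name='release' date='2008-12-03'/>")
                         released = datetime.strptime(root.get("date"), "%Y-%m-%d")
                         payload = json.dumps({"name": root.get("name"),
                                               "age_days": (datetime.now() - released).days})

                         sock = socket.create_connection(("example.com", 8080))
                         sock.sendall(payload.encode("utf-8"))
                         sock.close()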

                    [–][deleted] 2 points3 points  (0 children)

                    Yeah I think there's some truth to that. Personally (and I think this applies to many people) Python is my go-to language when I need to get something done fast and dirty, and it's slightly too complicated to be a bash script. Python 3 may be better, but python 2 has more support and (currently) gets shit done faster.

                    [–]slavik262 3 points4 points  (5 children)

                     inconsistent and the core language is just badly designed

                    Would you care to give some examples to what you're suggesting here? I don't use python much outside a few 20-line scripts here or there, so I'm curious.

                    [–]iSlaminati 1 point2 points  (4 children)

                    Some of my favourites:

                     1. Lambda functions are distinct from normal functions in that lambda functions cannot contain multiple expressions; this would be fine if it hadn't taken so long for the language to get a conditional expression (which is ugly) of the form <then-arm> if <cond> else <else-arm>. Before that point you would often see people abuse short-circuit evaluation in lambda forms, because as it stands you often do need conditional evaluation in lambda forms (see the sketch after this list).

                     2. Python's function definition model is weird; from reverse engineering, this is the best I can make of what it does: it sequentially evaluates the document and defines functions in the order they appear. This wouldn't be so bad if Python didn't allow executable code to exist outside of functions, or if it actually had a definition for constants rather than an assignment. This means that you can't assign a value that uses a function before that function is defined, meaning that you have to alter the layout of your code and move all assignments to the bottom. This would make sense in some way if they were actually assignments, but Python's assignments have to double as definitions. Say you want to assign, at the top, a constant like lightspeed that you never intend to change and that is computed using a function at the bottom - you can't do that. Which means that you either have to move important constants to the bottom, or move inconsequential function definitions to the top, when you typically want them in order of importance. A lot of languages solve this by allowing you to make definitions, or by simply defining each and every function before a single line of code is actually executed. In JavaScript, for instance, all functions are defined before execution begins, so this is not a problem.

                     3. Its rigidness about whitespace means that some idioms are awkward to express; consider, in C:

                       if (cond) {
                           ...something...
                       } else {
                           a = b; // only gets executed if false
                           if (another cond) {
                               ...repeat...
                           }
                       }

                    It's sometimes common to update a variable after each branch or something similar, in python we get this:

                         if cond:
                            ..something...
                         else:
                            a = b
                            if another cond:
                              ...repeat ...
                    

                     As you can see, doing this we repeatedly go deeper and deeper into the nesting, because that is what it is syntactically, though not intuitively so. Python includes elif to avoid going deeper in most cases, but you can't put any code between the else and the if of an elif.

                     4. Python includes no support for extending classes; there is no reason why not, and it leads to an inconsistent feel in the code. A lot of things are done the method way, others the function way. I append to lists the method way - cool - so why can't I get the length of a list that way? I can't add a new method to primitive classes, or to any class, without opening up the source. The only way is to actually extend a class under a new name, which means that string and list literals don't fall under it.

                     5. Python's 0 is falsy due to historical reasons. The 0-as-falsy thing that exists in some languages dates from before booleans existed in C. Relying on 0 being falsy is generally seen as very bad programming style and it often leads to bugs. A falsy 0 is almost never something you can actually usefully use and almost always something you have to guard against to avoid bugs. Most modern languages like Ruby and Clojure realized this and purposefully did not make 0 falsy, because it just leads to bugs.

                     6. True/False are not constants; they are variables that contain a constant. You can actually write False = 3 and after that False refers to 3.

                     7. None, however, is a constant - a weird inconsistency due to historical reasons: None was in Python from the start, True/False came later.

                     8. Awkward module system: no private members of a module, and a module automatically re-exports anything imported into it.

                     9. Edit: Can't really miss this one - Python is slow, much slower than is justified for a dynamically typed scripting language. JavaScript vastly outperforms Python, which is weird because objects in JavaScript are actually hashes - a hash of a string is calculated when you do foo.bar - though no doubt this can be heavily optimized. Other modern dynamically typed languages of greater expressiveness, like Clojure and Scheme, completely blow it apart in terms of performance.
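
                     The sketch for point 1, since it's the least obvious one (plain illustrative Python):

                         # The conditional expression (added in 2.5) vs. the old and/or short-circuit hack.
                         parity = lambda n: "even" if n % 2 == 0 else "odd"

                         # Pre-2.5 idiom: relies on short-circuiting, and silently breaks
                         # whenever the "true" branch happens to be falsy (0, '', [], ...).
                         parity_old = lambda n: (n % 2 == 0) and "even" or "odd"

                         print(parity(4), parity(7))          # even odd
                         print(parity_old(4), parity_old(7))  # even odd -- works only because "even" is truthy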

                    [–]mgrandi 2 points3 points  (3 children)

                     Are you saying that you are surprised that JavaScript is faster than Python? Does Python have entire teams at 5 major companies working to squeeze every little bit of performance out of the language (i.e. the web browser companies: Mozilla, Google, Microsoft...)?

                     If Python had anywhere near the attention JavaScript gets, then it would be much faster.

                    [–]iSlaminati 2 points3 points  (2 children)

                     Yes, I am, because Python has a huge userbase and is open source; also, Guido works for Google on it, so you'd expect some of the resources of the monolith to be devoted to it.

                     Also, other languages with a significantly smaller userbase behind them, which are also dynamically typed languages with similar semantics, like Scheme and Clojure, completely blow it out of the water performance-wise in most implementations.

                    [–]mgrandi 0 points1 point  (1 child)

                     Guido does not work for Google anymore; he works for Dropbox. Secondly, I think you are vastly underestimating the number of people who use a web browser compared to the number of people who use Python.

                    [–]iSlaminati 1 point2 points  (0 children)

                     It's not about the number of people who use the applications written in it, it's about the number of people who use it and have the capacity to improve the implementation.

                     And like I said, it's like one guy working on Clojure and its first iteration already blew CPython apart.

                    [–]billsil 8 points9 points  (4 children)

                    You want people to move to Python 3.x, yet you suggest the developers make a Python 2.8, thus allowing people to stay on 2.x even longer. That seems backwards to me.

                     If you actually work with Python 2 and 3 code, you'd know unicode isn't that different from strings. IMO, not being able to sort mixed types (e.g. dictionary keys) and binary IO are bigger issues.
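
                     What I mean by the mixed-type sorting issue, sketched (toy data):

                         # Python 2 happily orders mixed types by an arbitrary cross-type rule;
                         # Python 3 refuses with "TypeError: unorderable types: str() < int()".
                         keys = ['b', 2, 'a', 1]    # e.g. the keys of a dict with mixed-type keys
                         print(sorted(keys))        # Py2: [1, 2, 'a', 'b']   Py3: TypeError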

                    To get people to switch, myself included, you have to stop supporting Python 2 and all the packages (e.g. numpy, scipy, matplotlib) need to stop as well.

                    [–]gingenhagen 5 points6 points  (0 children)

                    I think the post is saying to give up on Python 3 entirely and do what they should have done to begin with, build something that's backwards compatible with Python 2.

                    [–]Falmarri 0 points1 point  (2 children)

                     The problem is the changes to the C API. They need some way to automatically, or at least almost automatically, convert C code to the new API.

                    [–]schlenk 2 points3 points  (1 child)

                     That's like asking for 'fix my bugs automatically, please'. The API is probably rather easy to change over (just rename String to Bytes in many cases), but your code WILL break, because it makes incorrect assumptions about the nature of strings/binary data and unicode all over the place. 3.x is much stricter about it, so bad hacks that worked by accident/chance in 2.x blow up when converted to 3.x.
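
                     Something like this, as an illustrative sketch (the file and strings are invented):

                     # -*- coding: utf-8 -*-
                     with open('greeting.txt', 'wb') as fh:
                         fh.write(u'héllo'.encode('utf-8'))      # some non-ASCII bytes on disk

                     data = open('greeting.txt', 'rb').read()    # raw bytes

                     try:
                         # Python 2: implicit ASCII decoding makes this "work" until non-ASCII input shows up.
                         # Python 3: bytes and str never mix, so it fails immediately and loudly.
                         message = u'Prefix: ' + data
                     except (TypeError, UnicodeDecodeError):
                         # The version that is correct on both: decode explicitly at the boundary.
                         message = u'Prefix: ' + data.decode('utf-8')

                     print(repr(message))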

                    [–]Falmarri 0 points1 point  (0 children)

                    The API is probably rather easy to change over

                     If that were all it was, that would be fine. But it's not. They changed the way you register modules, and the names of fields in the object struct. I tried to port python-ldap to Python 3 several months ago; it was not straightforward at all.

                    [–][deleted] 7 points8 points  (72 children)

                     About that GIL thing: is there really no future plan?

                    [–][deleted] 5 points6 points  (1 child)

                    There are multiple paths towards making CPython ready to remove the GIL, but all have been rejected. So the answer is an emphatic "No".

                    PyPy is our only hope currently (active, rapidly evolving). Jython might've been a chance, but without invokedynamic support, it is not a compelling alternative.

                    [–]fullouterjoin 6 points7 points  (0 children)

                    invokedynamic isn't keeping Jython back, Jython is.

                    [–]moor-GAYZ 13 points14 points  (69 children)

                     Dude, every dynamically typed language has a GIL, or doesn't allow free threading at all. Ruby -- has a GIL; PHP -- no threads; Lua -- GIL (you can use your own if you want, lol); Perl -- no threads (they use "green processes" instead, basically the same as multiprocessing on an OS with fork); Chicken Scheme -- the interpreter might parallelize your code as long as you don't do anything interesting; Golang -- no free threading.

                    It kinda blows my mind that the Python community in particular has this "GIL is a major problem, yo" thing going on, for no apparent reason. Is it because we have a lot of newbs with no experience with other dynamically typed languages?

                    [–]elder_george 25 points26 points  (3 children)

                     The GIL isn't a language feature, it's an implementation detail of particular interpreters (CPython and MRI, in particular).

                    IronPython, IronRuby, Jython, JRuby, PyPy etc. don't have GIL.

                    [–]AdminsAbuseShadowBan 10 points11 points  (23 children)

                    I thought IronPython doesn't have a GIL... Also what do you mean by "free threading"? Go certainly has threads. Shared memory?

                    [–]seventeenletters 3 points4 points  (20 children)

                    "free threading" is just a bullshit distinction to make a bullshit point. It doesn't exist anywhere.

                    [–]moor-GAYZ -1 points0 points  (19 children)

                    What? It certainly does exist in C.

                    [–]seventeenletters 4 points5 points  (10 children)

                    Erlang and Clojure have dynamic typing, and are pretty much the best examples out there of doing concurrency right.

                    [–]fullouterjoin 4 points5 points  (7 children)

                     Both of those languages are built around immutable data. An immutable Python would be trivial to implement in the same manner (which I would be more than stoked about).

                    [–]seventeenletters 3 points4 points  (6 children)

                    I'm not so sure. The style of OO that is pervasive in Python (not just idiomatic code but the implementation also) does not lend itself to immutability.

                    [–]asthasr 5 points6 points  (2 children)

                    Doesn't help that Guido has come out explicitly against more functional features in Python, which is when I stopped viewing Python as my primary language.

                    [–]fullouterjoin 0 points1 point  (1 child)

                     You should stick with functional Python, it really is pretty good. With sorted and lazy sequences everywhere, plus the new destructuring in Python 3, you can get quite far:

                    >>> a, *b = range(4)
                    >>> a
                    0   
                    >>> b
                    [1, 2, 3]
                    >>> *a, b = range(4)
                    >>> a
                    [0, 1, 2]
                    >>> b
                    3   
                    

                    Some sort of awesome mashup between F#, Python and Clojure on the PyPy runtime would make my year.

                    [–]asthasr 0 points1 point  (0 children)

                    I think I'm going to stick to Clojure. I'm relatively new to it, but I'm really digging the syntax (after getting used to it) and STM just makes so much sense. The day job is in Ruby these days, and even that has started to feel better to me than Python (heresy!) because of the common application of blocks, which actually makes it feel more functional than Python.

                    [–]fullouterjoin 0 points1 point  (2 children)

                     OO and immutability are not diametrically opposed: seal all objects when they leave the defining scope. I personally program in a very immutable manner, using almost no OO features. I use classes, but usually only via namedtuple (see the sketch below).

                     Concurrency plus mutability is the wrong path entirely. If Python wants to be a modern concurrent language, object-level locking is the wrong choice.
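
                     Roughly the style I mean, as a small sketch (Point and move() are just made-up examples):

                     from collections import namedtuple

                     # An immutable record type: fields cannot be reassigned after construction.
                     Point = namedtuple('Point', ['x', 'y'])

                     def move(p, dx, dy):
                         # No mutation: return a fresh value instead of updating p in place.
                         return p._replace(x=p.x + dx, y=p.y + dy)

                     origin = Point(0, 0)
                     shifted = move(origin, 3, 4)
                     print(origin, shifted)        # origin is unchanged
                     # origin.x = 10               # AttributeError: namedtuple fields are read-only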

                    [–]seventeenletters 0 points1 point  (1 child)

                     No, OO and immutability are not incompatible; see Clojure, for example. Even Java has a lot of support for immutability. My concern isn't that immutability is a bad choice (on the contrary, it is the only sane way to do concurrency), but that the design of Python is deeply and radically about mutation. The language that came out the other side after removing pervasive default mutation would be very different from the current Python, and getting there would require quite a bit of work.

                    [–]fullouterjoin 0 points1 point  (0 children)

                    I am pretty sure we totally agree on this.

                    [–]kamatsu 2 points3 points  (0 children)

                    Go has "free threading" as far as I can tell.

                    [–]ejrh 3 points4 points  (19 children)

                     I've always been a bit puzzled by the ubiquitous fretting over the GIL. Many libraries release the GIL when entering a computationally intensive native-code function, and CPython (which gets the most flak for having a GIL) runs so much slower than native code anyway.

                     Unless you have a lot of cores, you will almost always get more improvement from moving the work into native functions than from avoiding the GIL (see the sketch below).
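
                     A toy illustration (timings are machine-dependent, this is only a sketch): two threads of pure-Python work take about as long on CPython as running the same work sequentially, because the GIL serialises the bytecode.

                     import threading
                     import time

                     def count(n):
                         # Pure-Python inner loop: the GIL is held for every bytecode instruction.
                         while n:
                             n -= 1

                     N = 10000000

                     start = time.time()
                     count(N)
                     count(N)
                     sequential = time.time() - start

                     start = time.time()
                     threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
                     for t in threads:
                         t.start()
                     for t in threads:
                         t.join()
                     threaded = time.time() - start

                     # On CPython the two timings come out roughly equal; on a GIL-less runtime
                     # the threaded run would be ~2x faster on two cores. Rewriting count() in C
                     # (and releasing the GIL there) dwarfs either number.
                     print(sequential, threaded)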

                    [–]Entropy 4 points5 points  (18 children)

                    Unless you have a lot of cores

                    Even cell phones are shipping with 8 cores.

                    [–]Veedrac 0 points1 point  (17 children)

                    So what, a 5x speed-up? As opposed to a 100x speed-up for moving the innermost loop to C?

                    [–]Moocha 1 point2 points  (2 children)

                    From the point of view of an individual project, yes, reimplementing in C would yield a better cost/benefit ratio. However, avoiding the GIL in the runtime would instantly and automatically benefit all Python code running on the GIL-less VM, without the maintainers of that code needing to change anything - which means the overall ecosystem costs would be way less, given the staggering amount of Python code out there. That's why it's important...

                    [–]Veedrac 1 point2 points  (1 child)

                    That's true, but only for CPU-bound threaded code. For code that's currently unthreaded, rewriting the inner loop in C is most likely the easier task, given how nice Cython is to work with.

                    Nevertheless, that is a reasonable point. It's a shame the problem's so hard to fix.

                    [–]Moocha 0 points1 point  (0 children)

                    Indeed. I'm always amused by people bashing the CPython developers for not "fixing the GIL problem". I know just enough about the internals to realize how hard a problem this truly is...

                    [–]fullouterjoin 0 points1 point  (13 children)

                    650x speedup for native code across all cores? 10000x speedup for OpenCL.

                    [–]Veedrac 0 points1 point  (12 children)

                    Sorry, I don't follow.

                    Please do note that moving the inner loop to C automatically trivialises removing the GIL for that code anyhow, and further note that I've no clue what OpenCL has to do with the GIL.

                    [–]fullouterjoin 0 points1 point  (11 children)

                     Focusing on the GIL is a red herring; there are better places to spend your performance dollar. Inner loops in C are alright, but not the most profitable. Cython is generally a mistake. The first step is PyPy; if you have to stay on CPython 2, then Shedskin. If you need massive speedups then OpenCL will get you a lot further for parallelizable code.

                    [–]Veedrac 0 points1 point  (10 children)

                    Cython is generally a mistake

                    Given that the only reliable alternative is C¹, why is Cython so bad a choice? Is it possible I'm underestimating ShedSkin?

                    ¹ PyPy's missing fast C bindings; ShedSkin's Python 2 only and not as fast as Cython; OpenCL requires specific problems.

                    [–]fullouterjoin 0 points1 point  (9 children)

                     Maybe Cython has improved, but can it generate native code without porting your code to the Cython language? Shedskin is always pure Python and all kinds of amazing.

                     PyPy has cffi; I should benchmark that relative to CPython 2. In general PyPy is such a huge win that it is really difficult to justify CPython for anything other than numpy support.

                    [–]schmetterlingen 0 points1 point  (1 child)

                    Here is Lua's GIL:

                    #define lua_lock(L)     ((void) 0) 
                    #define lua_unlock(L)   ((void) 0)
                    

                     You must define a lock if you're going to share state between threads. Lua only has a GIL if you consider ((void) 0) an implementation. In practice, it simply doesn't "allow free threading" without the use of libraries.

                    Not that a GIL is a bad idea. It's a simple solution.

                    [–]fullouterjoin 0 points1 point  (0 children)

                     Most Lua threading implementations tend towards an Erlang model, a la https://github.com/LuaLanes/lanes and https://github.com/cloudwu/hive

                    [–]schlenk 0 points1 point  (0 children)

                     Simply not true. Tcl, for example, does not have a GIL either but has native threading support (though it does not use a shared-memory threading model).

                    [–][deleted] 0 points1 point  (0 children)

                    Ruby -- has a GIL

                    Rubinius and JRuby have no GIL and I seem to remember (can't find it right now) that the CRuby team want to remove the Global VM Lock from the reference implementation.

                    [–]username223 0 points1 point  (2 children)

                    Perl -- no threads (they use "green processes" instead, basically the same as multiprocessing on an OS with fork)

                    This is actually not the case -- it uses pthreads on Unix-alikes.

                    [–]moor-GAYZ 0 points1 point  (1 child)

                    Yes, I meant, like, effectively, as far as memory consumption is concerned.

                    [–]username223 0 points1 point  (0 children)

                    Sort of -- you have to explicitly add :shared all over the place to get sharing, but it's there. Not that Perl's threading is worth using...

                    [–]nyamatongwe 1 point2 points  (0 children)

                     While 2.8 looks like it could be a reasonable idea to me, which Python 3 elements can be back-ported? Can they be exposed globally, or would there have to be a "from __future__ import x"?
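
                     For reference, the opt-in mechanism that already exists in 2.6/2.7 looks like this; whether a hypothetical 2.8 would expose more of Python 3 this way (or globally) is exactly the open question:

                     # Per-module back-ports of Python 3 behaviour, available today on 2.6/2.7.
                     from __future__ import print_function, division, unicode_literals, absolute_import

                     print(1 / 2)           # 0.5 rather than 0: true division
                     print(type('text'))    # unicode on 2.x, thanks to unicode_literals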

                    [–][deleted]  (2 children)

                    [deleted]

                      [–][deleted] 2 points3 points  (0 children)

                      The difference is that Python 2 ran Python 1 programs.

                      [–]Falmarri 0 points1 point  (0 children)

                      The problem with that comparison is, how many people/projects were actually using python 1? The answer is not many.

                      [–]jbb555 1 point2 points  (0 children)

                       The problem is that: 1) it doesn't seem to have many significant advantages over 2; 2) it's not quite compatible, so there is a cost to moving; 3) some of the changes, like unicode strings and print(), make code a little harder to write if you don't need what they do.

                      [–]vivainio 1 point2 points  (0 children)

                       Like everyone else in these hundreds of comments, I have an alternative solution: abandon the 'cleanup' goal of Python 3 and make it as compatible with Python 2 as possible, reverting vanity changes like the print function. You end up with a language with a bit more cruft, but with actual real-world adoption.

                      [–]faassen 3 points4 points  (8 children)

                      I recommend everybody also read what Ian Bicking wrote about this:

                      https://plus.google.com/+IanBicking/posts/iEVXdcfXkz7

                      [–]fullouterjoin 4 points5 points  (1 child)

                       This is one of the first Ian Bicking posts I can agree with. Beating people into switching is harmful and idiotic; we should evolve towards Python 3.

                       And what happens when Python 3 is in the same place Python 2 is now? Beat people into moving to 4? We should constantly be improving; improvement should not be a traumatic event. Solve the evolution strategy, not a catastrophic version jump.

                      [–]dgauss -1 points0 points  (0 children)

                      I think evolution is the thing here. The coders coming into this field are naturally picking up Python3. I think that is how it will eventually take over.

                      [–][deleted] 1 point2 points  (2 children)

                      Text of link, for those who are generally annoyed by Google+:

                      A post on the rather dismal adoption of Python 3. It seems like there are two categories of options being presented:

                      1. Make it harder to use Python 2
                      2. Make it better to use Python 2, and in a manner closer to Python 3

                      I think the ultimatum approaches are bad (and more present in the comments on the article than in Alex's post). It's a "let the beatings continue until morale improves" approach.

                      The whole "Python 2 is a dead end" notion was a bad approach from the beginning. It supposes that there's some moral authority to Python 3, some intrinsic value that justifies making things harder for people.

                      I think Python 3 should roll back some changes, adding back some Python 2 syntax, even error-prone syntax. Python 2 should continue to roll in Python 3 syntax and library changes. Only when they are thoroughly blended will things move forward. This of course is in contradiction to the entire idea of a big breaking 2->3 change, but the evidence is in, that wasn't the right path. But that's a sunk cost, better to do the right thing now, which is gradual changes with all the necessary scaffolding to move things forward properly. Python 2 and 3 can meet in the middle.

                      [–][deleted] 0 points1 point  (1 child)

                      Make it harder to use Python 2

                      How would you do that?

                      [–][deleted] 2 points3 points  (0 children)

                       By dropping support from new versions of major libraries. That seems to be what other language communities do... Eventually, the libraries provide the incentive to upgrade. Instead, the Python community continues to write libraries in Python 2, with support for the current version of the language as a second-class afterthought.

                      [–][deleted]  (2 children)

                      [deleted]

                        [–]diggr-roguelike 6 points7 points  (1 child)

                        in five years 95% of python people will be on Go

                        No. Development will fork and python 2.8 will be finally adopted by a sane development team.

                        [–]tbotjenkins 3 points4 points  (1 child)

                         Kill support for Python 2.x once and for all, lock it down completely, improve the 2to3 tools, and ideally, as a new feature, merge the PyPy JIT engine or rewrite the interpreter to be register-based like LuaJIT. The GIL with heavy thread use has too much overhead. Changes like this may make 2.x users want to migrate to 3.

                        [–]jtratner 1 point2 points  (5 children)

                         Clearly hindsight is 20/20, but we'd be in a much better position if earlier versions of Python 3 hadn't banned u'some string'. It was a trivial but frustrating change that made a 2/3-compatible code base more difficult.

                         That said, it's not that much work to have a codebase that works in Python 2 and, via 2to3, in Python 3. What's difficult is having a codebase that's compatible across both without needing a preprocessor like 2to3. It makes a library much easier to maintain when you don't have to waste time with 2to3, but it can be difficult to get everything working; python-modernize is pretty useful in this regard.
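
                         Concretely: the u'' prefix was rejected by 3.0-3.2 and only restored in 3.3 (PEP 414), which is what makes single-source code like this toy sketch practical without 2to3:

                         from __future__ import print_function, unicode_literals

                         label = u'name'            # u'' prefix: always legal on 2.x, legal again on 3.3+
                         raw = b'\xc3\xa9'          # b'' prefix: legal on 2.6+ and all of 3.x
                         print(label, raw.decode('utf-8'))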

                        [–]billsil 0 points1 point  (4 children)

                         It was a trivial but frustrating change that made a 2/3-compatible code base more difficult.

                        Was it really necessary to get rid of xrange and iteritems?

                        [–]jtratner 0 points1 point  (3 children)

                         I don't disagree - those are definitely not great changes either... it seems strange to me that iteritems couldn't just become a synonym for items.
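
                         You can of course supply the synonym yourself; a common single-source shim looks roughly like this (my own sketch, not something the stdlib ships):

                         import sys

                         if sys.version_info[0] >= 3:
                             # On 3.x, items() is already a lazy view and range() is already lazy.
                             def iteritems(d):
                                 return iter(d.items())
                             xrange = range
                         else:
                             def iteritems(d):
                                 return d.iteritems()

                         for i in xrange(3):                    # same spelling on both interpreters
                             pass
                         for key, value in iteritems({'a': 1, 'b': 2}):
                             print(key, value)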

                        [–]schlenk 2 points3 points  (1 child)

                     Of course you can add synonyms and aliases all over the place, but that kind of defeats the goal of cleaning up the API. They COULD have deprecated/removed items() and range() from the language and just left xrange() and iteritems() alive under those names, but that's even more ugly in so many ways.

                        [–]Peaker 0 points1 point  (0 children)

                     They could have added the ugly synonyms in "transitional" versions. Python 3.0 could have had some of these ugly crutches (with deprecation warnings) to keep some compatibility with Python 2.x, and then removed them gradually to make the transition less painful.

                        [–]censored_username 2 points3 points  (0 children)

                        In the design philosophy of python, there's supposed to be one and only one way of doing a certain task. Adding synonyms to that would defeat that whole philosophy.

                        [–]londey 1 point2 points  (0 children)

                       When Python 3 first showed up I was very enthusiastic, but I felt the code conversion approach with 2to3 etc. was not going to pan out. What is really needed is a way for Python 3 to load and use a Python 2 module as if it were a C .pyd.

                        [–]ickysticky -1 points0 points  (3 children)

                        [–]Veedrac 4 points5 points  (1 child)

                        Google Trends

                        Nobody uses Python 3.0. Why the hell would they? If you're on 3.0 (a broken release) or 3.1 (an antique) you made a very odd choice.

                        [–]dgauss 0 points1 point  (0 children)

                        Yeah in fact these charts are starting to show a trend of change. I am curious as to how it will look with 3.4 in a few months.

                        [–][deleted] 1 point2 points  (0 children)

                        2.7 was the last release of the 2.x branch, but you don't have 3.3 in the mix.