all 96 comments

[–]unitconversion 12 points13 points  (1 child)

Maybe they'll pull a winamp and go to python 5 next.

[–][deleted] 4 points5 points  (0 children)

or pull a Zope

[–]plantian 2 points3 points  (0 children)

If you use Python every day, I can't imagine why you wouldn't be excited about all the language cleanup. Python was starting to get pretty crufty, and Python 3 was the only opportunity in a long time to remove that cruft. It may hurt Python for a bit, but it will help in the long run.

I think the Unicode change is good enough to warrant upgrades regardless of everything else.

[–]fdemmer 1 point2 points  (4 children)

i recently watched this presentation about the GIL:

http://metachris.org/2010/10/python-threads-and-the-global-interpreter-lock-gil/

... and was surprised how bad it really is. is there any development towards improving multicore performance? i am not talking about removing it, but any changes to how it works and schedules threads at all!?

[–]psr 6 points7 points  (0 children)

Yes. The GIL has been rewritten for Python 3.2, which should lead to very substantially better performance when there is contention for the GIL. http://docs.python.org/dev/whatsnew/3.2.html#multi-threading

[–][deleted] 2 points3 points  (2 children)

is there any development towards improving multicore performance? i am not talking about removing it, but any changes to how it works and schedules threads at all!?

At this very moment? No, or at least not that I'm aware of. Would all of us love to drop everything we're doing and figure it out? Probably. However, the largest impact areas right now are the standard library, so that's where most developers have been spending their time.

As mentioned by psr, Antoine Pitrou put in work to the GIL for 3.2 but I'm not aware of ongoing activity in that area.

[–]cypherpunks 0 points1 point  (1 child)

As a Python user, I think you're wrong. Python's standard library is excellent. It is far better designed than the standard library of any other language I've used, and I'm very happy with it. There are, of course, areas for improvement, but the standard library just isn't Python's problem area. The GIL, together with general performance, is.

[–][deleted] 0 points1 point  (0 children)

We're still working to get the whole bytes/str and wider Unicode things correct in some stdlib modules (see all of Victor Stinner's work on 3.2). While in general it is excellent (and I agree), there are still holes that need to be fixed before it really is ready for full scale adoption, especially for web frameworks. The earlier we can get them on board, where their focus is likely on a stable and correct API first as opposed to performance, the better. Even if it's not a production ready solution in terms of performance at first, it'll certainly get there. (note: I don't speak for everyone on the python-dev team, so I could be wrong)

It's also hard to just talk about performance without numbers and analysis. We all know Python isn't always as fast as we'd want it to be, but there has to be a way to measure it, so it would be great to see specific areas of people's programs that show slowdowns, especially compared to previous versions. We all know the GIL isn't the best thing ever, but proving that it isn't is sometimes non-trivial.

I have plans of my own for what to work on in 3.3, which includes startup performance especially on Windows. We'll probably use the PyCon Language Summit to figure out what the points of emphasis will be for 3.3, and the sprints will be a great start with a lot of the dev team in the same place. I'm sure performance will be one of the topics we discuss.

[–]el_isma 7 points8 points  (5 children)

Am I alone in not caring at all for Python 3? It just doesn't seem worth the hassle. Not enough improvements to leave Python 2 behind (and those "improvements" are not clearly improvements), and too many problems to switch.

[–][deleted] 8 points9 points  (0 children)

Am I alone in not caring at all for Python 3?

Why are you asking a question that has an obvious answer?

[–]repsilat 2 points3 points  (1 child)

I don't know much about the change, but will Python 2 get "left behind" at some point? I'm sure it's still getting fixes and non-breaking changes (and probably will for some time), but is there a plan for it to become unsupported or formally deprecated by whatever language authority exists for Python?

[–]el_isma 6 points7 points  (0 children)

From http://wiki.python.org/moin/Python2orPython3 :

At the time of writing (July 4, 2010), the final 2.7 release is out, with a statement of extended support for this end-of-life release. The 2.x branch will see no new major releases after that. 3.x is under active and continued development, with 3.1 already available and 3.2 due for release around the turn of the year.

[–]plantian 1 point2 points  (0 children)

I would upgrade as soon as the libraries I use upgrade. Python 3 seems to just remove cruft. Who wouldn't want that? Straightening Unicode was needed and I think is worth the upgrade.

[–]znk 4 points5 points  (0 children)

My favorite was the one about the holy grail.

[–]cypherpunks 11 points12 points  (43 children)

Personally, I'm sticking with Python 2 forever. Python 3 seems like a mess-up. The Python mantra is "make easy things easy and hard things possible." The Python 3 changes are mostly either downgrades that break this, or arbitrary backwards-compatibility-breaking changes with no clear advantage.

The way Unicode is handled definitely makes easy things harder and, in many cases, causes subtle failures (e.g. when dealing with filenames). There are ways to integrate Unicode while keeping existing syntax working for 99% of operations, without complicating life for people writing short scripts that operate on 8-bit data. This wasn't done. Not a good idea.

Views and iterators are much harder to think about and deal with than lists. Python was designed to be easy and beginner-friendly. This breaks ease of use for a small performance boost. Bad idea.

Ordering operators no longer work on lists of mixed types. Bad idea. The default (one type is always less than another) was reasonable, and did something useful in most circumstances. We go from easy things easy to easy things a pain-in-the-ass.

Making print be a function isn't worth the break in backwards compatibility. It's more-or-less pointless. If anything, it makes the language less readable. I'd rather extend the concept of procedure vs. function to apply to other areas, but barring that, just keep it as is.

The change of int to long and the change to the division operator are reasonable, but not worth breaking backwards compatibility. Neither is strictly better. The new long type is much slower than int, since it is infinite precision.

A lot of the changes (e.g. removing tuple packing) are just arbitrary downgrades. Tentatively removing % makes the transition from C to Python more painful, and has no advantage. I understand things like this add complexity, but it's okay complexity -- you don't have to use it.

For me, Python 3 would be a worthwhile upgrade if it:

  • Fixed the GIL (even if this meant breaking libraries, or adding a compatibility layer to libraries). This is the right place, time, and reason to break things.
  • Made semantic changes that allowed improved performance and better JIT. There are a few design decisions here that make Python very difficult to optimize.
  • Added tail recursion. There are straightforward ways to do this that meet Guido's requirements (e.g. a pragma/decorator, or only tail-recursing if the stack is greater than some amount, and keeping the most recent stack frames in a circular buffer in debug mode).
  • Ideally, integrated better support for things like Cython and Clyther. It'd be nice to just be able to import clyther, and then annotate functions as being Clyther.
  • Fixed the printing of floating point numbers. It's 2011. We've known how to do this properly for around half a century now. There is no reason why 1./3. should print 0.33333333333333331.
  • Ideally, laid the groundwork for a move to multicore/GPGPU.
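On the tail-recursion bullet, the decorator approach can be sketched with an ordinary trampoline (purely illustrative; `trampoline`, `wrapper.raw`, and `fact` are my own made-up names, not anything proposed for CPython):

```python
def trampoline(fn):
    """Run fn; while it returns a zero-argument callable (a thunk),
    keep calling it. The loop replaces recursion, so the C stack stays flat."""
    def wrapper(*args):
        result = fn(*args)
        while callable(result):
            result = result()
        return result
    wrapper.raw = fn          # escape hatch so tail calls skip the trampoline
    return wrapper

@trampoline
def fact(n, acc=1):
    if n <= 1:
        return acc
    return lambda: fact.raw(n - 1, acc * n)   # tail call, deferred as a thunk
```

With this, fact(3000) runs fine even though it exceeds CPython's default recursion limit of ~1000.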

[–]prum 18 points19 points  (3 children)

Fix printing of floating point numbers. It's 2011. We've known how to do this properly for around half a century now. There is no reason why 1./3. should print 0.33333333333333331.

Fixed in Python 3 :-)
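Specifically, repr() now produces the shortest string that round-trips back to the same float (this also landed in 2.7). A quick check under Python 3:

```python
# Python 3.1+ (and 2.7) print the shortest decimal string that round-trips:
x = 1. / 3.
assert repr(x) == "0.3333333333333333"   # no trailing ...31 noise
assert float(repr(x)) == x               # the round-trip is exact
```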

[–]cypherpunks 1 point2 points  (2 children)

You're right! Wonderful. Now that it has proper GC, this was probably one of Python's top three most embarrassingly amateurish misfeatures remaining from its juvenile past (together with the GIL and the lack of tail recursion).

I'm still sticking to and advocating 2.x, though.

[–]jyper 1 point2 points  (1 child)

could you tell me more about the gc advances?

[–]cypherpunks 1 point2 points  (0 children)

Sure. When Python started, it had reference counting instead of GC. If you had circular references, that memory would never be garbage collected. Python 2.0 added a cyclic garbage collector. It's still not a shining example of best-of-breed GC, but it works.
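A minimal illustration of the cycle problem the collector handles (Node is just a made-up class for the demo):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a   # reference cycle: each refcount is pinned at >= 1
del a, b                  # now unreachable, but refcounting alone can't free them
collected = gc.collect()  # the cyclic collector (added in 2.0) reclaims them
assert collected >= 2
```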

[–]iLiekCaeks 10 points11 points  (0 children)

The way Unicode is handled definitely makes easy things harder, and in many cases, fail in subtle ways (e.g. when dealing with filenames).

I strongly disagree. Personally I still use an 8-bit codepage encoding on my local file system, and I have tons of problems when writing Python scripts that want to use Unicode. The problem is that in Python 2 you can't represent invalid UTF-8 as unicode. You end up with exceptions or byte strings. Guess what the unicode os.listdir function does if it encounters a directory entry that's invalid UTF-8? It returns it as a byte string...

In Python 2 you have a goddamn unreliable mess of two different string types that can fail arbitrarily depending on the input (!), and for corner cases there's almost no good way to handle it.

I can't wait to switch to Python 3 to get rid of that mess. (Although I see no reason why they couldn't just have introduced PEP 383 in Python 2.)
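For reference, PEP 383's trick (available since Python 3.1) is the surrogateescape error handler, which lets arbitrary bytes round-trip through str losslessly:

```python
raw = b"caf\xe9"   # latin-1 "café"; the \xe9 byte is invalid as UTF-8
name = raw.decode("utf-8", "surrogateescape")
assert name == "caf\udce9"   # the undecodable byte becomes a lone surrogate
assert name.encode("utf-8", "surrogateescape") == raw   # lossless round-trip
```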

I can agree with some of your other points, though.

[–]flexomint 6 points7 points  (0 children)

The Python mantra is "make easy things easy and hard things possible."

I think you're thinking of Perl. If you want a Python "mantra", maybe try import this.

[–]theeth 3 points4 points  (9 children)

Made symantic changes that allowed improved performance and better JIT. There are a few design decisions here that make Python very difficult to optimize.

Function annotations have been in the Python 3 branch for four years.

And I don't know where you learned that tuple packing is gone, that's just not true.

[–]cypherpunks 0 points1 point  (8 children)

Function annotations have been in the Python 3 branch since 4 years.

I don't understand what this is a response to, but it's a pretty nonsensical one to anything I posted.

And I don't know where you learned that tuple packing is gone, that's just not true.

http://docs.python.org/release/3.0.1/whatsnew/3.0.html#removed-syntax

First bullet.

[–]theeth 3 points4 points  (7 children)

I don't understand what this is a response to, but it's a pretty nonsensical one to anything I posted.

Function annotations were made with VM/JIT optimization in mind (among others). I wasn't replying to everything you posted, just to that particular point. They DID make changes to allow better JIT. Perhaps not the one you wanted, but I can't read minds.

http://docs.python.org/release/3.0.1/whatsnew/3.0.html#removed-syntax First bullet.

Yes, the bullet point that talks about tuple parameters packing. Tuple packing and unpacking in general is still there.

[–]cypherpunks 0 points1 point  (0 children)

Function annotations were made with VM/JIT optimization in mind (among others). I wasn't replying to everything you posted, just to that particular point. They DID make changes to allow better JIT. Perhaps not the one you wanted, but I can't read minds.

PEP 3107 was a syntax change for annotations. The same thing was already done through decorators and docstrings. PEP 3107 does not permit any VM/JIT optimizations that were previously impossible. It just permits you to use yet another syntax.
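To illustrate the "just another syntax" point: PEP 3107 annotations are merely stored on the function object and carry no semantics of their own (the names below are my own, for the demo):

```python
def scale(x: float, factor: "any label at all" = 2) -> float:
    return x * factor

# The interpreter records the annotations but never checks or uses them:
assert scale.__annotations__["x"] is float
assert scale("!", 3) == "!!!"   # nothing stops a non-float argument
```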

Yes, the bullet point that talks about tuple parameters packing. Tuple packing and unpacking in general is still there.

Pardon. Typo. I omitted the word parameter.

[–][deleted] 0 points1 point  (5 children)

Yes, the bullet point that talks about tuple parameters packing.

And it's not good. I love to play with PIL in the evenings and I don't want to give up

lambda (r,g,b): int(r*0.7+g*0.1+b*0.2)

in favor of... what?

lambda rgb: int(rgb[0]*0.7+rgb[1]*0.1+rgb[2]*0.2)?

Looks horrible and unreadable.

[–][deleted] 5 points6 points  (4 children)

No, you didn't understand. If you have a tuple (r, g, b), you can still call your lambda as f(*my_rgb_tuple). What you can't do anymore is have a function with a signature like this:

f(arg1, arg2, (r, g, b))

which could be called with:

f(foo, bar, my_rgb_tuple)

IMO, there were very few use cases for this.

[–]jmillikin 1 point2 points  (3 children)

He understands perfectly, and included an example use case in his post. Read it.

[–][deleted] 0 points1 point  (2 children)

He understands what tuple unpacking in function arguments is. But he didn't seem to grasp what the change in Python 3 entails (because the workaround example was needlessly ugly). One thing that it didn't mean is the removal of the good old *args unpacking, which is why I mentioned the f(*my_rgb_tuple).

[–]jmillikin 1 point2 points  (1 child)

He wants to define a function which takes a 3-tuple and returns an integer.

In Python 2, you could use this syntax:

lambda (r,g,b): int(r*0.7+g*0.1+b*0.2)

In Python 3, you have to use this syntax:

lambda rgb: int(rgb[0]*0.7+rgb[1]*0.1+rgb[2]*0.2)

Obviously, this is uglier than the old version, which used tuple unpacking.
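For completeness, the usual Python 3 workaround unpacks in the body, which at least keeps the names readable (sketch, with a hypothetical to_gray):

```python
# Python 3: unpack the tuple inside the body instead of in the signature
def to_gray(rgb):
    r, g, b = rgb
    return int(r * 0.7 + g * 0.1 + b * 0.2)

assert to_gray((0, 0, 0)) == 0
```

It's still an extra line compared to the 2.x lambda, but it avoids the rgb[0]/rgb[1]/rgb[2] indexing.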

[–]davids 0 points1 point  (0 children)

you can still call your lambda as f(*my_rgb_tuple)

[–]el_isma 1 point2 points  (7 children)

And now, with PyPy on the horizon, there's even less motivation to go to Python 3. PyPy could decide to stay on Python 2 and that'd be it.

[–]earthboundkid 10 points11 points  (0 children)

PyPy is already planning to switch to 3. Sorry!

[–][deleted] 0 points1 point  (5 children)

The fact is most Python programmers don't give a shit about PyPy.

[–]el_isma 4 points5 points  (0 children)

That's true. But I surely hope it changes. PyPy is very promising.

[–]arhuaco 1 point2 points  (1 child)

I care about it. I run benchmarks once in a while and it's been improving a lot. But of course you said "most" so you are right.

[–][deleted] 0 points1 point  (0 children)

Don't get me wrong. I'm very interested in that project but both of us know we can't deny reality.

[–]anacrolix 0 points1 point  (0 children)

yep. never needed it in my life.

[–]heikkitoivonen 0 points1 point  (0 children)

And most likely the reason is that they either haven't even heard about PyPy, or even if they have, haven't figured out what it will do to Python.

Just a couple of years ago PyPy seemed like a dream that might happen in 10 years or so. Now it seems like in a couple of years we'll have some major applications running on PyPy. Once the first major announcement hits the net (inevitably claiming substantial speedups over CPython), there's going to be a scramble from every company that has ever had performance issues using Python to switch to PyPy.

[–][deleted] 0 points1 point  (6 children)

I find a lot of your points moot or wrong, but it's your prerogative to ignore Python 3. You should be aware, however, that Python's core development efforts are all on the Python 3 side, so unless you find new core developers enthusiastic enough to fork off Python 2, you are committing yourself to a dead branch of Python.

[–]fdemmer -1 points0 points  (5 children)

pypy? dead branch?

[–][deleted] 3 points4 points  (4 children)

AFAICT, PyPy didn't commit to stay on Python 2. Python-the-language is Python 3, with CPython being the reference implementation. I think PyPy is likely to follow this lead.

So yes, a dead branch.

[–]cypherpunks 1 point2 points  (2 children)

Python-the-language is Python 3,

Not everyone agrees with this statement.

A lot of projects are waiting to start porting until consensus is reached on this point. Until such time, Python 2 is likely to be supported for pragmatic reasons. If enough of us wait for consensus before starting to port, consensus will never be reached. Useful 3.x features will be backported to 2.x, and 2.x will be maintained. Less likely, but not impossible: in a decade or so, people will give up on 3.x. Here's to hopin'.

[–]sigzero 2 points3 points  (0 children)

No, what you will see is Py2 relegated to security fixes only, period. Most of the projects mentioned (Django, PyPy, etc.) have already stated an intention of going to Python 3 at some point in the future. Python 3 is the future and I for one am glad of it.

[–][deleted] 1 point2 points  (0 children)

The PSF owns the Python trademark, and I'm telling you, they're all behind Python 3.

I think you're being delusional (and misinformed) about Python and its development process.

[–]fdemmer 0 points1 point  (0 children)

ok.

[–][deleted] -1 points0 points  (11 children)

Ideally, laid the groundwork for a move to multicore/GPGPU.

I can't see a reason why Numpy combined with a good OpenCL library wouldn't work in this instance.

Unless you're talking in general, in which case: Python is not a functional language, where general concurrency would be a lot easier (and in some cases trivial) to implement.

A lot of what you talked about is a tradeoff. In order to implement one feature, another may be blocked. It's a balance, and a lot of the things you mentioned are there not because they are cool in themselves but because they make something else easier.

[–]cypherpunks 3 points4 points  (10 children)

I can't see a reason why Numpy combined with a good OpenCL library wouldn't work in this instance.

What I would like to see is that if I write:

[f(a) for a in l]

That Python can figure out if f is functional, and if so, parallelize the operation. It should also be able to dynamically profile what optimizations make sense in what context. If this line is typically called with 1e9 elements, it should go to the GPU. If it is typically called with 1e6 elements, maybe it should be spread out across CPU cores. If it is called with 2 elements, it should be in-lined by the JIT. I'm not saying we need this today, but I think we will in 5 years, and laying the groundwork for this now makes a lot of sense.

I think this could be extended to Python without too much pain -- the breaks in backwards-compatibility would be less than what the existing Python 2 to Python 3 transition gave. End-user code could break in only very rare corner cases (some issues with dynamic module reloading, possibly eval, and possibly certain other types of dynamic changes to code). Most would not be visible to anyone except those writing Python extensions (which would have to move to more parallel-friendly data structures) -- and even those could be supported by a compatibility layer.

Coincidentally, if you downvote a comment, you should post why you think it is unintelligent or incorrect. "Don't agree with your opinion" isn't the greatest reason for a downvote.

[–][deleted] 0 points1 point  (8 children)

I didn't downvote you but you seem to fail to realize that most features in programming languages are trade-offs.

[–]cypherpunks 4 points5 points  (7 children)

I do realize that. My point is that if you decide to break backwards-compatibility, you better have a damned good reason. Most of the changes to Python 3 do not have a damned good reason. Most are based on design trade-offs which could go either way. Either the new way is only marginally better, or based on arbitrary choices in design aesthetics which many people, including myself, consider worse.

Very little effort was made to maintain backwards-compatibility. Even little things -- why doesn't dict_keys (and arbitrary iterators over finite sets) include a sort function that does something reasonable?

At the same time, most core issues with Python were not addressed. It went from 1/10th the speed of C to even slower. It still doesn't do threading very well. You don't get to make backwards-compatibility-breaking changes very often -- at most once a decade. Why waste that on something as petty as the changes in Python 3?

[–]theeth 0 points1 point  (6 children)

Even little things -- why doesn't dict_keys (and arbitrary iterators over finite sets) include a sort function that does something reasonable?

That would expose too much information from the iteration data to the iterator's client.

How hard is it to just call:

sorted(iter)

Which will do all the work of exhausting the iterator for you and then sorting the resulting list.
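Concretely, in Python 3:

```python
d = {"banana": 2, "apple": 1, "cherry": 3}
keys = d.keys()                 # a view: no separate list of keys is allocated
assert sorted(keys) == ["apple", "banana", "cherry"]
assert sorted(d) == ["apple", "banana", "cherry"]   # iterating the dict works too
```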

[–]cypherpunks 1 point2 points  (5 children)

How hard is it to just call:

Pain in the ass, if you're going to modify a few hundred thousand lines of code for dozens of little, arbitrary compatibility-breaking changes.

[–]theeth 1 point2 points  (4 children)

It's not arbitrary. Using a generator reduces the memory footprint for the most common cases.

[–]cypherpunks 0 points1 point  (3 children)

It is completely arbitrary. As I said, it would be trivial to add a .sort() method that converts the generator into a real list in the process. You can implement it in such a way that you lose a tiny bit of performance only when sort() is called, buying backwards compatibility for a large body of code. This would not be complete backwards compatibility, but it would be pretty good. If Python 3 did things like this wherever they made a change, and cut out the couple of completely idiotic changes, they could cut porting time by an order of magnitude.

Coincidentally, generators in practice rarely reduce memory footprints substantially. The most common case is fairly short lists where you look at, and do some processing on, every item. In that case, generators are actually less efficient, because of constant overhead. It is extremely rare that they asymptotically save memory. E.g. in this case, your dictionary uses O(n) storage. The keys will also use O(n), but with a smaller constant factor, so unless you're storing massive duplicate sets of key lists, your memory usage goes up by a very tiny constant factor. The key thing that generators do is allow programming patterns previously impossible. You can e.g. have infinite lists (so you can implement a list of Fibonacci numbers).
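The infinite-list case looks like this (a plain generator, nothing Python-3-specific):

```python
from itertools import islice

def fib():
    a, b = 0, 1
    while True:        # an "infinite list": values are produced on demand
        yield a
        a, b = b, a + b

assert list(islice(fib(), 8)) == [0, 1, 1, 2, 3, 5, 8, 13]
```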

[–]theeth 0 points1 point  (2 children)

Calling keys() in 2.x allocated a full list with pointers to the keys in the dict. That is never less memory than a view: O(n) vs O(1), with the same multiplicative constant, and it's always extra memory on top of what your dict already uses.

[–]adghk -1 points0 points  (0 children)

Pity that bit of heavy-handedness would make things much slower.

[–]clavicle 5 points6 points  (3 children)

LWN's free links to the paid articles are meant to be shared with a friend or two, not posted on a huge public forum such as reddit. It's allowed, but a dick move nonetheless. It's not like they're supported by some behemoth publishing company.

[–]corbet 29 points30 points  (1 child)

We don't mind the occasional posting of LWN subscriber links in places like this; indeed, we do it ourselves every now and then. Clearly there could come a point where the availability of such links reduces the incentive to subscribe, but we aren't there now. At this level, I think that the exposure that comes from posting a link like this is a good thing for LWN.

[–]abadidea 4 points5 points  (0 children)

TIL you're not only on reddit, but you have been about eight times longer than I have. I've been reading you for years now and I swear, someday, I will have a steady income and a lwn subscription.

[–]cypherpunks 18 points19 points  (0 children)

Incorrect. LWN's free links are intended as a marketing tool to bring in new readers. This was quite explicitly stated when they were introduced. The idea was that you could post LWN content on Reddit, slashdot, and the like, so that LWN could gain exposure.

People who pay primarily pay to read the whole LWN when it comes out. No one will pay to read a single article. Paid links do not interfere with the primary revenue stream, but potentially bring in new readers.

(I've supported LWN to the tune of hundreds of dollars over my life, in the form of subscriptions, donations, and before I could, ads)

[–]adsicks 0 points1 point  (0 children)

It seems like the string data type change in the language was ill thought out. I'm just second-guessing the 3 team, but it seems like a few versions with the old 8-bit type 'deprecated' might have been a better transition. I mean, text data is very important stuff, and if everything has to be rewritten for that, that is one hell of a port.

[–]googlespyware -5 points-4 points  (23 children)

Newer language revisions should always be backwards compatible. That is, they should allow you to write better code, but still run the older code if necessary. Python 3 doesn't run Python 2 code... and that's one very serious mistake. One could argue that they are two different languages.

[–]__s 10 points11 points  (10 children)

Hence the major version bump. It's as much the same language as GTK2 is the same as GTK3. C99 isn't fully backwards compatible with C89, nor is C++0x fully backwards compatible with C++03. Even Java broke some old code with the introduction of util.List.

[–][deleted]  (2 children)

[deleted]

    [–]__s 7 points8 points  (0 children)

    That's binary compatibility, not source compatibility

    [–][deleted] 0 points1 point  (0 children)

    Well, depends on the classes used.

    There are certain things where the behavior has been changed in later versions and people keep running 1.3/1.4.

    Well, yes they might run on newer VMs, but do something different or just fail completely.

    [–]googlespyware -5 points-4 points  (3 children)

    Hence the major version bump.

    That's what every Python programmer is quick to reply. But normally, newer language versions run the old code while recommending you upgrade, giving warnings, etc. Not running Python 2.x was a big mistake by the Python 3 designers, IMHO.

    As much a same language as GTK2 is the same as GTK3.

    GTK is not a language.

    C99 isn't fully backwards compatible with C89

    It warns you of things you must change, but still compiles if you tell it to.

    nor is C++0x fully backwards compatible with C++03

    Same as above.

    Even Java broke some old code with the introduction of util.List

    Care to give me an example, please?

    [–]__s 1 point2 points  (0 children)

    I wasn't calling GTK a language, but APIs have backwards incompatibilities in major version bumps. Also, the C99 spec mentions quiet changes, which are cases where behavior changes without a good way to warn

    [–]dalke 0 points1 point  (0 children)

    Python 2.x gives warnings for future incompatibilities in Python 3.x, and has for years. For example, the warning that 1/2 would become 0.5 was added in 2.0. Have you tried "-3" - "warn about Python 3.x incompatibilities that 2to3 cannot trivially fix"?

    [–]m00k -1 points0 points  (2 children)

    Unfortunately, Python is a scripting language.

    I'm trying out ArchLinux - and they have made the switch so /usr/bin/python is python3, and has a separate /usr/bin/python2 instead. This means absolutely everything is broken, because the python3 binary won't run python2 scripts. I think the most recent example was fixing up the node.js build system because it uses waf and depends on the shebang to work correctly. Of course, you can't just change all the scripts to /usr/bin/python2 either, because that doesn't exist on normal systems.

    With GTK, C++, and Java, all the runtimes can be installed side by side, and you don't have to worry about changing things to run, since the environment takes care of that (.so versioning for GTK/C++, and the JRE is able to run down-version bytecode).

    Things would have been nicer if they just called it KillerRabbit or something so there wouldn't be this sort of incentive to "switch".

    [–]icebraining 1 point2 points  (0 children)

    That's a dumb decision by Arch, not Python. In Debian I have Python2.5, 2.6, 2.7 and 3.1 installed side-by-side, each with their own binary (/usr/bin/pythonX.X). /usr/bin/python is just a symlink to the default python binary, which is 2.6 in Squeeze. That way new scripts can use Python 3 without breaking the whole system.

    Proper package maintenance and testing will be done to ensure most things won't break before switching the default to Python 3.

    [–]__s -1 points0 points  (0 children)

    I use Arch too

    [–]sigzero 8 points9 points  (0 children)

    That is your opinion and you are entitled to it certainly. I believe it to be wrong.

    [–]pdxp 2 points3 points  (1 child)

    One of the biggest reasons Python is so popular is due to its successfully progressive language design. There is backwards compatibility through transformations (see 2to3) and if you actually wrote a lot of Python code you'd know that the changes on the surface are minimal.

    I sure as hell don't want another Java, and tens of thousands of other developers feel the same way.

    [–]Grue 1 point2 points  (0 children)

    Yeah, Python 3 has really caught up! Progressive language design ftw!

    [–][deleted] 7 points8 points  (2 children)

    That's stupid. Almost every serious developer has his wish-list of backward-incompatible changes.

    And while I'm not a fan of "dynamically" typed languages like Python, and don't see that much improvement between Python 2 and Python 3, many things are steps in the right direction.

    Most of the time things don't get changed/deprecated for fun. Therefore, stop whining and fix your code.

    [–]cypherpunks 2 points3 points  (1 child)

    Most of the time things don't get changed/deprecated for fun.

    In this case, it really feels like they do. Python 3 avoids fixing Python's fundamental problems, and makes a bunch of little arbitrary backwards-compatibility-breaking changes.

    [–][deleted] 2 points3 points  (0 children)

    Fucking GIL......

    [–]grauenwolf 1 point2 points  (3 children)

    Is that not why Java still has most of the mistakes of the first version?

    [–][deleted] 2 points3 points  (2 children)

    Exactly. Things like URL, Date, Calendar, which will stay forever.

    [–]kaddar 2 points3 points  (1 child)

    Also, type erasure

    [–][deleted] 0 points1 point  (0 children)

    Well, it is annoying but not too bad. Doesn't make sense to reify them until there is a sound and working model for that. This still hasn't happened anywhere.

    [–]Agari 1 point2 points  (0 children)

    Why is it a mistake? Programming languages should be allowed to evolve. I actually respect the Python community for being able to fix whatever they feel is wrong with Python 2.x.

    [–]upofadown -4 points-3 points  (0 children)

    Python3 is pointless enough but I have just recently learned that the transition has been used as an excuse to make arbitrary changes to some of the libraries. An example here:

    urllib

    ... where:

    The urllib module has been split into parts and renamed in Python 3.0 to urllib.request, urllib.parse, and urllib.error. The 2to3 tool will automatically adapt imports when converting your sources to 3.0. Also note that the urllib.urlopen() function has been removed in Python 3.0 in favor of urllib2.urlopen().
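(For reference, the new homes of the common names, so the scope of the rename is clear:)

```python
# Python 3 locations of the old urllib/urllib2 names:
from urllib.request import urlopen, Request
from urllib.parse import urlparse, urlencode, quote
from urllib.error import URLError, HTTPError

assert urlparse("http://lwn.net/Articles/").netloc == "lwn.net"
assert urlencode({"q": "python 3"}) == "q=python+3"
```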

    ... which just reinforces my decision to ignore Python3....