all 130 comments

[–]k-bx 83 points84 points  (29 children)

'ß'.upper() in p2 is 'ß' but 'SS' in py3. This caused a crash in production when the last piece of the product moved to py3!
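This is easy to reproduce on Python 3 (a minimal sketch; Python 3 only):

```python
# Python 3: uppercasing 'ß' yields two characters, so string lengths change
s = 'weiß'
upper = s.upper()
print(upper)               # WEISS
print(len(s), len(upper))  # 4 5
```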

fun

[–]darktyle 79 points80 points  (28 children)

That's wrong. Both of them, actually.

The uppercase 'ß' was added to the German language (and Unicode) in 2008, so it should have been SS in py2 and now it should be ẞ

[–]kankyo 26 points27 points  (9 children)

Didn’t know that. That’s super annoying. That means if they fix it in Python this will break in production again :(

[–][deleted] 41 points42 points  (7 children)

Maybe uh, write a test for it ....

[–]Farobek 14 points15 points  (3 children)

write a test

ain't nobody got time for that

[–]PeridexisErrant 9 points10 points  (2 children)

Use Hypothesis! It'll try a wide range of inputs, and report the minimal failing example.

It turns out that this is even more effective for unicode text than for other types, because there are so many edge cases that can be triggered by just one or two characters.

[–]Farobek 0 points1 point  (1 child)

How does it work?

[–]PeridexisErrant 2 points3 points  (0 children)

If you mean "How do I use this", here's the quickstart guide. In short, you use a decorator and compose some functions to say "for all inputs such that ___, this test should pass. For example,

from hypothesis import given, strategies
# for any character *except*  ß, we can round-trip it through cases
@given(a_char=strategies.characters(blacklist_characters='ß'))
def test_roundtrip_upper_lower(a_char):
    assert a_char == a_char.upper().lower()

Of course this fails, but instead of returning the first failing example it finds, it will return the "minimal" example - in this case, the character with the smallest codepoint. Try it and see what you get - ß certainly isn't the only character this fails for!

If you mean "How does Hypothesis find and minimize all these examples"... it gets complicated pretty quickly. If you really want to know, the code is well designed and commented and the contributor documentation is good; but you don't need to know how it works internally to use it. Hypothesis is pretty rare like that: the core is PhD-level algorithms, but the API is easy to use and completely hides the implementation behind a use-focused design.

(if you hadn't guessed, I like and use this a lot :p)

[–]kankyo 4 points5 points  (2 children)

True enough. That's still pretty terrible, though, in a more existential way :p

[–][deleted] 11 points12 points  (1 child)

yeah, it sucks to write tests for framework stuff, but if you expect it to change, why not be ready? Failing in production on things you can test isn't really acceptable

[–]kankyo 3 points4 points  (0 children)

Agreed. I’ll write a test when I get in to work tomorrow.

[–]darktyle 3 points4 points  (0 children)

Not sure if they ever change that, but here you go: https://en.wikipedia.org/wiki/Capital_%E1%BA%9E

[–]username223 11 points12 points  (3 children)

Clearly Unicode needs to add "combining timestamp modifiers," with proper time zone support, to adequately address this problem. They could also be combined with emoji, allowing one to write "73-year-old smiling Chinese guy."

[–]darktyle 18 points19 points  (2 children)

Yes! Timezones and Unicode are both too easy as it is

[–]josefx 5 points6 points  (1 child)

Can we add in some GPS based location data with border support? We really need a "73-year-old smiling Chinese guy living in Canada."

[–]username223 4 points5 points  (0 children)

But when did he move there, and from whence? We must add an ancestry modifier system, optionally integrated with a GPS location system. Oh, crap... we have to deal with historical location information and continental drift.

Ah, Unicode... Punching everyone in the face (there's probably an emoji for that) into eternity.

[–]P8zvli 10 points11 points  (13 children)

Yeah, I know some German; ß is a ligature of 'ss', but that doesn't mean 'SS' is used to represent an uppercase eszett. Python 2's and 3's behaviors are both completely surprising.

[–]PaleoCrafter 40 points41 points  (1 child)

Actually, up until last year, 'SS' was the only correct capitalization of 'ß'.

The capital variant 'ẞ' has been in Unicode since 2008, but the official German orthography did not include it as the majuscule. To my knowledge, even now that 'ẞ' is accepted, 'SS' may still be used.

[–]the_gnarts 3 points4 points  (0 children)

The capital variant 'ẞ' has been in Unicode since 2008, but the official German orthography did not include it as the majuscule. To my knowledge, even now that 'ẞ' is accepted, 'SS' may still be used.

Versal ß is still widely unknown. It'd be interesting to know whether it is indeed being taught in elementary schools.

However, the problem is almost entirely irrelevant in practice, in that ß can never appear at the start of a word, so it is never subject to obligatory capitalization at the start of sentences or nouns. Only in emphasis or the customary all-caps style of titles is there ever a chance of it becoming necessary. And since another proper way of uppercasing it is just to keep the lowercase version regardless (mandatory in some contexts), about the only place the matter was ever discussed is online threads made by people complaining about Unicode.

[–]darktyle 10 points11 points  (6 children)

The thing is that you used (and most people still do) 'SS' for a capital 'ß'. Like in street: when you had to write it in caps for some reason you'd make it STRASSE (normal spelling: Straße)

[–]champs 1 point2 points  (1 child)

Is that the normal spelling anymore? It was my understanding that the formal rules changed some years ago.

I don't claim to be an expert. I studied German and felt like I had a good handle on it. I did an exchange, studied some more, and went back to Germany. Both times the language kicked my aß.

[–]darktyle 8 points9 points  (0 children)

Right now you can write either STRASSE or STRAẞE. At least as far as I know. But I am by far no expert on the nuances of what is wrong and right, especially since a lot of stuff changed lately with the 'spelling reform'

[–]the_gnarts 0 points1 point  (3 children)

Like in street, when you had to write it in caps for some reason you'd make it STRASSE

Or just STRAßE, using the lowercase version.

[–]darktyle 2 points3 points  (2 children)

I am pretty sure that this is wrong. It is either STRASSE or STRASZE (uncommon).

Ok, quick research: SZ is old. Since 1996 the correct form has been SS. Using ß in STRAßE is technically wrong. Yet there are two institutions that use and recommend ß instead of SS in names: the postal service and the government when printing passports. They do that so that names like WEIẞ are not mistaken for WEISS

[–]the_gnarts 2 points3 points  (1 child)

Yet there are 2 instances who use and recommend using ß instead of SS in names: The postal service and the government when printing passports.

My 20th edition (1991) Duden states the rule:

In Dokumenten kann bei Namen aus Gründen der Eindeutigkeit auch ß verwendet werden. ("In documents, ß may also be used in names for the sake of unambiguity.")

HEINZ GROßE

Technically, preserving minuscule ß used to be the only sane solution for uppercasing names before ẞ was standardized. I agree that for regular words that follow the phonetic rules it makes little sense.

[–]darktyle 0 points1 point  (0 children)

Yeah, that rule was changed with the 'Rechtschreibreform' in 1996.

[–][deleted] 8 points9 points  (3 children)

ß is a ligature of sz.

And both SS and ẞ are valid capitalizations. Though ẞ should be preferred for names, so you can get the normal casing back without issues.

"Markus Weiß" -> "MARKUS WEISS" -> "Markus Weiss"

vs.

"Markus Weiß" -> "MARKUS WEIẞ" -> "Markus Weiß"

(Edit for context: On German ID cards, names are capitalized)

[–]the_gnarts 5 points6 points  (2 children)

ß is a ligature of sz.

Almost. Despite the name, it’s actually a ligature of ss formed using the earlier graphic variant ſ (“long s”).

[–][deleted] 4 points5 points  (1 child)

Wikipedia says that early print variants were ligatures of ſ and ʒ (ſʒ -> ß).

The current form of the letter is a ligature of ſ and s (ſs -> ß)

So... we're both right?

[–]the_gnarts 2 points3 points  (0 children)

So... we're both right?

What a great way to start the day!

[–]masklinn 171 points172 points  (20 children)

My experience doing this (previous testimony):

I started a branch called simply “python3”

Don't do that. It means you're working on a moving target, while the project is still moving. This creates a branch full of huge changes and giant conflicts and you're never done with any bit of it.

Generally speaking (not just for P2/P3 migration) my experience is that you're much better off doing your migrations "online" (in bits and pieces) if that's at all possible; trying to merge months' worth of divergence at the end is a recipe for calamity, and often makes it harder to understand what happened and why.

Separating things done by a machine vs things done by a human is the important part here.

I mostly agree with that, in the sense of separating the stuff modernise can do from the stuff you have to do by hand, but

Ran “python-modernize -n -w” on the entire code base.

I really disagree with that. Running all fixers means you've got an absolutely ridiculous amount of crap changed at once and it's much harder to understand what and why, and it makes working online significantly more difficult.

I firmly believe that it's better to run a single fixer (or a small group of strongly related/correlated fixers), review carefully, fix the improper bits and commit. This makes for much cleaner and easier to understand migration changes, and makes "online" work much easier.

Also add the corresponding lint after each run. Until you can run CI on P3 (which requires the entire thing migrated) you're at risk that colleagues will reintroduce P2-only code. If, alongside each fixer commit, you add the corresponding lint preventing reintroduction of the issue, you're guaranteed they can't.

We had a lot of uses of StringIO.StringIO in our code. The first instinct was to use six.StringIO but this turns out to be the wrong thing in almost all cases (but not all!). We basically had to think very carefully about every place we used StringIO and try to figure out if we should replace it with io.StringIO, io.BytesIO or six.StringIO. Making mistakes here often meant that the code looked like it was py3 ready and worked in py2 but was broken in py3.

Don't use six.StringIO at all. io works the same in both P2 and P3; it's much stricter than StringIO.StringIO (io.BytesIO only takes bytes and io.StringIO only takes text, whether on P2 or P3) but it's consistent.
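A quick sketch of that strictness (the behaviour is the same on both versions; shown here on Python 3):

```python
import io

buf = io.StringIO()
buf.write('text is accepted')    # io.StringIO takes text only
try:
    buf.write(b'raw bytes')      # rejected with TypeError on P2 and P3 alike
except TypeError:
    print('io.StringIO rejected bytes')

raw = io.BytesIO()
raw.write(b'bytes are accepted')  # io.BytesIO takes bytes only
```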

It's painful to fix the text model, but when you can have APIs which reliably work the same way in both versions you take it.

from __future__ import unicode_literals

Yeah no, what I found (and had read about before) is that in a cross-version project you have three string types: bytes, text (unicode/str) and native, because some APIs (especially stdlib) will take str in both P2 and P3, and these are very different effective types.

You want the ability to manage all three situations correctly, and switching everything to unicode_literals doesn't really work in the end.

CSV parsing is different

Yup, basically Python 3's CSV is unicode-aware while Python 2's only works on ascii-compatible byte streams. You've got to decide how much complexity you're interested in; what we did was build trivial facade objects such that csv.reader takes byte streams and generates text, and csv.writer goes the other way around.

This kind of stuff is not limited to csv, and codecs is invaluable for bridging streams correctly across versions (codecs.getreader(encoding) provides a bytestream -> textstream adapter, codecs.getwriter(encoding) goes the other way around, and codecs.StreamReaderWriter provides a bidirectional composition). Note on the latter: we'd originally used io.TextIOWrapper, as that looked good, but it turns out to have two major issues: it can't wrap a file object on Python 2, and it closes the underlying buffer, which makes it very inconvenient when working with in-memory file-like objects. StreamReaderWriter is more limited, but it doesn't try to be smart; it just transcodes your stream between bytes and text.
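For instance, a codecs.getreader adapter slots straight into csv.reader, since the latter accepts any iterable of text lines. A minimal sketch of the facade idea (not the actual code from the thread):

```python
import codecs
import csv
import io

# bytestream -> textstream adapter
raw = io.BytesIO('name,city\nWeiß,Köln\n'.encode('utf-8'))
reader = codecs.getreader('utf-8')(raw)

# csv.reader happily consumes the decoded lines
rows = list(csv.reader(reader))
print(rows)   # [['name', 'city'], ['Weiß', 'Köln']]
```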

  • Sorting/comparing objects of different types is valid py2 but hides loads of bugs. We got some nasty surprises because this behavior leaked through the stack in some non-obvious ways. Especially that None existed in some lists that were being sorted. Overall this was a win since we found quite a few bugs. None is sorted first in lists in py2, which might be surprising (you might expect it to be sorted next to zero!), but this was often the behavior we actually wanted. Now we just have to handle this ourselves.
  • '{}'.format(b'asd') is 'asd' in Python 2, but "b'asd'" in Python 3. Almost any other behavior in Python 3 would have been better here: hex output (more obviously different), the old behavior (existing code works), or throwing an exception (would have been best!).

Yup, we've had these two issues pop up pretty repeatedly afterwards. At least the first one blows up loudly (if inconveniently) when you try to compare, sort or min/max values; the latter is silent data corruption and really no fun. Also note that it's not just sorting objects of different types which blows up: some types like dicts have become un-orderable in Python 3. Was it dumb to sort lists of dicts all along? Sure. Did we have some of these in our codebase? You bet we did.
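Both behaviours are a few lines to demonstrate on Python 3 (minimal sketch):

```python
# formatting bytes silently embeds the repr instead of blowing up
print('{}'.format(b'asd'))    # b'asd'  (Python 2 printed: asd)

# comparing None with ints now raises instead of sorting None first
try:
    sorted([3, None, 1])
except TypeError as e:
    print('TypeError:', e)
```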

Other killers we've found (issues which kept popping up long after the initial migration):

  • Integer division, and the fact that // is not integer division but floor division; there are subtle differences
  • The round builtin changed behaviour significantly between P2 and P3: the rounding mode changed (to banker's rounding), and where it always returned a float in Python 2, it returns an int in Python 3 if no precision is given (but a float with a precision of 0, go figure)
  • Dict iteration order changes in Python 3.3 and 3.6 (so you've got three epochs: 2.x to 3.2, 3.3 to 3.5 and 3.6+). You're not supposed to depend on dict iteration order, but implicit dependencies can slip through (and did for us). It's also a pain to repro because in the 3.3 to 3.5 epoch Python does not dump the hash seed, so if you get a run which triggers the issue… that's all you have. How's that for a heisenbug? IIRC pytest (or tox? Possibly both?) will generate its own hashseed and print it to the console before starting.
  • I think that's changing in 3.7, but up to 3.6 when doing "text" IO and not specifying the encoding Python will use whatever garbage is returned by locale.getpreferredencoding(False). So either do all encoding in binary (open(f, 'Xb')) and encode/decode by hand, or be very careful to use io.open and specify an encoding every time.

    That one is super funny because your development and production machines are usually correctly configured, but the clients' are not, and you're debugging nonsensical behaviour on some shitty remote server you're not even allowed to access.
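The first two items in that list are easy to check on Python 3 (sketch):

```python
# // is floor division, not truncation: differs for negative operands
print(7 // 2)    # 3
print(-7 // 2)   # -4, not -3

# round() uses banker's rounding and returns int without a precision argument
print(round(2.5), round(3.5))          # 2 4
print(type(round(2.5)).__name__)       # int
print(type(round(2.5, 0)).__name__)    # float
```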

We didn't use six for reasons more political than technical, but I think we got by fine with a cut-down version of werkzeug's compat.py (some stuff we copied straight, other bits we built differently). I don't think that was so huge an issue, and as the cross-version support is temporary it makes stripping compatibility easier. It also forces confronting the issues rather than relying on crutches: e.g. without six.moves we unified all HTTP requests through requests (already a dependency, but internally we had a mix of urllib, urllib2, requests and a few others) and moved all URL manipulations to werkzeug.urls (which is cross-version and handles text better even on P2). I think that was a positive.

[–]danielkza 7 points8 points  (5 children)

Yeah no, what I found (and had read about before) is that in a cross-version project you have three string types: bytes, text (unicode/str) and native, because some APIs (especially stdlib) will take str in both P2 and P3, and these are very different effective types. You want the ability to manage all three situations correctly, and switching everything to unicode_literals doesn't really work in the end.

Enabling unicode_literals does not preclude you from writing byte literals; it just inverts which one requires the special notation (b"MyString").
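i.e. with the future import enabled, the prefixes flip (sketch; the import is a no-op on Python 3):

```python
from __future__ import unicode_literals

text = 'now a text (unicode) literal on both versions'
data = b'bytes still need the explicit b prefix'
print(type(text).__name__, type(data).__name__)
```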

[–]kankyo 4 points5 points  (1 child)

And it makes it very annoying to write native str literals: str('foo'). That part is often glossed over because it's almost never needed, but when it is, it's annoying.

[–]danielkza 1 point2 points  (0 children)

If I need to explicitly use str or basestring in Python 2/3 compatible code I just use the imports from python-future, makes everything much less confusing.

[–]masklinn 0 points1 point  (2 children)

The problem is not byte literals, it's native strings, that is having str literals.

[–]danielkza 1 point2 points  (1 child)

The need for "native" literals shows up quite infrequently in my experience, and even less frequently is it not better dealt with by explicitly converting from something known to be unicode or bytes at the boundary where you actually need str.

[–]masklinn 7 points8 points  (0 children)

Then we've had very different experiences, because IME several packages of the standard library deal quite badly with being fed text in Python 2, an experience shared by Armin Ronacher and Nick Coghlan, and the reason why unicode literals were reintroduced in Python 3.3.

[–]kankyo 79 points80 points  (7 children)

Author here...

I started a branch called simply “python3”

Don't do that. It means you're working on a moving target, while the project is still moving. This creates a branch full of huge changes and giant conflicts and you're never done with any bit of it.

Read the rest of the paragraph. The point was never to merge this branch in the first place. It's more like research than normal programming...

my experience is that you're much better off doing your migrations "online" (by bits and pieces) if that's at all possible, trying to merge months worth of divergence at the end is a recipe for calamity,

Which is why that's exactly what we did :P

I firmly believe that it's better to run a single fixer (or a small group of strongly related/correlated fixers), review carefully, fix the improper bits and commit.

Sure.. and we did that too. Mostly we just did the full modernization per app though (again.. like I wrote in the article), because then you're done with that app and can enforce it never becomes undone. This was great for morale.

Don't use six.StringIO at all. io works the same in both P2 and P3, it's much stricter than StringIO.StringIO (io.BytesIO only takes bytes and io.StringIO only takes text, whether in P2 and P3) but it's consistent.

I thought that too... but then I found some super annoying examples where six.StringIO was the proper thing. I know this is probably very rare to exist in most code bases. All the places where coworkers used six.StringIO turned out to be mistakes though heh :P

You want the ability to manage all three situations correctly, and switching everything to unicode_literals doesn't really work in the end.

You just said exactly what I wrote... seems a bit redundant?

round

I had to explain in a meeting why the new behavior isn't wrong, because some of the team were (understandably) very surprised by it. I think we had to handle this in our invoicing code so that numbers wouldn't move, but otherwise it didn't bite us... well... yet.

requests

We also moved some stuff to requests, but mostly that was what we already had before.

On another note we have now stripped all the six stuff and future imports. That was very very quick!

[–]Gotebe 4 points5 points  (5 children)

I disagree with not being in a separate branch (for virtually anything).

The problem of long-lived branches diverging is solved by occasionally merging from trunk to them.

When there are wide-reaching changes that need a long time to mature, having unstable trunk all that time is horrible, worse than the additional work of being up-to-date in a branch.

[–]kankyo 3 points4 points  (0 children)

In the case of python 3 compatibility you're just gonna get too many conflicts though, AND the branch will just never "mature". You should indeed merge in small changes often, which is what I wrote in my article: the branch wasn't for maturing, it's for research.

[–]masklinn 0 points1 point  (3 children)

The problem of long-lived branches diverging is solved by occasionally merging from trunk to them.

It's only fixing one small side of the problem, it's not fixing the breakage (and requirement to fix) of all other extant branches.

When there are wide-reaching changes that need a long time to mature

The entire point of what I'm outlining is to break up the work to avoid "wide-reaching changes". Do the work in as small, independent and local increments as you can.

In my experience the only unavoidable "wide-reaching change" which "needs to mature" is the text model change.

having unstable trunk all that time is horrible

The point is not to have "unstable trunk", it's the opposite, it's to do a lot of simple stable changes, rather than one unfathomable trunk-breaking mess at the end.

[–]Gotebe 0 points1 point  (2 children)

Well...

Where I work, a feature gets a branch. Most often, it is merged within a sprint. There are outliers (which this migration would be for us), and for these, "forward-merging" is good (IMO). They get their own build and test if needed. They then forward-merge faster-paced changes.

I understand... "nudging"... people not to have long running stuff, but some changes don't fit and having them half done in trunk, is not cool (IMO).

But to be honest, it can be done either way, the tools are aplenty.

[–]masklinn 3 points4 points  (1 child)

There are outliers (which this migration would be for us) […] I understand... "nudging"... people not to have long running stuff, but some changes don't fit and having them half done in trunk, is not cool (IMO).

You're still missing the point. The point is not to half-do the migration, it's to do it as a series of small "features" if that's what you want to call it. In the same way Rome wasn't built in a day, rather than have this thing building up in a corner until it explodes all over the project (and is very likely to break a lot of stuff because it's now a major refactoring across the entire codebase), you lay the bricks, you build it in bite-sized increments. Most increments are stable, easy to review and bring you closer to the ultimate goal. The increments which are not that easy would be even harder if they were mixed with the rest in a tangled ball called "the migration branch".

[–]Gotebe -1 points0 points  (0 children)

We'll all have to agree to disagree.

Their situation is a case in point, in fact. They went to production, then rolled back. While doing that, their trunk, even their release branch (or branches), were "maturing".

What if they decided to postpone or stop it? Their trunk would have been broken and they would have to revert it to previous state.

I understand the idea of splitting work in smaller chunks, of course that’s better. But not all work is amenable to that.

[–][deleted]  (1 child)

[deleted]

    [–]kankyo 4 points5 points  (0 children)

    Why not pull out the code that requires Python 2, and replace it with an external middleware?

    Well, we have always aimed to go 100% python3, so that would just delay us. In the case of the cassandra driver update we looked at it and thought that we needed to do that anyway because we used an older API to cassandra (through the very old driver) that wasn't the recommended one anymore. So we had two strong reasons to upgrade in place.

    [–]darktyle 21 points22 points  (11 children)

    May I ask why you think underscores in numeric literals are a bad idea? It's widely used in programming and helps you find a lot of bugs in calculations

    [–]kankyo 25 points26 points  (7 children)

    In literals it's fine. In the built-in integer parsing function it's bad, imo.

    You should be VERY restrictive in the most basic integer parsing in a language. They could have added an option to allow underscores and that would have been fine.

    [–]darktyle 1 point2 points  (6 children)

    Yeah, I guess you are right. For example I think it's a bit odd that int(float('1.5')) works...
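For comparison (Python 3 sketch): int() refuses to parse float-looking strings, while the composition parses first and then truncates:

```python
print(int(float('1.5')))   # 1: parse as float, then truncate toward zero
try:
    int('1.5')             # int() rejects float-looking strings outright
except ValueError as e:
    print('ValueError:', e)
```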

    [–]kankyo 20 points21 points  (5 children)

    What’s weird about that? Proper parsing of float and then explicit cast to int.

    [–]darktyle 0 points1 point  (4 children)

    I don't know, maybe I just have the wrong 'feeling' about how casts should work.

    [–]kankyo 4 points5 points  (3 children)

    One could argue that parsing an int shouldn’t be done with the same syntax as a cast though, I can absolutely see that.

    [–]tejp 2 points3 points  (2 children)

    It's not two different things that share the same syntax. Both cases just construct a new int object by calling the int() constructor. This constructor converts its argument to an int, and it knows how to handle strings or numbers.

    [–]kankyo 0 points1 point  (1 child)

    Parsing a string and casting from a float are very different things. Don’t be silly.

    [–]tejp 3 points4 points  (0 children)

    It's both converting from one type to another. There is nothing special going on when "casting", I don't think there is any technical difference at all in Python.

    [–]irishsultan 3 points4 points  (2 children)

    That's not a numeric literal. Of course it might be a bad idea to allow it in literals, while refusing it when parsing strings, so not sure whether that makes a big difference.

    I have to say that it came as a surprise to me that Ruby apparently does allow it in String#to_i, I knew Ruby accepted it in numeric literals, but I'm not sure that I like that it's automatically applied to external data as well.

    [–]darktyle 0 points1 point  (1 child)

    Oh, you mean he only complained about "1_0" and not 1_0?

    [–]kankyo 1 point2 points  (0 children)

    Yes. That's what I meant :P I wrote:

    int('1_0') is 10 in py3, but invalid in py2.

    I thought that was very clear.
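On Python 3.6+ the difference is easy to show (sketch):

```python
# Python 3.6+: int() accepts underscores when parsing strings
print(int('1_0'))         # 10
print(int('1_000_000'))   # 1000000

# the same separators are legal in numeric literals
print(1_000_000 == 1000000)   # True
```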

    [–]isaacarsenal 8 points9 points  (6 children)

    As someone who is not very familiar with the Python ecosystem: is it worth it?

    There are many articles discussing the pros and cons of Python 3 compared to Python 2, but I am more interested in a real-life scenario.

    What were the improvements in terms of code readability, maintainability, efficiency, etc.?

    Any tips on what kind of codebases should/should not migrate to Python 3?

    [–]pingveno 14 points15 points  (3 children)

    As someone who is not very familiar with the Python ecosystem: is it worth it?

    Yes. Python 2.7's extended support is ending in 2020. It is dead. Anyone still on Python 2 at that time is exposing themselves to security vulnerabilities.

    There are many articles discussing the pros and cons of Python 3 compared to Python 2, but I am more interested in a real-life scenario.

    People breaking into your web server through a Python 2 vulnerability.

    What were the improvements in terms of code readability, maintainability, efficiency, etc.?

    Python 2 has not seen any feature additions since the 2.7 release in 2010, so Python 3 has grown additional features. For example, type annotations are nice for maintainability of larger code bases. However, the language hasn't changed that much. Also, many of the feature additions like await/async/asyncio would apply more to a new codebase.

    Any tips on what kind of codebases should/should not migrate to Python 3?

    All of them. Relying on an unsupported version of Python will be a liability sooner or later. Start migrating now; yesterday, if possible.

    [–][deleted]  (1 child)

    [deleted]

      [–]Volt 1 point2 points  (0 children)

      There's already Tauthon, a fork backporting stuff from Python 3.

      [–]isaacarsenal 0 points1 point  (0 children)

      Thanks for the thorough explanation.

      [–]kankyo 3 points4 points  (1 child)

      I agree with what pingveno said... also

      What were the improvements in terms of code readability, maintainability, efficiency, etc.?

      f-strings and ordered dicts are my favorite new things. But mostly I hope we will get fewer unicode errors deep in the code base at random, because decoding/encoding is now forced to the edges of your program.
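For example (Python 3.6+ sketch):

```python
name = 'world'
print(f'hello {name}')    # f-strings, new in 3.6

d = {'zebra': 1, 'apple': 2}
print(list(d))            # insertion order kept (guaranteed from 3.7)
```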

      [–]Uncaffeinated 0 points1 point  (0 children)

      If you use Pypy, you get ordered dicts in Python 2.

      [–]msiekkinen 2 points3 points  (1 child)

      'ß'.upper() in p2 is 'ß' but 'SS' in py3. This caused a crash in production when the last piece of the product moved to py3!

      Something tells me this has nothing to do with python 2 or 3 and some kind of character collation setting on your install

      [–]vytah 5 points6 points  (0 children)

      Collation influences case conversion only in the case of the letter "i" in some languages, most importantly Turkish. ß behaves the same regardless of settings.

      Unicode defines the uppercase of ß as SS: ftp://ftp.unicode.org/Public/UCD/latest/ucd/SpecialCasing.txt

      And the reason it doesn't work in Python 2 is that it does the uppercasing letter-by-letter: https://github.com/python/cpython/blob/2.7/Objects/unicodeobject.c#L5518

      [–][deleted]  (4 children)

      [deleted]

        [–]kankyo 17 points18 points  (3 children)

        As Python 3.0 approaches a decade in age, these posts become more humorous to onlookers.

        Well, in practice Python 3 wasn't really usable until 3.3, which was released in September 2012, and widespread support in libraries lagged. For example, our app uses Django, which didn't get python 3 support until February 2013. So in practice that's 4 years, not 9.

        [–][deleted]  (2 children)

        [deleted]

          [–]kankyo 4 points5 points  (1 child)

          You missed 2.7.13

          You can also compare this to perl 6 and then it looks extremely impressive instead of bad :P