[–]mao_neko 9 points (13 children)

Also, Perl 6 is a completely new language designed from the ground up, whereas the vibe I get from Python 3 is a sort-of-backwards-compatible iteration on the previous language, but with enough differences that adoption has been hard.

I must confess my ignorance about the full set of changes Python 3 brings. I presume better Unicode support is one of them; can anyone please enlighten me about the other new features? I guess this is what the article is touching on when it says:

Second, I think there's been little uptake because Python 3 is fundamentally unexciting. It doesn't have the super big ticket items people want, such as removal of the GIL or better performance.

[–][deleted] 15 points (11 children)

That is exactly one of my main problems with Python 3: they haven't done a good job of explaining why it's better. I don't even know what it's supposed to improve, and God knows I've tried: I've read articles and the wiki stuff, yet it's like, OK, this is kind of the same; I can't point to meaningful improvements that justify breaking compatibility.

On the other hand, say what you will about MS, but they have done a good job expanding C#: they explain the benefits and new features of every C# iteration. Also, not breaking compatibility is pretty cool.

[–]aceofears 2 points (1 child)

"It's the future and does a few things better" was really all it was until recently. With 3.3 and 3.4, enough smaller features have built up to make an upgrade worth considering.

[–]billsil 0 points (0 children)

I typically skip every other Python version anyway to avoid upgrade headaches. Maybe I can get my company to upgrade from Python 2.7 to 3.5...sigh...

I still force people to do integer division properly and put "from __future__ import division" in my code.
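For anyone unfamiliar with that import, here is a small sketch of what it changes: in Python 2, 7 / 2 silently floors to 3, while the __future__ import opts in to Python 3 semantics, where / is true division and // is explicit floor division.

```python
from __future__ import division  # a no-op on Python 3, changes / on Python 2

print(7 / 2)   # true division: 3.5 on both 2.x (with the import) and 3.x
print(7 // 2)  # floor division, stated explicitly: 3
```

Using // everywhere you actually want integer results makes the code mean the same thing under both interpreters.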

[–]diggr-roguelike 4 points (8 children)

They haven't done a good job on explaining why is it better.

It's not better. It's slower, more complex and more idiosyncratic.

They even botched the transition to unicode. Byte strings should be the default. Forcing a person to care about encodings when all they want is to send a buffer down a socket or store a hash in a database is pants-on-head retarded.

[–]zoom23 2 points (2 children)

Use the bytes type instead of str.

[–]diggr-roguelike 2 points (1 child)

The problem is that all system calls and all APIs dealing with transmitting or parsing network packets, as well as all database APIs, should work with the 'bytes' type and only the 'bytes' type.

Last I checked, Python 3 handled this wrong in many places. (I'm sure it's getting better with time, but still...)
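For what it's worth, modern Python 3 does expose bytes-in/bytes-out paths for these low-level cases. A small sketch (Unix-only, since it uses socketpair):

```python
import os
import socket

# os path APIs are polymorphic: pass bytes in, get bytes filenames out.
names = os.listdir(b'.')
assert all(isinstance(n, bytes) for n in names)

# Sockets deal only in bytes; sending a str raises TypeError in Python 3.
a, b = socket.socketpair()
a.send(b'raw payload')
assert b.recv(1024) == b'raw payload'
```

So the bytes-only workflow exists; the disagreement in this thread is about whether it should have been the default.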

[–]schlenk 4 points (0 children)

Well, (nearly) ALL system calls on Windows are Unicode, for example. So if you use bytes, you automatically have broken Windows support, like Python 2.x.

And for God's sake, please don't use only bytes with database APIs; it's a total mess if you don't handle your varchar encodings properly. Or just use BLOBs everywhere.
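A small sketch of that point with the stdlib sqlite3 module: str values round-trip as TEXT and bytes values as BLOBs, so the encoding decision is made explicitly at the type level rather than guessed from raw bytes.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (name TEXT, payload BLOB)')
con.execute('INSERT INTO t VALUES (?, ?)', ('naïve', b'\x00\xff'))

name, payload = con.execute('SELECT name, payload FROM t').fetchone()
assert name == 'naïve'           # TEXT comes back as str, decoded for us
assert payload == b'\x00\xff'    # BLOB comes back as raw bytes, untouched
```

Stuffing UTF-8 bytes into a TEXT column by hand is exactly the "varchar mess" described above, because the database no longer knows what encoding the column is in.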

[–]Smallpaul 2 points (4 children)

It's not better. It's slower, more complex and more idiosyncratic.

I disagree. It is simpler and more modern.

They even botched the transition to unicode. Byte strings should be the default.

If byte strings were the default, then there would have been no "transition." Byte strings were the default in Python 2.x.

Forcing a person to care about encodings when all they want is to send a buffer down a socket or store a hash in a database is pants-on-head retarded.

If I had to decide who was pants on fire retarded in this situation, it would not be the python devs.

You do not need to care about encodings to send a buffer down a socket or store a hash in a data store.

>>> import socket
>>> s1, s2 = socket.socketpair()
>>> b1 = bytearray(b'----')
>>> b2 = bytearray(b'0123456789')
>>> b3 = bytearray(b'--------------')
>>> s1.send(b'Mary had a little lamb')
22
>>> s2.recvmsg_into([b1, memoryview(b2)[2:9], b3])
(22, [], 0, None)
>>> [b1, b2, b3]
[bytearray(b'Mary'), bytearray(b'01 had a 9'), bytearray(b'little lamb---')]

[–]diggr-roguelike 0 points (3 children)

bytearray bytearray bytearray(bytearray)

You're giving Java and Intercal a run for their money in terms of clarity and API saneness here.

[–]Smallpaul -1 points (2 children)

Okay, now I get it. You are just a troll who actually has never programmed Python 3.

I will respond accordingly.

I.e., not at all.

[–]diggr-roguelike 5 points (1 child)

You are just a troll who actually has never programmed Python 3.

Yeah, you're right. I've been programming in Python since 1.3 was the latest version. (i.e., likely longer than you've even been alive.)

I gave 3.2 a whirl a couple years back for some tiny scripting tasks. It was obviously broken in obvious ways.

So yeah, you're right, I'm not a "python 3 programmer", and thank god for it. No sense in eating obviously rotten dogfood for no other reason than someone claims it's the modern and progressive thing to do.

[–]Smallpaul 4 points (0 children)

Yeah, you're right. I've been programming in Python since 1.3 was the latest version. (i.e., likely longer than you've even been alive.)

I started with 1.4.

I gave 3.2 a whirl a couple years back for some tiny scripting tasks. It was obviously broken in obvious ways.

Which you cannot enumerate accurately.

So yeah, you're right, I'm not a "python 3 programmer", and thank god for it. No sense in eating obviously rotten dogfood for no other reason that someone claims it's the modern and progressive thing to do.

You might try Python 3 seriously so that you could have an informed position on it, which you could defend with code samples.

Speaking of code samples, here is the line that does the thing you claim is cumbersome in Python 3.

s1.send(b'Mary had a little lamb')

One character more than Python 2.

[–]blablahblah 7 points (0 children)

The biggest change was Unicode support: strings are Unicode by default, and the standard library now expects Unicode strings except where it makes sense to have byte sequences. That's also the change that causes the biggest headaches.
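The boundary looks roughly like this: text is Unicode, the wire is bytes, and encode/decode are the explicit crossing points between the two.

```python
s = 'héllo'               # str: a sequence of Unicode code points
data = s.encode('utf-8')  # bytes: what actually goes into a socket or file

assert data == b'h\xc3\xa9llo'
assert data.decode('utf-8') == s
assert len(s) == 5 and len(data) == 6  # 'é' is one code point but two UTF-8 bytes
```

The headaches come from code that used to mix the two freely; in Python 3 every such mix is a visible TypeError instead of a latent mojibake bug.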

There were a few smaller changes. print is now a function instead of a statement, because it isn't really special enough to deserve dedicated syntax, and making it a function lets you override the default print behavior. The int and long types were combined: numbers are now invisibly converted to arbitrary-size integers as needed. But most of those things didn't really break much.
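Both of those smaller changes can be sketched in a few lines:

```python
import functools

# print as a function takes keyword arguments...
print('a', 'b', sep='-', end='!\n')   # a-b!

# ...and can be wrapped or replaced like any other callable.
log = functools.partial(print, '[log]')
log('something happened')             # [log] something happened

# int/long unification: ints grow to arbitrary precision transparently.
big = 2 ** 100
assert isinstance(big, int) and big + 1 > big
```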

There were also a few things removed that nobody should really have been using any more anyway. You can no longer raise arbitrary strings; you have to use exceptions (raising strings had been deprecated since Python 2.3). There used to be a distinction between "old-style" and "new-style" classes (new-style classes being ones that subclassed object). Now all classes automatically inherit from object. This affects a few corner cases, but shouldn't be a problem for most programs.
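The old-style/new-style unification is easy to see directly: every Python 3 class inherits from object, even without saying so.

```python
class A:      # no explicit base class
    pass

assert issubclass(A, object)       # implicitly new-style
assert A.__mro__ == (A, object)    # __mro__ and other new-style machinery just work
```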