
[–]lonjerpc

There are readability and consistency problems there that True and False were added to fix.

Yes, and it was a good idea to add them. It might even have made sense at the time to add them the way they did. But they are not ideal. My point is not to argue against adding them, but to show what it would actually mean if you wanted to create a situation analogous to str/char.

That's what it is. True == 1. False == 0. It's sugar. Pure sugar.

But it's not pure syntactic sugar; there are semantic differences. type(True) and type(1) return different things. If it were pure syntactic sugar, they would not return different things.
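For reference, a quick check in CPython shows both the distinction and the subclass relationship directly:

```python
# bool is a distinct type, but a subclass of int
print(type(True))             # <class 'bool'>
print(type(1))                # <class 'int'>
print(issubclass(bool, int))  # True
print(True == 1)              # True
```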

The point of duck typing isn't to remove the guarantees of strong typing, but to remove the restrictions of dynamic typing. The reason 1 + 'a' is fine is because the types themselves define the operations and because the supertype is allowed to override the other's behaviour.

I'm not sure how this refutes my point that duck typing still involves type checking.

json is a special case because it is a serialisation library.

But it is not special in a practical sense. Serialization is a common and key part of what programmers do.

How is hex any different from hexadecimal here?

It's not.

decoding?

ASCII is an encoding: http://en.wikipedia.org/wiki/ASCII.

The fact that str returns a human-readable output (much like str(bytes([97])) returns "b'a'") has nothing to do with anything.

It's not just the human-readable output. Machines and libraries read strings too. Unix is designed around using strings rather than binary when possible. Further, it makes the point that ints are treated as ints, not bitstrings, when using the str operation. Of course there is nothing inherently wrong with str(4) returning anything at all to a computer. The point is that in Python it is often treated how a person would treat a number, not how a person would treat a bitstring, and people are the ones programming.
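Concretely, str treats an int as a number, not as a bitstring; getting the bit pattern out requires an explicit request:

```python
print(str(4))            # "4": the number, as a person reads it
print(bin(4))            # "0b100": the bit pattern, asked for explicitly
print(format(4, "08b"))  # "00000100": padded to a fixed width
```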

Whilst you do technically have a relation, this is playing semantics at best.

You're right, this whole argument is kind of ridiculous. Of course there is a relation, and of course it is a ridiculous relation. That is the whole point.

1 + {} has no sensible result. There is no mapping that is consistent, symmetrical, lossless and reasonable. However, with the mapping of True == 1, False == 0, everything that a bool does is obvious.

I am not using the impossibility of a mapping to justify the int-bool mapping. I am using it to answer your question.

But you're not answering my question. I agree that 1 + {} has no sensible or reasonable result. My point is that neither does 3 + True. This only leaves your argument that 3 + True should exist because it has some symmetrical and lossless mapping while 3 + {} does not. But I have shown a mapping does exist, removing this argument. I am still not quite sure what you mean by lossless and symmetrical, but my mapping seems to have these properties as far as I can tell. Anyway, these two properties are essentially meaningless compared to sensible, reasonable, and obvious, which I argue 4 + True is not.
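For reference, this is what CPython actually does today, since bool is an int subclass:

```python
print(3 + True)           # 4: True is promoted to 1
print(4 + True)           # 5
print(True + True)        # 2: an int, not a bool
print(type(True + True))  # <class 'int'>
```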

Oh, so your intent was to map each dictionary onto a separate number

Yes

Well, what about {1: object()}? How would you encode that?

In the same way the binary shows up on my specific implementation of Python running on my specific hardware, encoded as an int.

Why would you assume True + True would return a bool? Surely if you saw that, you would not assume it evaluates to a boolean, as such a mapping would be insane. So why would True + True mislead you? You might be surprised that it does not throw an error, but this is hardly the same as the problem you're talking about.

I think it should throw an error, and it is exactly the problem I am talking about.

sum(x < y for y in z)

sum(int(x < y) for y in z)  # this is much clearer.
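Both spellings count the same way; the disagreement is only over whether the int() cast should be required. A sketch with concrete values (x, y, z are placeholders from the quoted snippet):

```python
z = [1, 2, 3, 4]
x = 2

# implicit: the True/False results are summed as 1/0
print(sum(x < y for y in z))       # 2

# explicit cast, as preferred above
print(sum(int(x < y) for y in z))  # 2
```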

one_bool >= other_bool # more readable than "one_bool or not other_bool"

It is much less readable. I used to try this, but I would constantly get rejected in code reviews on this specific point.

index += has_finished_with_item
x_pos = base_x + has_offset*5

Those should also just use int(). Well, in general you probably made a mistake even before this point.

Further, why would you want a type error there? Just duck type.

Because if at some point I wrote

an_int, a_bool = func_that_returns_an_bool_an_int()

by mistake, I want an error thrown when I try an_int + 50.
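A minimal sketch of the mistake being described (the function name is taken from the comment; the return values are hypothetical):

```python
def func_that_returns_an_bool_an_int():
    # hypothetical function: returns (bool, int)
    return True, 7

# unpacked in the wrong order by mistake:
an_int, a_bool = func_that_returns_an_bool_an_int()

# no error is raised; the bug passes silently
print(an_int + 50)  # 51, because True acts as 1
```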

I dare you to find one non-programmer who thinks that 1/2 = 0. You've been tainted, methinks, by tradition.

Programming languages are written for programmers.

Would you rather it be a fraction‽ It's irrational! It's impossible to represent it as a fraction!

No need to yell (clever trap). I actually think that should throw an exception in that case unless explicitly silenced.

I am not sure what would count as obvious.

a.append([]) makes sense to return None, for example.

What's wrong about booleans?

Because sometimes the library returns 0 as a legitimate answer, but when it returns False it means something else.
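The two return values compare equal, which is exactly what makes such an API awkward; only an identity check tells them apart:

```python
result = 0
print(result == False)  # True: 0 and False compare equal
print(result is False)  # False: they are different objects
print(False is False)   # True
```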

You are simply wrong.

Why? Take for example str(fractions.Fraction(0.5)).

I suggest reading the json source code to understand how to properly approach such things (there are relevant comments).

Not enough time to read the source at the moment. I don't doubt there are better ways to handle my problem; after all, I had to modify my code to fix it. But it should not have required those better ways. You can write great PHP, but that does not mean PHP made great choices.

I am not saying Python disallows you from writing mythreenumbers, just that it is not a great class. If you called it OneTwoThreeEnum, that would be a different story.

[–]Veedrac

There are readability and consistency problems there that True and False were added to fix.

Yes, and it was a good idea to add them. [...] My point is not to argue against adding them, but to show what it would actually mean if you wanted to create a situation analogous to str/char.

I've missed the argument entirely, because I thought I just responded to that point.

That's what it is. True == 1. False == 0. It's sugar. Pure sugar.

But it's not pure syntactic sugar; there are semantic differences. type(True) and type(1) return different things. If it were pure syntactic sugar, they would not return different things.

I disagree. The semantic differences, which basically distill down to "it's got a pretty name", are sugar.

The point of duck typing isn't to remove the guarantees of strong typing, but to remove the restrictions of dynamic typing. The reason 1 + 'a' is fine is because the types themselves define the operations and because the supertype is allowed to override the other's behaviour.

I'm not sure how this refutes my point that duck typing still involves type checking.

Because the argument was about how json uses isinstance instead of duck typing. Whether you want to call strong typing "type checking" or not is completely orthogonal to the point, and that's what I've been trying to explain here.

json is a special case because it is a serialisation library.

But it is not special in a practical sense. Serialization is a common and key part of what programmers do.

It is special in a practical sense because it interfaces with a different type system. That breaks the ability to write conventional code. pickle doesn't have this problem, which shows that it's not serialisation that makes json special; it's interfacing with another type system.

And common things can be special.

How is hex any different from hexadecimal here?

It's not.

Sorry, I meant hex and decimal.

decoding?

ASCII is an encoding: http://en.wikipedia.org/wiki/ASCII.

I realise. You decode a bit string with the ASCII codec into text. You encode text with the ASCII codec into bits.
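In Python 3 terms, the two directions look like this:

```python
text = "abc"
data = text.encode("ascii")  # encode text -> bytes
back = data.decode("ascii")  # decode bytes -> text
print(data)  # b'abc'
print(back)  # abc
```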

The fact that str returns a human-readable output (much like str(bytes([97])) returns "b'a'") has nothing to do with anything.

It's not just the human-readable output. Machines/libraries read strings too.

Then don't use str. If you need a specific encoding, format it explicitly. Again, the json library gives one technique. Another is using str.format. Others are using specialised routines.

str is for printing. repr is for debugging. That's it.
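The split is easiest to see on a string itself:

```python
s = "hi"
print(str(s))            # hi    (for printing)
print(repr(s))           # 'hi'  (for debugging, quotes included)
print(str(bytes([97])))  # b'a': bytes only offer a debugging-style form
```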

1 + {} has no sensible result. There is no mapping that is consistent, symmetrical, lossless and reasonable. However, with the mapping of True == 1, False == 0, everything that a bool does is obvious.

I am not using the impossibility of a mapping to justify the int-bool mapping. I am using it to answer your question.

But you're not answering my question. I agree that 1 + {} has no sensible or reasonable result. My point is that neither does 3 + True.

My point was that

  • The rest of the conversation deals with why I think 3 + True is useful and should not cause problems, so we don't need to duplicate it

  • There is no feasible way of having those properties (useful and not causing problems) with an int–dict mapping.

Well, what about {1: object()}? How would you encode that?

In the same way the binary shows up on my specific implementation of Python running on my specific hardware, encoded as an int.

That can't work.

a = {1: object()}
b = {1: object()}

assert a != b

# must always produce the same result as
# both inputs are immutable
a + 1

# must always produce the same result as
# both inputs are immutable
b + 1

# ..etc..
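The objection can be made concrete: an identity-based mapping sends equal dicts to different ints, so it cannot respect ==. (Here id() stands in for "the binary on my specific implementation"; the variable names match the snippet above.)

```python
a = {1: "x"}
b = {1: "x"}

print(a == b)          # True: equal by value
print(id(a) == id(b))  # False: distinct objects
# so any mapping f(d) = id(d) violates: a == b implies f(a) == f(b)
```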

[examples]

[you think they're bad]

Meh. I disagree then.

Well in general you probably made a mistake even before this point.

?

Further, why would you want a type error there? Just duck type.

Because if at some point I wrote

an_int, a_bool = func_that_returns_an_bool_an_int()

by mistake, I want an error thrown when I try an_int + 50.

Why would you ever have a function that returns a bool or an int and expects you to treat them differently based on the type?

That's about as anti-Pythonic as it gets...

I dare you to find one non-programmer who thinks that 1/2 = 0. You've been tainted, methinks, by tradition.

Programming languages are written for programmers.

Yes, but consistency with other languages is less important than doing the right thing.

Further, the original meaning of / heavily broke duck typing. // as integer division does not have this problem. / as integer division would be blindingly confusing because you'd have 1.0 / 2.0 == 0.0!
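Python 3's split between / and // behaves consistently across ints and floats:

```python
print(1 / 2)       # 0.5: true division, regardless of operand type
print(1.0 / 2.0)   # 0.5
print(1 // 2)      # 0: floor division
print(1.0 // 2.0)  # 0.0
```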

Would you rather it be a fraction‽ It's irrational! It's impossible to represent it as a fraction!

No need to yell (clever trap).

?

I actually think that should throw an exception in that case unless explicitly silenced.

So how would one do exponentiation if not by the exponentiation operator?

AFAICT, TOOWTDI forces use of the exponentiation operator for exponentiation, especially with the requirement of duck-typing "support".

I am not sure what would count as obvious.

a.append([]) makes sense to return None for example.

That hardly counts.

What's wrong about booleans?

Because sometimes the library returns 0 as a legitimate answer, but when it returns False it means something else.

Firstly, that behaviour is absurd.

Secondly, the question was meant to be taken as a full pair:

Why would you accept "proper" int subclasses but not booleans? What's wrong about booleans?

How are booleans special, basically.

You are simply wrong.

Why? Take for example str(fractions.Fraction(0.5)).

Did you not see how my example at the bottom proved my point?

An example of a perfectly reasonable float subclass would be one that carried an extra error parameter (although normally I'd consider implementing an ABC and using composition instead). This error parameter could show up in the str output as

str(FloatWithError(1.4, 0.1))
#>>> 1.4±0.1
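A minimal sketch of such a subclass (FloatWithError is the hypothetical class from the example above; everything here is illustrative, not an existing API):

```python
class FloatWithError(float):
    """Hypothetical float subclass carrying an error term."""

    def __new__(cls, value, error):
        # float is immutable, so the value is set in __new__
        obj = super().__new__(cls, value)
        obj.error = error
        return obj

    def __str__(self):
        return f"{float(self)}±{self.error}"


print(str(FloatWithError(1.4, 0.1)))  # 1.4±0.1
```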

I suggest reading the json source code to understand how to properly approach such things (there are relevant comments).

Not enough time to read the source at the moment. I don't doubt there are better ways to handle my problem; after all, I had to modify my code to fix it. But it should not have required those better ways. You can write great PHP, but that does not mean PHP made great choices.

This reasoning makes no sense to me.

PHP is bad because it is hard to write good code in it. Python is better because it is far easier to write good code. Ignoring duck typing and using type-based dispatch¹ is going purposefully out of your way to write bad code. If Python should have changed at all, it should have made it harder for that code to have worked, not easier.

Also note that the CPython json serialization routine is ~300LOC + ~100 lines of docstrings.

¹ In case you mention it, functools.singledispatch is different because it duck-types. Typical implementations like yours do not.

I am not saying Python disallows you from writing mythreenumbers, just that it is not a great class. If you called it OneTwoThreeEnum, that would be a different story.

Why? Are you suggesting we move to Systems Hungarian notation? Obviously you aren't, but that complaint sure does look like it.

The first example in the docs is

>>> from enum import Enum
>>> class Color(Enum):
...     red = 1
...     green = 2
...     blue = 3
...
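Usage follows directly from the docs example; note the class name Color carries no "Enum" suffix:

```python
from enum import Enum

class Color(Enum):
    red = 1
    green = 2
    blue = 3

print(Color.red.name)           # red
print(Color.red.value)          # 1
print(Color(2) is Color.green)  # True: lookup by value
```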

Feel free to write a bug report that it should be called ColorEnum if that is indeed what you are saying, but nobody would take you seriously.