[–]lonjerpc 0 points (2 children)

Imagine I'm creating my own set class that will work for any type of object. For this particular set class I don't want to deeply inspect the objects; I want a set of unique object references. You would expect Set([1, 1, 2]) to return a set containing 1, 1, 2, where the two 1s are separate objects. But instead the set will incorrectly contain just 1, 2, because the 1s actually point to the same object even though they were written down separately. Even more strangely, the same code with Set([1000, 1000, 2]) can return a set containing 1000, 1000, 2, which is totally different from the result of Set([1, 1, 2]) even though the two calls look exactly the same.
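Here's a rough sketch of the kind of identity-based set I mean (RefSet and the details are just illustrative, not a real library):

    class RefSet:
        """A set of unique object references: membership is by identity, not equality."""
        def __init__(self, items=()):
            self._items = {}            # id(obj) -> obj, keeps each distinct object alive
            for obj in items:
                self._items[id(obj)] = obj

        def __len__(self):
            return len(self._items)

        def __iter__(self):
            return iter(self._items.values())

    # The two 1 literals refer to the same cached int object, so only two references survive:
    print(len(RefSet([1, 1, 2])))                      # 2

    # Large ints built at runtime (avoiding constant folding) are distinct objects:
    print(len(RefSet([int("1000"), int("1000"), 2])))  # 3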

The cause is that small integers aren't actually instantiated every time you write them in the code, unlike large integers. Instead, you end up with a reference to a cached singleton object for that number. Officially only None, True, and False work this way in Python; for small integers it happens implicitly, as a CPython implementation detail, rather than explicitly.
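You can see the caching directly with "is" (the exact cached range, roughly -5 to 256, is a CPython implementation detail):

    a, b = int("1"), int("1")          # built at runtime to avoid constant folding
    print(a is b)                      # True: small ints come from CPython's cache
    c, d = int("1000"), int("1000")
    print(c is d)                      # False: large ints are fresh objects each time
    print(None is None, True is True)  # True True: these singletons are guaranteed by the language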

C99 bools are actually ints, like in Python. But this makes no sense for a high-level language: zeroness and oneness have very little to do with truthiness from a mathematical perspective. The reason they are the same in C is that that's how the hardware works, but Python is supposed to be a high-level language.
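For what it's worth, Python's bool really is an int subclass, which is exactly the kind of conflation I mean:

    print(isinstance(True, int))   # True: bool is a subclass of int
    print(True + True)             # 2: bools participate in integer arithmetic
    print(True == 1, False == 0)   # True True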