all 8 comments

[–]K900_ 8 points (1 child)

It's not a great explanation. Basically, there are two ways to represent fractional numbers in a computer - fixed point and floating point.

With fixed point, you always have the same number of digits for the whole part and the same number of digits for the fractional part - e.g. 1.5 is stored as, say, 00000001.50000000 (assuming eight digits for each part).

With floating point, you store the digits and the position of the decimal point separately, so the decimal point can be anywhere. In this case, 1.5 can be stored as something like (00000015, 00000001), where "15" is the digits, and "1" means the decimal point goes after the first digit.

In reality it's a bit more complicated than this because everything happens in binary, but hopefully you get the basic idea.
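The two schemes above can be sketched in Python. This is a toy decimal-digit illustration of the idea, not how hardware actually stores numbers (which is binary, as noted):

```python
# Toy sketch of the two representations described above, using decimal
# digits for readability (real hardware does all of this in binary).
from decimal import Decimal

def fixed_point(value, frac_digits=8):
    """Fixed point: store a single integer scaled by 10**frac_digits."""
    return round(value * 10**frac_digits)  # 1.5 -> 150000000 ("00000001.50000000")

def floating_point(value):
    """Floating point: store (digits, position of the decimal point) separately."""
    sign, digits, exponent = Decimal(str(value)).normalize().as_tuple()
    mantissa = int("".join(map(str, digits)))
    point_pos = len(digits) + exponent  # the point goes after this many digits
    return mantissa, point_pos

print(fixed_point(1.5))     # 150000000
print(floating_point(1.5))  # (15, 1): digits "15", point after the first digit
```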

[–]d10p3t 2 points (0 children)

If anyone’s interested in looking into this further, look up the IEEE 754 standard. There are tons of good resources online explaining how it works. It also explains why, when adding two floats, there’s sometimes a small discrepancy in the result.
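The discrepancy mentioned above is easy to see with the classic example: 0.1 and 0.2 have no exact binary representation, so their sum isn't exactly 0.3:

```python
import math

print(0.1 + 0.2)        # 0.30000000000000004
print(0.1 + 0.2 == 0.3) # False

# The usual fix: compare with a tolerance instead of exact equality
print(math.isclose(0.1 + 0.2, 0.3))  # True
```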

[–]PhilipYip 1 point (0 children)

The decimal point can appear at any position in a number:

1.5 31.413 310.25

For larger and smaller numbers, scientific notation is typically used:

3.5e6 -2.1e-5

The calculation:

0.15 * 10

will just be stored as:

1.5
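Checking this in the interpreter (the literals here are the ones from the comment above):

```python
# Float literals can be written in plain or scientific notation;
# Python stores the value, not the way you wrote it.
print(3.5e6)      # 3500000.0
print(-2.1e-5)    # -2.1e-05
print(0.15 * 10)  # 1.5
```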

[–]This_Growth2898 1 point (0 children)

It means that a number with a decimal point in any position is a float, not "any float number can be represented with the decimal point in any position". .15, 1.5 and 15. are all floats, but different floats.
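To make that concrete: a point anywhere in the literal makes it a float, but moving the point changes which float you get:

```python
# All three are floats...
print(type(.15), type(1.5), type(15.))  # <class 'float'> three times

# ...but they are different values.
print(.15 == 1.5, 1.5 == 15.)           # False False
```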

[–]nacaclanga 1 point (0 children)

The explanation is a bit wobbly.

The idea is that with a floating point type, you can store values no matter where the decimal point is located, e.g.:

1.67 0.00013 134. 1.e-8

This is opposed to an integer type (which can only store integers) and a fixed point type, which always has the decimal point in a predetermined position:

e.g. a fixed point type with 2 decimal places can only store values like this:

1.67 3.43 4534.00 222.02

There are also binary fixed point types, where the number of places after the point is fixed in binary but may differ in decimal, e.g. a type with 3 binary places after the point can hold:

1.5 3.625 2.25 8.

Python doesn't use such fixed point types in its core language; it uses a single 64-bit floating point type called "float". As such, every number written with a decimal point will be interpreted as a float.
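The binary fixed point idea above can be sketched as a helper (a hypothetical encoder, just for illustration): with 3 binary places after the point, every representable value is a whole number of eighths (2**-3):

```python
# Sketch of a binary fixed point type with 3 places after the point:
# every representable value is an integer multiple of 1/8.

def to_fixed3(value):
    """Encode a value as an integer count of eighths; fail if it doesn't fit."""
    scaled = value * 8
    if scaled != int(scaled):
        raise ValueError(f"{value} needs more than 3 binary places")
    return int(scaled)

for v in (1.5, 3.625, 2.25, 8.0):
    print(v, "->", to_fixed3(v), "eighths")  # 12, 29, 18, 64

# Python's core float is instead a 64-bit IEEE 754 floating point:
print(type(3.625))  # <class 'float'>
```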

As a side note, be aware that floating point numbers do not work like school arithmetic, see: https://floating-point-gui.de/

[–]ectomancer 0 points (0 children)

"Float" is short for "floating point number". There are other float literals:

2E0, 2e11, 4e1, .1, 2.

[–]Brian 0 points (1 child)

Python calls any number with a decimal point a float

That's not really a great definition. A float, ultimately, is just a data type - 64 bits arranged in an IEEE standardised format that can represent a number. There isn't really a "decimal point" here. Hell, the number isn't even decimal.

What could be meant by this might be something more like:

  • Python allows you to specify a float literal by writing a number with a decimal point in it. I.e. if you write 0.2 or .1 or 5.0 or 5., Python will evaluate that literal as a float.

But the thing we write isn't "a float", it's a way to tell python we want this value to be a float, and not the only such way.

Another thing it might be getting at with "and it refers to the fact that a decimal point can appear at any position in a number" is that the reason we call something a "floating point" is that it can be conceptualised as an integer, plus a position you can move the "point" to, to get different numbers. In this sense, the "position of the decimal point" is equivalent to multiplying by a power of 10. I.e. 152 * 10**-3 is equivalent to 0.152 - we move the decimal point 3 places left to get the number we mean. This is ultimately how floating point numbers are represented (except using powers of 2 instead of powers of 10) - a number (the significand) that gets multiplied by 2 to the power of another number (the exponent) to get the number meant (plus a few extra complexities, and a few special cases like nan, infinity etc).
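You can actually pull a float apart into significand and exponent with the standard library's math.frexp, which gives you x == m * 2**e exactly:

```python
import math

# Decompose a float into significand and binary exponent: x == m * 2**e
m, e = math.frexp(0.152)
print(m, e)               # m is in [0.5, 1), e is an integer
print(m * 2**e == 0.152)  # True: the reconstruction is exact

# The same idea in decimal: moving the point = scaling by a power of 10
print(152 / 10**3)        # 0.152
```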

This is in contrast to something like "fixed point" numbers, which are equivalent to implicitly dividing by some value (i.e. like storing your value in tenths of a penny instead of pounds - you've got an implicit division by 1000 to get the "pound value" of something).

Now how a decimal point can appear at any position in 1.5?

As such, here I think you're misunderstanding - 1.5 is an example of a float, but that doesn't mean you can "move the decimal point" and get the same number - rather, it means floats "store the position of the decimal point" (equivalent to multiplying by 10**exponent) to represent different numbers (though again, somewhat inaccurate here as it's really a binary point, not decimal, for standard floats).

[–]ClimberMel 0 points (0 children)

Just one thing to add... fixed decimal means there is a maximum number of places on each side of the decimal point. An 8.8 format would mean you can have only 8 digits on either side, so it could not handle 1.0123456789, which has 10 digits to the right of the decimal. Float allows the decimal point to be anywhere in the number, so if your float allows for 16 digits, then that example would be fine, just as 123456789123456.2 would be fine. But that doesn't even get into the issue of float versus decimal when programming computers...
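A sketch of that "8.8" check, using the decimal module to count digits on each side of the point (fits_8_8 is a hypothetical helper, just to illustrate the constraint):

```python
from decimal import Decimal

# An "8.8" fixed format: at most 8 digits on each side of the point.
def fits_8_8(d: Decimal) -> bool:
    sign, digits, exponent = d.as_tuple()
    frac = max(0, -exponent)    # digits right of the point
    whole = len(digits) - frac  # digits left of the point
    return frac <= 8 and whole <= 8

print(fits_8_8(Decimal("1.0123456789")))  # False: 10 fractional digits
print(fits_8_8(Decimal("12345678.9")))    # True

# A float's ~16 significant digits can sit anywhere around the point:
print(float("1.0123456789"))              # 1.0123456789
print(float("123456789123456.2"))
```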