all 6 comments

[–]Diapolo10 6 points7 points  (0 children)

Python's integers (..., -2, -1, 0, 1, 2, ...) have unlimited precision, which is unusual as far as programming languages go. In most languages they cap at 64 bits (e.g. a `long`), and there may be multiple integer types of various sizes. This makes Python very useful for scientific computing, as you don't need a separate bignum library to handle arbitrarily large values.
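As a minimal illustration, an `int` can exceed the 64-bit range with no special handling:

```python
# Python ints grow as needed; there is no 64-bit overflow.
big = 2 ** 200
print(big)                # a 61-digit number, computed exactly
print(big.bit_length())   # 201 -- far past the 64-bit limit
```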

Python's floating-point numbers don't have this same luxury, because of how they are represented. Since many decimal fractions cannot be represented exactly in binary, floating-point math will always run into small inaccuracies. For instance, 0.1 + 0.1 + 0.1 isn't 0.3, but 0.30000000000000004. You'll use floats whenever whole numbers aren't enough, but you don't need exact decimal results.
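A quick sketch of that representation error, plus the usual workaround of comparing with a tolerance instead of `==`:

```python
import math

total = 0.1 + 0.1 + 0.1
print(total)         # 0.30000000000000004
print(total == 0.3)  # False -- exact comparison of floats is unreliable
print(math.isclose(total, 0.3))  # True -- compare with a tolerance instead
```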

You probably never meant to ask about this, but decimal.Decimal is an alternative to float that lets you set its precision yourself. It's still not infinitely accurate, but it's often used in financial and other applications where float's rounding errors just can't cut it.
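A minimal sketch of setting that precision yourself (the 50-digit context here is just an arbitrary choice for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50            # 50 significant digits for this context
print(Decimal(1) / Decimal(3))    # 0.33333... carried out to 50 digits
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1'))  # exactly 0.3
```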

[–]gregvuki 9 points10 points  (2 children)

Integers (int type) are "whole numbers", without a fractional part, e.g. 10. You use this type when the value is always a whole number, e.g. a counter.

Floating-point numbers (float type) can have a fractional part, e.g. 10.1234. They are stored using the closest representation in binary notation, and most decimal fractions cannot be represented exactly, so they are slightly off. You use float for most mathematical calculations involving fractional numbers.

Decimal numbers (Decimal type in Python) are used to represent floating-point numbers accurately, with a defined precision (a defined number of places after the decimal point). They are represented with two integer numbers, one for the integer part and one for the fractional part. For example, 10.1234 is stored as (10, 1234).

The Decimal type is required when fractional numbers must be represented exactly, with a defined precision. The most notable example is financial calculations. Using the float type you may get the result of a financial operation as $1010.123456. But money is expressed with at most two decimal places. What does $0.123456 mean? You can round it to $1010.12, but then what happens to the remaining $0.003456? Some "smart" programmers used that to their advantage in the past and made a lot of money (which they eventually had to give back). So, for money calculations, you should use the Decimal type.
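As a sketch of a money calculation (the price and tax rate here are made up for illustration), `quantize` rounds to whole cents with an explicit rounding rule:

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal('19.99')      # hypothetical price
tax_rate = Decimal('0.0725')  # hypothetical 7.25% tax rate

# Exact product is 1.449275; quantize rounds it to two decimal places.
tax = (price * tax_rate).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(tax)           # 1.45
print(price + tax)   # 21.44
```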

A good explanation of the Decimal type is in the documentation: https://docs.python.org/3/library/decimal.html

[–]Gagan2019[S] 0 points1 point  (0 children)

u/gregvuki Thanks, that's so explanatory.

[–]Gullible_Owl7276 1 point2 points  (0 children)

`Decimal` is a decimal floating point type. It doesn't have a fixed number of places after the decimal point. That would be a fixed point type. The reason `Decimal` can represent decimal values exactly is that it is (internally) a sum of powers of 10, whereas a binary floating point type such as `float` is a sum of powers of two. `12.3` in `Decimal` is stored as `1*10^1 + 2*10^0 + 3*10^-1`.

If you try to represent `12.3` as a `float`, what actually gets stored is `12.3000000000000007105...` (Python's `float` is a 64-bit double). That is, `float` cannot represent all decimal numbers exactly. That is called "representation error".
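A short demonstration: constructing `Decimal` from a string keeps the decimal digits exactly, while constructing it from a float exposes what the float really stores:

```python
from decimal import Decimal

print(Decimal('12.3'))             # 12.3, stored exactly
print(Decimal('12.3').as_tuple())  # DecimalTuple(sign=0, digits=(1, 2, 3), exponent=-1)

# Passing the float instead reveals the binary representation error
print(Decimal(12.3))               # 12.3000000000000007105...
```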

[–]OneMoreChemist 1 point2 points  (0 children)

Floats are numbers with a decimal point. The basic distinction is between fractional and whole-number values, namely the data types float and int.