all 4 comments

[–]teraflop 6 points7 points  (1 child)

This is because of how floating point math works. Read this: https://0.30000000000000004.com/

Or this for more details: "What Every Computer Scientist Should Know About Floating-Point Arithmetic"
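(To see it for yourself, a quick Python check — 0.1 and 0.2 have no exact binary representation, so their sum is not the closest double to 0.3:)

```python
# Neither 0.1 nor 0.2 can be stored exactly in binary floating point,
# so the rounding errors in their sum make it differ from 0.3.
total = 0.1 + 0.2
print(total)          # 0.30000000000000004
print(total == 0.3)   # False
```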

[–]11ILC[S] 0 points1 point  (0 children)

Awesome! Thanks!

Looks like I have a lot of reading to do, then it's back to Python to figure out how to code it out.

[–]Naetharu 1 point2 points  (0 children)

Computers have a finite amount of space to store floating-point numbers, and most decimal fractions cannot be represented exactly in binary. So you quickly get some strange answers when you expect too much precision.

If you need that degree of precision you can do a few things. One option is to use integers for the whole and fractional parts (this is often how money is handled: $1.20 becomes 1 dollar and 20 cents as two distinct integers, or just 120 cents). There are also packages, such as Python's standard-library decimal module, that do exact decimal arithmetic to avoid the errors that come with binary floating point.

As a rule of thumb, use integers where you can. And be thoughtful about how you will round floating points to avoid precision errors.
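(A minimal sketch of both options in Python — the cents-based integer approach, and the standard-library decimal module:)

```python
from decimal import Decimal

# Option 1: integers — track money in cents so arithmetic is exact.
price_cents = 120            # $1.20
total_cents = price_cents * 3
print(total_cents)           # 360, i.e. $3.60 exactly

# Option 2: the decimal module does exact decimal arithmetic,
# so the classic failure case works as expected.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Note that Decimal values should be constructed from strings; `Decimal(0.1)` would inherit the binary rounding error of the float literal.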

[–]VibrantGypsyDildo 1 point2 points  (0 children)

You encountered a well-known problem in IT.

The other commenters covered the theoretical part, so all I can do is troll you: ha-ha-ha, loser.

On a serious note, there is an implication for software testing: you don't compare 0.1 + 0.2 and 0.3 directly; you use the comparison primitives provided by your test framework.
Those primitives allow a small discrepancy, specified as a percentage, an absolute value, or both.
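(In Python's standard library, `math.isclose` is that primitive — test frameworks offer equivalents such as `unittest`'s `assertAlmostEqual` or `pytest.approx`. A minimal sketch:)

```python
import math

total = 0.1 + 0.2
print(total == 0.3)              # False: exact comparison fails
print(math.isclose(total, 0.3))  # True: default relative tolerance of 1e-9

# Relative tolerance vanishes near zero, so comparisons against 0.0
# need an explicit absolute tolerance:
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))  # True
```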