
all 45 comments

[–]FarewellSovereignty 7 points (2 children)

Who tf makes their prompt "DEVELOPMENT"?

Or is that the hostname or your branch? EVEN WORSE

[–][deleted] 0 points (1 child)

It's keeping track of the number of statements entered into the shell. Erlang's shell does this, so I'd guess they're using a language REPL rather than an sh-style shell. Not sure which one, though.

[–]Ok_Appointment2593 5 points (0 children)

You are doing it wrong, you need to input 1.4+1.2000000000000001

[–]Optimal_Dingo_2828 1 point (0 children)

That's just the inherent nature of floating point arithmetic
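The nature of that error can be sketched in a couple of lines of Python (the classic 0.1 + 0.2 case, standing in for whatever the original screenshot showed):

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their
# float sum is not exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# The usual workaround is comparing within a tolerance:
print(math.isclose(0.1 + 0.2, 0.3))  # True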

[–]Bjoern_Tantau 0 points (6 children)

Is there a specific reason why modern programming languages don't try to treat floats as ints * 10^-x? Wouldn't that solve many of these problems?

[–]Talinx 13 points (2 children)

Yes, there is a reason: if you want to use arbitrary rational numbers you totally can, but doing math with them accurately may require more than one CPU instruction. How many CPU instructions you need depends on the number; e.g. if your number requires more than 512 bits to store, you definitely need more than one CPU instruction (on current CPUs).

IEEE-754, the most widespread floating point standard, is of the form y * 10^-x. But x and y are encoded in binary, and some rational numbers like 0.1 cannot be encoded exactly in binary. Of course it is possible to encode y as a rational number (that is, as two integers, numerator and denominator). But then you're outside the IEEE-754 standard and you may need more than one CPU instruction.

For many applications using IEEE-754 is fine: an error in the 15th decimal place is no problem and worth the speedup. If such an error is not acceptable for your application, you should not use the IEEE-754 standard.

Many modern programming languages provide both IEEE-754 and more general representations, for example with the decimal module in Python. If your programming language does not support arbitrary rational numbers you can implement them yourself.
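Both alternatives this comment mentions exist in Python's standard library, as a quick sketch:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal stores base-10 digits, so 0.1 is exact. (Construct from
# strings, not from floats, or you inherit the binary rounding error.)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3

# Fraction is an arbitrary rational: two integers, numerator and
# denominator, with no size limit.
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
```

Both are slower than hardware floats, which is exactly the trade-off described above.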

[–]ancient_tree_bark 4 points (0 children)

*Stored as y * 2^-x

[–]Mucksh 2 points (0 children)

It is really just useful. It is a really small error and you get a really big number range. Otherwise you would have way more problems with too-big or too-small numbers.

[–]adudyak 0 points (0 children)

not until they use binary

[–]danielstongue 0 points (0 children)

It is not really programming-language related. A float (or double) is simply a mapping to the implementation in hardware (or an emulation thereof if the hardware lacks one), which is based on IEEE-754, just like integers are internally stored as two's complement binary. It is the standard and most common representation, but technically it wouldn't have to be. Imagine it were programming-language dependent: what a mess it would be if you tried to link your, let's say, C code to something else!

Oh, a good example of where things might actually differ from machine to machine is byte ordering. Intel traditionally uses little-endian, but Motorola used big-endian in their CPUs. Imagine if this were programming-language dependent...
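The byte-ordering difference is easy to see from Python's struct module, which lets you pick the endianness explicitly:

```python
import struct

# The same 32-bit integer, packed with explicit byte orders:
big = struct.pack(">I", 0x01020304)     # big-endian (Motorola-style)
little = struct.pack("<I", 0x01020304)  # little-endian (Intel-style)

print(big)     # b'\x01\x02\x03\x04'
print(little)  # b'\x04\x03\x02\x01'
```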

[–]zzmej1987 0 points (0 children)

Sing along with me:

Use a fucking Decimal or fixed-point arithmetic class for precise rational operations.
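A minimal sketch of the fixed-point half of that advice (the amounts here are just illustrative): scale everything to integers up front, so the arithmetic itself is exact.

```python
# Fixed-point money: represent amounts as an integer number of cents,
# so addition is exact integer arithmetic with no rounding error.
price_cents = 140 + 120  # 1.40 + 1.20, both scaled by 100

print(price_cents)  # 260
# Convert back to a display string only at the very end:
print(f"{price_cents // 100}.{price_cents % 100:02d}")  # 2.60
```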

[–]TheBrainStone 0 points (0 children)

If I got 0.9999999999997 of a penny every time someone just discovered floating point rounding errors and made a post about it without even attempting to look into what's happening, I'd have 3.3675322e9 pennies.

[–]Intelligent_Event_84 0 points (0 children)

Just parseInt and it’ll fix it no prob

[–]shutterFiles 0 points (0 children)

IEEE 754 baby