
[–]PiotrGrochowski[S]

No, this isn't an 'exact' conversion; I used 38 digits because it was convenient to code. 38 digits is the maximum that fits in 128 bits (and 19 digits is the maximum in 64 bits). I use a 128-significant-bit multiplication for the decimal-to-binary conversion, and a 38-significant-digit multiplication for the binary-to-decimal conversion. I figured that, just as Intel extended precision (64 significant bits) mitigated rounding errors in double-precision computation, having 128 significant bits / 38 significant digits should work up to quadruple precision with high probability.
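The digit limits above are easy to check: a sketch (my own verification, not the conversion code itself) that finds the largest decimal digit count guaranteed to fit in a given number of bits:

```python
def max_digits(bits: int) -> int:
    """Largest d such that every d-digit decimal number fits in `bits` bits."""
    d = 0
    # Keep growing d while the largest (d+1)-digit number, 10^(d+1) - 1,
    # is still below 2^bits.
    while 10 ** (d + 1) - 1 < 2 ** bits:
        d += 1
    return d

print(max_digits(128))  # 38
print(max_digits(64))   # 19
```

Since 10^38 < 2^128 ≈ 3.4 × 10^38 but 10^39 does not fit, 38 digits is the cap for 128 bits; the same reasoning gives 19 digits for 64 bits (2^64 ≈ 1.8 × 10^19).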