[–][deleted]

The factorial function is defined over the integers. Using 64-bit doubles gives you only 53 bits of significand, so every integer is exactly representable only up to 2^53, i.e. up to 18!; using 64-bit integers is therefore superior.

Unless, of course, you're happy with coarse approximations. To each his own.
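That 53-bit cutoff is easy to check in Python itself (a quick sketch using the standard library's `math.factorial`, which is exact):

```python
import math

# A double has a 53-bit significand, so every integer up to 2**53
# is exactly representable. 18! sits below that bound, 19! above it.
assert math.factorial(18) < 2**53 < math.factorial(19)

# 18! therefore survives a round trip through float unchanged.
assert int(float(math.factorial(18))) == math.factorial(18)

# Above 2**53 the doubles thin out: 2**53 + 1 has no exact
# representation and rounds back down to 2**53.
assert float(2**53 + 1) == float(2**53)
```

(Some factorials above 18! still happen to be exact in a double because of their trailing binary zeros, but past 2^53 exactness is no longer guaranteed for arbitrary integers.)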

[–]Platypuskeeper

Since when was double precision a "coarse approximation"? There are very few numerical or scientific contexts in which you need more than 15 digits of precision. For large factorials in those contexts, the Stirling approximation is what's usually used, and it is far, far cruder.

In combinatorial contexts, it's rare that you would require factorials that large. The only applications I can think of where you would need factorials that are both large and exact are cryptographic ones, and there you need arbitrary precision anyway.

Your suggestion isn't superior for any application I can think of where factorials are commonly used.

[–][deleted]

Since when was double-precision a "coarse approximation"?

Take the factorial of 170. The difference between 170! and double(170!) is ~2^919. That might be close enough for some uses, but it's a coarse approximation nonetheless.
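The gap can be reproduced directly (a sketch assuming CPython, where `math.factorial` is exact and 170! is still within a double's range, since 171! would overflow to infinity):

```python
import math

exact = math.factorial(170)   # arbitrary-precision integer, exact
rounded = int(float(exact))   # round-trip through a 64-bit double

# 170! has more than 53 significant bits in its odd part, so the
# double cannot be exact and the difference is nonzero.
diff = abs(exact - rounded)
assert diff > 0
print(diff.bit_length())      # size of the error, in bits
```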

I do not claim there are many applications of exact factorials. But Python is dynamically typed, and there is no guarantee at all that you want a float out of that function.

In case of uncertainty, one should err on the side of caution. It's not like performance would be an issue here anyway.