Tags: performance, precision, arbitrary-precision

Fixed precision vs. arbitrary precision


A lot of modern languages have support for arbitrary-precision numbers: Java has BigInteger, Haskell has Integer, and Python's int is arbitrary-precision by default. But in many of these languages, arbitrary precision isn't the default; it's an opt-in type prefixed with 'Big'.
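
As a minimal illustration of what "arbitrary by default" means, here is a quick check in Python (assuming a standard CPython 3 interpreter); no separate "Big" type is needed:

    import math

    # Python's built-in int is arbitrary-precision: 100! has 158 decimal digits
    # and is computed exactly, with no overflow and no special type.
    n = math.factorial(100)
    print(n)
    print(len(str(n)))  # 158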

Why isn't arbitrary precision the default in all modern languages? Is there a particular reason to use fixed-precision numbers over arbitrary-precision ones?

If I had to guess, it's because fixed-precision numbers map directly to machine instructions, so they're more efficient and run faster, which would be a worthwhile trade-off if you didn't have to worry about overflow because you knew the numeric ranges beforehand. Are there any particular use cases for fixed precision over arbitrary precision?
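
To make the performance angle concrete, here is a rough, hypothetical micro-benchmark in Python. Exact timings depend on the machine and interpreter, but additions of multi-word integers are noticeably slower than word-sized ones, because they can't be done with a single machine instruction:

    import timeit

    small = 2**30      # fits in one machine word
    huge = 2**10_000   # spans hundreds of machine words

    t_small = timeit.timeit("small + small", globals={"small": small}, number=100_000)
    t_huge = timeit.timeit("huge + huge", globals={"huge": huge}, number=100_000)

    print(f"word-sized add: {t_small:.4f} s for 100k additions")
    print(f"multi-word add: {t_huge:.4f} s for 100k additions")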


Solution

  • It is a trade-off between performance and features/safety. I cannot think of any reason why I would prefer using overflowing integers other than performance. Also, I could easily emulate overflowing semantics with non-overflowing types if I ever needed to (a sketch of this follows the answer).

    Also, overflowing a signed int is a very rare occurrence in practice. It happens almost never. I wish that modern CPUs supported raising an exception on overflow without a performance cost.

    Different languages emphasize different features (where performance is a feature as well). That's good.
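
As a rough sketch of the "emulate overflowing semantics" point above, here is one way to get signed 32-bit two's-complement wrap-around behavior out of Python's non-overflowing int (the function name and the 32-bit width are just for the example):

    MASK32 = (1 << 32) - 1

    def wrap_int32(x: int) -> int:
        """Reduce an arbitrary-precision int to a signed 32-bit two's-complement value."""
        x &= MASK32
        return x - (1 << 32) if x >= (1 << 31) else x

    print(wrap_int32(2**31 - 1))         # 2147483647   (INT32_MAX passes through)
    print(wrap_int32((2**31 - 1) + 1))   # -2147483648  (overflow wraps around)
    print(wrap_int32(100_000 * 100_000)) # 1410065408   (the classic 32-bit squaring overflow)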