This might be a naive or loaded question, but after suffering through debugging floating-point issues, I have to ask:
If floats have rounding issues because of the base-2 vs. base-10 representation difference between floats and BigDecimals… why/when would you use floats instead of BigDecimals?
Almost all major programming languages have a BigDecimal library... so:
It seems to me that accuracy in math trumps any performance bump you’d get by using floats… so why hasn’t the software world just abandoned floats and said, “sorry, we’re going all-in on BigDecimal”?
The key question is whether your application would benefit from exact representation of terminating decimal fractions.
Being able to represent numbers such as 1.01 exactly is very, very useful in financial calculations.
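For example, here is a minimal Java sketch using java.math.BigDecimal (the repeated-addition scenario is just an illustration I chose) that sums 1.01 a hundred times both ways:

```java
import java.math.BigDecimal;

public class PennyDrift {
    public static void main(String[] args) {
        // double: 1.01 has no exact base-2 representation, so the
        // small representation error accumulates over repeated additions.
        double d = 0.0;
        for (int i = 0; i < 100; i++) {
            d += 1.01;
        }
        System.out.println(d);   // slightly off from 101 (e.g. 100.99999999999991)

        // BigDecimal built from the string "1.01" is exact, so the sum is too.
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 100; i++) {
            b = b.add(new BigDecimal("1.01"));
        }
        System.out.println(b);   // exactly 101.00
    }
}
```

Note the String constructor: `new BigDecimal("1.01")` captures the decimal value exactly, whereas `new BigDecimal(1.01)` would faithfully record the double's representation error.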
On the other hand, when you are dealing with physical measurements, the decimal value is itself just an approximation. I do not believe any physical quantity has been measured to the full precision of an IEEE 754 64-bit binary float (the commonest implementation of double), which carries roughly 15–17 significant decimal digits.
There is a misconception that BigDecimal libraries remove rounding issues. They only help with numbers that are exactly representable as reasonably short decimal fractions. BigDecimal does no better than double at representing one third.
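You can see this directly in Java (the scale of 20 and the rounding mode below are arbitrary choices for the example):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class OneThird {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);

        // 1/3 has no terminating decimal expansion, so asking for the
        // exact quotient throws ArithmeticException.
        try {
            System.out.println(one.divide(three));
        } catch (ArithmeticException e) {
            System.out.println("exact division failed: " + e.getMessage());
        }

        // You must pick a scale and a rounding mode -- i.e. accept a
        // rounded approximation, the same trade-off a double makes in base 2.
        System.out.println(one.divide(three, 20, RoundingMode.HALF_UP));
        // 0.33333333333333333333
    }
}
```

As soon as you divide by anything whose prime factors are not 2 and 5, you are back to choosing where to round, just as with binary floats.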