Some years ago, I helped write an application that dealt with money and insurance. Initially, we represented money with floating point numbers (a big no-no, I know). Most of the application was just adding and subtracting values, so there weren't any issues. However, specific portions dealt with percentages of money values, hence multiplication and division.
We immediately began suffering from floating point errors and had to do a major refactor. We switched to an arbitrary-precision library, which solved that issue. However, it didn't change the fact that you can end up with fractions of a cent. How are you supposed to round that? The short answer is "it's complicated."
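To make that concrete, here is a minimal illustration (not our actual code) of the kind of drift we saw once percentages came into play, and how an arbitrary-precision type such as java.math.BigDecimal avoids it:

```java
import java.math.BigDecimal;

public class PercentageDemo {
    public static void main(String[] args) {
        // 10% of $0.70 computed with doubles drifts away from the exact answer.
        double withDoubles = 0.70 * 0.10;
        System.out.println(withDoubles);   // prints a value slightly off from 0.07

        // The same calculation with an arbitrary-precision decimal type is exact.
        BigDecimal exact = new BigDecimal("0.70").multiply(new BigDecimal("0.10"));
        System.out.println(exact);         // 0.0700
    }
}
```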
Now I'm getting ready to begin work on a similar application to supplant the old one. I've been mulling this over for years. I always thought it would be easiest to create a money datatype that wraps an integer (or BigInteger) to represent the number of pennies, with a function to print it in the traditional, human-friendly $0.00 format.
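Roughly, I have something like this sketch in mind (the names are just illustrative, not an existing API):

```java
// Sketch of the integer-backed idea: one cent is the smallest unit.
public final class Cents {
    private final long cents;

    public Cents(long cents) { this.cents = cents; }

    public Cents plus(Cents other)  { return new Cents(Math.addExact(cents, other.cents)); }
    public Cents minus(Cents other) { return new Cents(Math.subtractExact(cents, other.cents)); }

    /** Render in the human-friendly $0.00 format, e.g. 1234 -> "$12.34". */
    @Override
    public String toString() {
        long abs = Math.abs(cents);
        return String.format("%s$%d.%02d", cents < 0 ? "-" : "", abs / 100, abs % 100);
    }
}
```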
However, researching this, I found JSR 354, the Java Money API that was recently implemented. I was surprised to discover that it backs its representation of money with BigDecimal. Because of that, it includes specific logic for rounding.
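From what I can tell, using Moneta (the JSR 354 reference implementation), rounding is an explicit operator you apply when you want it, rather than something that happens on every operation. A rough example (the printed formatting may differ):

```java
import java.math.BigDecimal;
import javax.money.Monetary;
import javax.money.MonetaryAmount;
import org.javamoney.moneta.Money;

public class RoundingDemo {
    public static void main(String[] args) {
        // 7.5% of $19.99 is $1.49925, i.e. more precision than one cent.
        MonetaryAmount fee = Money.of(new BigDecimal("19.99"), "USD")
                                  .multiply(new BigDecimal("0.075"));
        System.out.println(fee);   // carries the full 1.49925

        // Rounding to the currency's standard scale is a separate, explicit step.
        System.out.println(fee.with(Monetary.getDefaultRounding()));   // 1.50
    }
}
```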
What's the advantage of carrying fractions of a cent around in your calculations? Why would I want to do that instead of saying one cent is the "atomic" form of money?
This is a broad question, because the answer depends on the implementation.
If I were to purchase 1000 items in bulk for $5, then each item would individually cost $0.005, which is less than what you claim to be the "atomic form" of money, $0.01.
If we considered $0.01 to be the smallest representable amount, then we couldn't handle calculations like the one in my example: the unit price would have to be rounded up to $0.01 or down to $0.00, and multiplying back by 1000 would give $10 or $0 instead of $5.
For that reason, the JavaMoney API carries many fractional digits, ensuring that no precision is lost in intermediate calculations like these.
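Here is a sketch of that bulk-purchase case using the JSR 354 API via Moneta, its reference implementation (exact output formatting may vary); the point is that the sub-cent unit price is carried exactly, and rounding only happens where you explicitly ask for it:

```java
import javax.money.Monetary;
import javax.money.MonetaryAmount;
import org.javamoney.moneta.Money;

public class BulkPriceDemo {
    public static void main(String[] args) {
        // $5 split across 1000 items: each item costs $0.005, half a cent.
        MonetaryAmount unitPrice = Money.of(5, "USD").divide(1000);
        System.out.println(unitPrice);      // the sub-cent value is kept exactly

        // Multiplying back by the quantity recovers the exact total, which would be
        // impossible if $0.01 were the smallest amount we could represent.
        MonetaryAmount total = unitPrice.multiply(1000);
        System.out.println(total);          // exactly five dollars again

        // Rounding to whole cents stays an explicit, final step.
        System.out.println(total.with(Monetary.getDefaultRounding()));
    }
}
```

Rounding then becomes a policy decision you apply at well-defined points, such as an invoice line or a ledger entry, rather than a side effect of the data type itself.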