
How many decimal places are needed to give an exact result?


Let's assume I have two decimal numbers a and b, each with two decimal places. I would now like to determine a factor x which, multiplied by a, gives b. That is, x = b / a.

Is there any way to calculate the minimum number of decimal places of x required for the result of a * x to reproduce b exactly to two decimal places (without rounding)?

Example:

a = 76.33
b = 44.12

x1 = 0.5780165072710599 (16 decimal places)
b1 = 76.33 * 0.5780165072710599
b1 = 44.120000000000002167

x2 = 0.578016507271059 (15 decimal places)
b2 = 76.33 * 0.578016507271059
b2 = 44.11999999999993347

x3 = 0.57801650727106 (14 decimal places; rounded)
b3 = 76.33 * 0.57801650727106
b3 = 44.1200000000000098

x4 = 0.5780 (4 decimal places; rounded as recommended by GAAP)
b4 = 76.33 * 0.5780
b4 = 44.11874

x5 = 0.5781 (4 decimal places; adjusted upward rather than rounded)
b5 = 76.33 * 0.5781
b5 = 44.126373 (it now starts with 44.12, but is farther from 44.120 than the previous 44.11874)
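
For illustration, these products can be reproduced exactly with Python's decimal module (binary floats would add extra noise), for example along these lines:

from decimal import Decimal

a = Decimal("76.33")
candidates = [
    ("x1 (16 decimal places)", Decimal("0.5780165072710599")),
    ("x2 (15 decimal places)", Decimal("0.578016507271059")),
    ("x3 (14 decimal places)", Decimal("0.57801650727106")),
    ("x4 (4 decimal places)",  Decimal("0.5780")),
    ("x5 (4 decimal places)",  Decimal("0.5781")),
]

for label, x in candidates:
    # exact decimal product; the target is b = 44.12
    print(label, a * x)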

The Generally Accepted Accounting Principles (GAAP) recommend four decimal places to counteract rounding errors. In my case, however, this leads to significant rounding problems when summing and multiplying large amounts.

Take, for example, b4 = 44.11874 and book this position 1,000,000 times. You end up with 44,118,740. With b1 = 44.120000000000002167 the sum would be 44,120,000. The difference is already 1,260, which matters for a currency amount.
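
The accumulation itself can be checked the same way, for example:

from decimal import Decimal

b1 = Decimal("76.33") * Decimal("0.5780165072710599")  # 44.120000000000002167
b4 = Decimal("76.33") * Decimal("0.5780")              # 44.118740

n = 1_000_000
print(n * b4)         # 44118740.000000
print(n * b1)         # 44120000.000000002167000000
print(n * (b1 - b4))  # 1260.000000002167000000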

Is it possible to calculate the minimum number of decimal places required to be on the safe side?


Solution

  • You have the following exact math equation:

    x = b / a
    

    However when you round x, you end up with:

    x' = x + e
    

    where e is an error term.

    If you round x to k decimal places, then e lies between -1/(2 · 10^k) and +1/(2 · 10^k). For instance, if you round x to 4 decimal places, then e lies between -0.00005 and +0.00005.

    Then when you try to recompute b using x' instead of x, you get a value b' which also has an error term:

    b' = a x' = a x + a e = b + a e
    

    The error term on b' is now a e.

    So if you want the error term on b' to be at most 1 / 200, so that b' rounds to b at two decimal places, then you need the error term on x' to be at most 1 / (200 a).

    In your examples, a was always less than 100, so the error a e stays below 100 · 0.00005 = 0.005 and rounding x to 4 decimal places was good enough.

    But in your last example the position is booked 1,000,000 times, so the effective multiplier is 1,000,000 and you need 8 decimal places for x (see the sketch below).
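
    One way to sketch this rule in Python (the helper name min_decimal_places is just illustrative): pick the smallest k with 10^k >= multiplier * 10^2, where the multiplier is a itself for a single value, or the total factor (here 1,000,000) for the aggregated booking.

    import math

    def min_decimal_places(multiplier: float, cent_places: int = 2) -> int:
        # Rounding x to k places bounds the error e by 0.5 * 10**-k, so the error
        # on the product is at most multiplier * 0.5 * 10**-k.  Keeping that below
        # 0.5 * 10**-cent_places requires 10**k >= multiplier * 10**cent_places.
        return max(cent_places, math.ceil(math.log10(multiplier)) + cent_places)

    print(min_decimal_places(76.33))      # 4 -> the 4-decimal-place cases above
    print(min_decimal_places(1_000_000))  # 8 -> the million-fold booking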