
Numerical Stability - does Multiply/Divide give a more precise value than Divide/Multiply?


Consider the following code:

$result *= $oldFactor / $newFactor;  //using shorthand *= operator

which is actually this:

$result = $oldFactor / $newFactor * $result;  //division done first

I can also manually write it as this:

$result = $result * $oldFactor / $newFactor;  //multiplication done first

I am under the impression that multiplication is a simpler operation and does not suffer as much from rounding errors as division does.

Also, I have a feeling that for most day-to-day human numbers, multiplying first will produce a "larger number" before it is divided (assuming the numbers involved are often greater than 1), and that larger numbers are more numerically stable after being divided. For example, consider 5 * 7 / 2.3, where the first operation (multiplication) is exact, since 5, 7, and their product 35 are represented exactly in binary. Then the division is done, and the result is as precise as we are going to get. But consider 7 / 2.3 * 5, where the first operation is the division: it already produces a number that cannot be represented exactly in binary, and the following multiplication exaggerates that imprecision.
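The two orderings can be compared directly. This is a minimal sketch in Python rather than PHP, but both languages use IEEE-754 doubles for their float type, so the rounding behavior is the same:

```python
# Multiply first: 5 * 7 = 35 is exact, so only the division rounds.
mul_first = 5 * 7 / 2.3

# Divide first: the division rounds, then the multiplication rounds again.
div_first = 7 / 2.3 * 5

# Whether the two agree to the last bit depends on the particular values;
# any difference is at most a few ulps.
print(mul_first, div_first, mul_first == div_first)
```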

My question is basically: does this matter? Do I indeed lose precision when dividing first, or am I perfectly safe to use whichever ordering of operations looks best to me and still get the same result?


Solution

  • If your platform, toolchain, and code comply with the IEEE-754 standard, then both multiplication and division produce a result with at most half an ulp (unit in the last place) of error, assuming round-to-nearest mode and exact inputs. So, in principle, there is no difference.
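This correct-rounding guarantee can be checked in Python (whose floats are IEEE-754 doubles) by redoing each single operation in exact rational arithmetic and rounding the exact answer back to the nearest double:

```python
from fractions import Fraction

a, b = 7.0, 2.3

# Fraction(x) captures the exact rational value of the double x,
# so these quotient/product values are computed without any rounding.
exact_quot = Fraction(a) / Fraction(b)
exact_prod = Fraction(a) * Fraction(b)

# float() rounds an exact rational to the nearest double, so these
# checks confirm each single operation is correctly rounded.
assert a / b == float(exact_quot)
assert a * b == float(exact_prod)
```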

    But there are subtleties, of course. As mentioned in the comment by Raymond Chen, it may depend on the values you are working with. First, the absolute error corresponding to 1/2 ulp is larger for large numbers than for small ones, so given one division and one multiplication on the same inputs, the absolute error of the (smaller) division result will, of course, be smaller than that of the (larger) multiplication result.
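The point that 1/2 ulp is a fixed relative size but a growing absolute size can be seen with Python's `math.ulp`:

```python
import math

# One ulp is a fixed *relative* size but a growing *absolute* size:
# near 1.0 it is 2**-52, while near 1e6 it is roughly 1.16e-10.
print(math.ulp(1.0))
print(math.ulp(1e6))
```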

    With a sequence of one multiplication and one division, using a type with substantial precision such as double, the total error in both cases should be at most 1 ulp, and the difference caused by the order of operations should be very small in most cases. In the example from the comment the effect is very pronounced, because that hypothetical machine has a one-digit significand.

    There are some particularities, obviously. One is that floating-point arithmetic is not associative, so doing (*,/) or (/,*) can give you very slightly different results, for example at the border of a floating-point exponent block (one result lands at the top of one block and the other just above it, at the bottom of the next block), in which case one of the results has twice the ulp error of the other. There can also be overflow or underflow. If two large numbers are multiplied first, the product can overflow to inf, which contaminates all further operations. If a very small number is divided first, you can get underflow, that is, a numerical result of zero when it was supposed to be non-zero, or a subnormal result, which has a reduced number of significant digits and thus a larger relative error.
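The overflow and underflow hazards are easy to reproduce in Python (IEEE-754 doubles); the particular magnitudes below are just illustrative:

```python
import math

big = 1e200
# Multiplying the two big numbers first overflows to infinity,
# and the following division cannot undo that...
assert math.isinf(big * big / 1e150)
# ...while dividing first keeps every intermediate value in range.
assert math.isfinite(big / 1e150 * big)

tiny = 1e-300
# Dividing the tiny number by a big one first underflows to zero...
assert tiny / 1e100 == 0.0
# ...while multiplying first avoids the underflow entirely.
assert tiny * 1e100 / 1e100 != 0.0
```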

    In all cases, if you care about tiny errors in a sequence of a few operations, or if your calculation involves a long sequence of operations, you should do error analysis, either a priori or a posteriori (running error analysis).
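A simple form of a posteriori error analysis is to shadow the floating-point chain with exact rational arithmetic and measure the drift at the end. This is a minimal sketch in Python; the list of values and the alternating multiply/divide chain are arbitrary illustrations:

```python
from fractions import Fraction

values = [5.0, 7.0, 2.3, 1.1, 3.0]

result_f = 1.0           # the floating-point computation
result_q = Fraction(1)   # the same chain in exact rational arithmetic
for i, v in enumerate(values):
    if i % 2 == 0:
        result_f *= v
        result_q *= Fraction(v)   # Fraction(v) is the exact value of the double v
    else:
        result_f /= v
        result_q /= Fraction(v)

# Relative error of the float result: a handful of ulps at most here,
# since each of the five operations contributes at most 1/2 ulp.
rel_error = float(abs(Fraction(result_f) - result_q) / result_q)
print(rel_error)
```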

    EDIT: As pointed out in a comment on this answer, what I refer to above as an exponent block is actually called a binade: the set of floating-point numbers that have the same exponent.