Tags: c++, c, floating-point, double, precision

Double rounding error, even when using DBL_DIG


I am trying to generate a random number between -10 and 10 with a step of 0.3 (though I want these to be arbitrary values) and am having issues with double-precision floating-point accuracy. float.h's DBL_DIG is meant to be the minimum accuracy at which no rounding error occurs [EDIT: This is false, see Eric Postpischil's comment for a true definition of DBL_DIG], yet when printing to this many digits, I still see rounding error.

#include <stdio.h>
#include <float.h>
#include <stdlib.h>

int main()
{
  for (;;)
  {
    printf("%.*g\n", DBL_DIG, -10 + (rand() % (unsigned long)(20 / 0.3)) * 0.3);
  }
}

When I run this, I get this output:

8.3
-7
1.7
-6.1
-3.1
1.1
-3.4
-8.2
-9.1
-9.7
-7.6
-7.9
1.4
-2.5
-1.3
-8.8
2.6
6.2
3.8
-3.4
9.5
-7.6
-1.9
-0.0999999999999996
-2.2
5
3.2
2.9
-2.5
2.9
9.5
-4.6
6.2
0.799999999999999
-1.3
-7.3
-7.9

Of course, a simple solution would be to just #define DBL_DIG 14, but I feel that is wasting accuracy. Why is this happening, and how do I prevent it from happening? This is not a duplicate of Is floating point math broken? since I am asking about DBL_DIG and how to find the minimum accuracy at which no error occurs.


Solution

  • generate a random number between -10 and 10 with step 0.3
    I would like the program to work with arbitrary values for the bounds and step size.
    Why is this happening ....

    The source of trouble is assuming that typical real numbers (such as the string "0.3") can be encoded exactly as a double.

    A double can encode about 2^64 different values exactly. 0.3 is not one of them.

    Instead, the nearest double is used. That stored value and its two nearest representable neighbors are listed below:

    0.29999999999999993338661852249060757458209991455078125
    0.299999999999999988897769753748434595763683319091796875  (best 0.3)
    0.3000000000000000444089209850062616169452667236328125
    

    So OP's code is in effect stepping between -10 and 10 with a step of 0.29999999999999998889..., and printing "-0.0999999999999996" and "0.799999999999999" is more correct than printing "-0.1" and "0.8" would be.
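
    The three values above can be reproduced by asking printf() for far more digits than the double meaningfully carries; a minimal sketch (an illustration, not part of the original answer) that uses nextafter() from <math.h> to show the stored value and its two representable neighbors:

      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
        double best = 0.3;                        // the conversion stores the nearest double
        printf("%.55f\n", nextafter(best, 0.0));  // representable neighbor just below
        printf("%.55f\n", best);                  // value actually stored for "0.3"
        printf("%.55f\n", nextafter(best, 1.0));  // representable neighbor just above
      }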


    .... how do I prevent this happening?

    1. Print with a more limited precision.

      // Reduce the output precision by about the number of decimal digits
      // corresponding to the square root of the number of steps.
      // Needs <math.h> for sqrt() and lround(), in addition to the question's includes.
      #define LOG10_2 0.30102999566398119521373889472449
      int digits_less = lround(sqrt(20 / 0.3) * LOG10_2);
      for (int i = 0; i < 100; i++) {
        printf("%.*e\n", DBL_DIG - digits_less,
            -10 + (rand() % (unsigned long) (20 / 0.3)) * 0.3);
      }
      
      9.5000000000000e+00
      -3.7000000000000e+00
      8.6000000000000e+00
      5.9000000000000e+00
      ...
      -1.0000000000000e-01
      8.0000000000000e-01
      

    OP's code is not really doing "steps", as that wording hints at a loop that repeatedly adds 0.3. The digits_less above is based on such repetitive "steps"; otherwise OP's single multiply-and-add warrants only about 1 decimal digit of reduction. The best reduction in precision depends on estimating the potential cumulative error of all calculations: the "0.3" conversion to double 0.3 (1/2 bit), the division (1/2 bit), the multiplication (1/2 bit) and the addition (more complicated). The two approaches are compared in the sketch below.
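
    The sketch (an illustration, not part of the original answer) measures the worst-case error of OP's single multiply-and-add against a loop that truly "steps" by repeatedly adding 0.3, using (-100 + 3*i) / 10.0 as the reference since the integer numerator is exact and the division is correctly rounded:

      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
        double stepped = -10.0;                   // value built by repeated addition
        double worst_mul = 0.0, worst_step = 0.0;
        for (int i = 0; i <= 66; i++) {
          double ideal  = (-100 + 3 * i) / 10.0;  // best possible double for the grid point
          double by_mul = -10 + i * 0.3;          // OP's formulation: one multiply, one add
          worst_mul  = fmax(worst_mul,  fabs(by_mul  - ideal));
          worst_step = fmax(worst_step, fabs(stepped - ideal));
          stepped += 0.3;                         // error accumulates step by step
        }
        printf("worst multiply error: %g\n", worst_mul);
        printf("worst stepping error: %g\n", worst_step);
      }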

    2. Wait for the next version of C, which may support decimal floating point.
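
    A minimal sketch of that direction, assuming a compiler that already provides the _Decimal64 extension (e.g. GCC on x86_64); standard printf() typically has no conversion for decimal types yet, so the sketch only checks exactness. In decimal floating point the literal 0.3dd is represented exactly, so repeated stepping never drifts:

      #include <stdio.h>

      int main(void)
      {
        _Decimal64 v = -10.0dd;
        for (int i = 0; i < 66; i++)
          v += 0.3dd;             // every intermediate value is exact in decimal
        printf("landed exactly on 9.8: %s\n", v == 9.8dd ? "yes" : "no");
      }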