Tags: c++, double, precision, cout, floating-point-precision

Why doesn't cout's default precision affect the evaluated result?


Here is what I am thinking:

    #include <iostream>
    #include <iomanip>

    int main ()
    {
        double x = 10 - 9.99;
        std::cout << x << std::endl;
        std::cout << std::setprecision (16);
        std::cout << x;
        return 0;
    }

The above program prints 0.01 for x before setprecision(), and a long number that is not exactly equal to 0.01 for x after setprecision(). cout has a default precision of 16 when printing floating-point numbers on my machine. If the precision is 16, the value above should print as something like 0.0100000000000000, but it stays 0.01; only when I call setprecision(16) does the program print a long number containing 16 digits. So my question is: why doesn't cout print all the digits according to the type's default precision? Why do we need to force cout (by using setprecision()) to print all the digits?


Solution

  • Why doesn't cout print all the digits according to the type's default precision?

    If you use std::fixed as well as setprecision, it will display exactly as many digits after the decimal point as the precision asks for, keeping trailing zeros instead of trimming them (see the sketch just below).
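
    For example, a minimal sketch of the difference (the exact digits printed for x depend on the compiler and platform, but the shape of the output does not):

    #include <iostream>
    #include <iomanip>

    int main ()
    {
        double x = 10 - 9.99;
        std::cout << std::fixed << std::setprecision (16);
        std::cout << 0.01 << '\n';  // 0.0100000000000000 - trailing zeros are kept, not trimmed
        std::cout << x << '\n';     // something like 0.0099999999999998 on a typical IEEE-754 system
    }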

    As for why the rounding accounts for the output...

    Let's get your code to print a couple other things too:

    #include <iostream>
    #include <iomanip>
    
    int main ()
    {
        double x = 10-9.99;
        std::cout << x << '\n';
        std::cout << std::setprecision (16);
        std::cout << x << '\n';
        std::cout << 0.01 << '\n';
        std::cout << std::setprecision (18);
        std::cout << x << '\n';
        std::cout << 0.01 << '\n';
        std::cout << x - 0.01 << '\n';
    }
    

    And the output (on one specific compiler/system):

    0.01  // x default
    0.009999999999999787  // x after setprecision(16)
    0.01  // 0.01 after setprecision(16)
    0.00999999999999978684   // x after setprecision(18)
    0.0100000000000000002    // 0.01 after setprecision(18)
    -2.13370987545147273e-16  // x - 0.01
    

    If we look at how 0.01 is directly encoded at 18-digit precision...

    0.0100000000000000002
       123456789012345678  // counting digits
    

    ...we can see clearly why it comes out as just "0.01" at any precision up to 17: the extra significant digits round to zero and the default format trims the trailing zeros (the precision sweep below demonstrates this).
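
    As an illustration, here is a small precision sweep (a sketch; the output assumes a typical IEEE-754 double, where the first nonzero "noise" digit of 0.01 appears at the 18th significant digit):

    #include <iostream>
    #include <iomanip>

    int main ()
    {
        // Print the literal 0.01 at every precision from 1 to 18.
        // Up to precision 17 the extra significant digits round to zero and the
        // default (defaultfloat) format trims the trailing zeros, so the output
        // is just "0.01"; at precision 18 the representation error becomes visible.
        for (int p = 1; p <= 18; ++p)
            std::cout << std::setprecision (p) << 0.01 << '\n';
    }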

    You can also see clearly that x holds a different value from the one produced by writing the literal 0.01. That is allowed because x is the result of a calculation, and it depends on the double (or CPU-register) approximation of 9.99; either or both of those introduced the discrepancy. That error is enough to prevent rounding back to "0.01" at precision 16.
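
    A quick way to convince yourself that the two values really are different doubles (a sketch; hexfloat output requires C++11, and the exact bit patterns may vary with how the compiler evaluates 10 - 9.99):

    #include <iostream>
    #include <iomanip>

    int main ()
    {
        double computed = 10 - 9.99;  // result of the calculation
        double literal  = 0.01;       // nearest double to 0.01
        std::cout << std::boolalpha << (computed == literal) << '\n';      // false on a typical IEEE-754 system
        std::cout << std::hexfloat << computed << ' ' << literal << '\n';  // two distinct bit patterns
    }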

    Unfortunately, this kind of thing is normal when handling doubles and floats.