python, c++, precision, decimal, format, arbitrary-precision

Floating point decimals in Python


I have a Python script in which I define a variable called dt and initialize it as dt=0.30. For a specific reason I have had to convert my Python script into a C++ program, and the two programs are written to produce exactly the same results. The problem is that the results start to deviate at some point. I believe the problem occurs because 0.30 does not have an exact representation in binary floating point: whereas in C++ dt=0.30 gives 0.300000000000, the Python output to my file gives dt=0.300000011921.

What I really need is a way to force dt=0.30 precisely in Python, so that I can compare my Python results with my C++ code, for I believe the difference between them is simply this small discrepancy, which over many iterations builds up to a substantial difference. I have therefore looked into decimal arithmetic in Python by calling from decimal import *, but then I cannot multiply dt by a floating point number (which I need to do for the calculations in the code). Does anyone know of a simple way of forcing dt to exactly 0.30 without using the 'decimal floating point' environment?


Solution

  • If your C compiler's printf runtime supports hex floating point output, here's a test you can try to see the differences between text conversion functions:

    Python:

    Python 2.7.10 (default, Aug 24 2015, 14:04:43)
    [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = float(0.3)
    >>> a.hex()
    '0x1.3333333333333p-2'
    

    C:

    #include <stdio.h>
    
    int main(void)
    {
      float x = 0.3;
    
      printf("%a\n", x);
      return 0;
    }
    

    Output from the compiled C program:

    0x1.333334p-2
    

    Note that the last digit differs by one unit in the last place (ULP): the C program stores the value in a single-precision float, which rounds the trailing bits up, while the value produced by Python is an IEEE 754 double with a full 53-bit significand.

    It's normal to have minor ULP differences. It just shows you that there are differences in how runtime libraries convert floating point numbers and deal with rounding.
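    You can reproduce the C program's single-precision value from Python itself. A minimal sketch using the standard struct module (the helper name to_float32 is just illustrative) rounds a Python double through a 32-bit float:

```python
import struct

def to_float32(x):
    # Pack the double into a 4-byte IEEE 754 float and unpack it again,
    # which rounds it exactly the way C's "float x = 0.3;" does.
    return struct.unpack('f', struct.pack('f', x))[0]

a = 0.3              # Python float: an IEEE 754 double
b = to_float32(0.3)  # the same value rounded to single precision

print(a.hex())  # 0x1.3333333333333p-2
print(b.hex())  # 0x1.3333340000000p-2, matching the C program's 0x1.333334p-2
```

    This also shows where the asker's 0.300000011921 comes from: it is the single-precision value of 0.30, printed with twelve decimal digits.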

    Alternatively, you could print hex floats in Python and use them as constants in your C source:

    #include <stdio.h>
    
    int main(void)
    {
      float y = 0x1.333334p-2;
    
      printf("%f\n", y);
      return 0;
    }
    

    Or:

    #include <stdio.h>
    
    int main(void)
    {
      float y = 0x1.333334p-2;
      double z = 0x1.3333333333333p-2;
    
      printf("y = %f\n", y);
      printf("z = %f\n", z);
    
      return 0;
    }
    

    Output:

    y = 0.300000
    z = 0.300000
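    As a cross-check from the Python side, float.fromhex() parses the same hex constants exactly, with no decimal-to-binary rounding involved. A small sketch mirroring the C program above:

```python
# Parse the same hex-float constants used in the C program.
y = float.fromhex('0x1.333334p-2')         # the single-precision value
z = float.fromhex('0x1.3333333333333p-2')  # the full double-precision value

print('y = %f' % y)  # y = 0.300000
print('z = %f' % z)  # z = 0.300000
```

    Both values print as 0.300000 at six decimal places, even though y and z are not equal; hex floats are a convenient way to move bit-exact constants between the two languages.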