c++, floating-point, mingw, limits

C++ 32-bit vs 64-bit floating-point limits


Given the code segment below, I just want to know:

  • Why is the maximum value of long double smaller in 64-bit than in 32-bit?
  • Why can't the 64-bit version produce as many digits as the 32-bit version to fill the requested precision of 40?
  • The values of LDBL_MIN and LDBL_MAX appear to be equal; is that a bug?

I have looked into the float.h files on my machine but cannot find explicit definitions of these macro constants.

Testing Code (Platform = Win7-64bit)

#include <cfloat>
#include <iomanip>
#include <iostream>
using namespace std;

int main() {
    cout << "FLT_MAX   =" << setprecision(40) << FLT_MAX  << endl;
    cout << "DBL_MAX   =" << setprecision(40) << DBL_MAX  << endl;
    cout << "LDBL_MAX  =" << setprecision(40) << LDBL_MAX << endl;
    cout << "FLT_MIN   =" << setprecision(40) << FLT_MIN  << endl;
    cout << "DBL_MIN   =" << setprecision(40) << DBL_MIN  << endl;
    cout << "LDBL_MIN  =" << setprecision(40) << LDBL_MIN << endl;
}

32-bit outcome (MinGW-20120426)

FLT_MAX  =340282346638528859811704183484516925440
DBL_MAX  =1.797693134862315708145274237317043567981e+308
LDBL_MAX =1.189731495357231765021263853030970205169e+4932
FLT_MIN  =1.175494350822287507968736537222245677819e-038
DBL_MIN  =2.225073858507201383090232717332404064219e-308
LDBL_MIN =3.362103143112093506262677817321752602598e-4932

64-bit outcome (MinGW64-TDM 4.6)

FLT_MAX  =340282346638528860000000000000000000000
DBL_MAX  =1.7976931348623157e+308
LDBL_MAX =1.132619801677474e-317
FLT_MIN  =1.1754943508222875e-038
DBL_MIN  =2.2250738585072014e-308
LDBL_MIN =1.132619801677474e-317

Thanks.

[Edit]: With the latest MinGW64-TDM 4.7.1, the LDBL_MAX and LDBL_MIN "bugs" appear to be fixed.


Solution

  • LDBL_MAX =1.132619801677474e-317 sounds like a bug somewhere. It's a requirement of the standard that every value representable as a double can also be represented as a long double, so it's not permissible for LDBL_MAX < DBL_MAX. Given that you haven't shown your real testing code, I personally would check that before blaming the compiler.

    If there really is a (non-bug) difference in long double between the two, then the basis of that difference will be that your 32-bit compiler uses the older x87 floating-point operations, which have 80-bit precision and hence allow for an 80-bit long double.

    Your 64-bit compiler uses the newer 64-bit (SSE2) floating-point operations in x64. There is no 80-bit precision there, and it doesn't bother switching to x87 instructions to implement a bigger long double.

    There's probably more complication to it than that. For example, not all x86 compilers necessarily have an 80-bit long double; how they make that decision depends on various things, possibly including the fact that SSE2 has only 64-bit floating-point ops. But the possibilities are that long double is either the same size as double or bigger.
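
    If you want to check which case your toolchain is in, a minimal sketch like the one below (not part of the original test code) prints the size and mantissa width of long double and verifies the LDBL_MAX >= DBL_MAX requirement:

        #include <cfloat>
        #include <iostream>

        int main() {
            // Storage size: typically 8 when long double is just a double,
            // 12 or 16 when the 80-bit x87 extended format is used.
            std::cout << "sizeof(long double) = " << sizeof(long double) << '\n';
            // Mantissa bits: 53 means "same format as double", 64 means x87 extended.
            std::cout << "LDBL_MANT_DIG       = " << LDBL_MANT_DIG << '\n';
            // A conforming implementation must print "true" here.
            std::cout << std::boolalpha
                      << "LDBL_MAX >= DBL_MAX : " << (LDBL_MAX >= DBL_MAX) << '\n';
        }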

    Why can't the 64-bit version produce as many digits as the 32-bit version to fill the requested precision of 40?

    A double only has about 15 decimal digits of precision. Digits beyond that are sometimes informative, but usually misleading.

    I can't remember exactly what the standard says about setprecision, but assuming the implementation is allowed to draw a line where it stops generating digits, the precision of a double is a reasonable place to draw it. As for why one implementation decided to actually do it and the other didn't, I don't know. Since they're different distributions, they might be using completely different standard library implementations.
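
    To make that concrete, here is a small sketch (assuming a C++11 standard library, so that max_digits10 is available) showing what std::numeric_limits reports for double and what happens when you ask for more digits than that:

        #include <iomanip>
        #include <iostream>
        #include <limits>

        int main() {
            // 15: decimal digits that a double can hold without change.
            std::cout << "digits10     = " << std::numeric_limits<double>::digits10 << '\n';
            // 17: decimal digits needed to serialize a double and recover it exactly.
            std::cout << "max_digits10 = " << std::numeric_limits<double>::max_digits10 << '\n';
            // Requesting 40 digits cannot add precision; anything beyond
            // max_digits10 is an artifact of how the library formats the value.
            std::cout << std::setprecision(40) << 1.0 / 3.0 << '\n';
        }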

    The same "spurious precision" is why you see 340282346638528859811704183484516925440 for FLT_MAX in one case, but 340282346638528860000000000000000000000 in the other. One compiler (or rather, one library implementation) has gone to the trouble to calculate lots of digits. The other has given up early and rounded.
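
    As a quick check (a sketch, assuming a C++11 <cstdlib> that provides std::strtof; not from the original answer), you can parse both printed strings back into floats and confirm they denote exactly the same value, so the extra digits carry no additional information:

        #include <cfloat>
        #include <cstdlib>
        #include <iostream>

        int main() {
            // Both decimal forms of FLT_MAX round to the same single-precision value.
            float full    = std::strtof("340282346638528859811704183484516925440", 0);
            float rounded = std::strtof("340282346638528860000000000000000000000", 0);
            std::cout << std::boolalpha
                      << (full == rounded) << ' '    // true
                      << (full == FLT_MAX) << '\n';  // true
        }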