I am confused about what max_digits10 represents. According to its documentation, it is 0 for all integral types. The formula for max_digits10 for floating-point types looks similar to the formula for digits10 for integral types.
To put it simply,

digits10 is the number of decimal digits guaranteed to survive a text → float → text round-trip.

max_digits10 is the number of decimal digits needed to guarantee a correct float → text → float round-trip.

There will be exceptions to both, but these values give the minimum guarantee. Read the original proposal on max_digits10
for a clear example, Prof. W. Kahan's words and further details. Most C++ implementations use IEEE 754 for their floating-point types. For an IEEE 754 float, digits10 is 6 and max_digits10 is 9; for a double they are 15 and 17. Note that neither of these numbers should be confused with the actual decimal precision of floating-point numbers.
digits10
#include <cstdlib>
#include <iomanip>
#include <iostream>

int main() {
    char const *s1 = "8.589973e9";
    char const *s2 = "0.100000001490116119384765625";
    float const f1 = std::strtof(s1, nullptr);
    float const f2 = std::strtof(s2, nullptr);
    std::cout << "'" << s1 << "'" << '\t' << std::scientific << f1 << '\n';
    std::cout << "'" << s2 << "'" << '\t' << std::fixed << std::setprecision(27) << f2 << '\n';
}
Prints
'8.589973e9' 8.589974e+009
'0.100000001490116119384765625' 0.100000001490116119384765625
All digits up to the 6th significant digit were preserved, while the 7th digit of the first number didn't survive. All 27 digits of the second number survived, but that is an exception: the string happens to be the exact decimal expansion of the float closest to 0.1. In general, most numbers start to differ beyond the 7th significant digit, while all numbers remain identical within the first 6.
In summary, digits10 gives the number of significant digits you can count on in a given float being the same as in the original decimal number from which it was created, i.e. the digits that survive the conversion into a float.
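As a sketch of that guarantee (the sample strings below are arbitrary 6-digit picks), any decimal with at most digits10 significant digits comes back unchanged:

#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main() {
    for (char const *s : {"3.14159", "2.71828", "1.41421"}) {
        float const f = std::strtof(s, nullptr);           // text -> float
        std::ostringstream oss;
        oss << std::setprecision(std::numeric_limits<float>::digits10) << f;
        std::cout << s << " -> " << oss.str() << '\n';     // float -> text, same 6 digits
    }
}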
max_digits10
#include <cmath>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

// float -> string -> float round-trip, with p digits after the decimal point
void f_s_f(float &f, int p) {
    std::ostringstream oss;
    oss << std::fixed << std::setprecision(p) << f;
    f = std::strtof(oss.str().c_str(), nullptr);
}

int main() {
    float f3 = 3.145900f;
    float f4 = std::nextafter(f3, 3.2f);   // the very next representable float
    std::cout << std::hexfloat << std::showbase << f3 << '\t' << f4 << '\n';
    f_s_f(f3, std::numeric_limits<float>::max_digits10);
    f_s_f(f4, std::numeric_limits<float>::max_digits10);
    std::cout << f3 << '\t' << f4 << '\n';
    f_s_f(f3, 6);
    f_s_f(f4, 6);
    std::cout << f3 << '\t' << f4 << '\n';
}
Prints
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdap+1
Here two different floats, when printed with max_digits10 digits of precision, give different strings, and those strings, when read back, yield the original floats they came from. When printed with less precision, they give the same output due to rounding, and hence read back into the same float, when in reality they came from different values.
In summary, at least max_digits10 digits are required to disambiguate two floats in their decimal form, so that when converted back to a binary float we get the original bits again, and not those of the value just before or after it due to rounding errors.
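A practical consequence, as a minimal sketch (the helper name to_roundtrip_string is my own): when serializing a floating-point value as text, print it with max_digits10 significant digits and parsing will recover the exact bits:

#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

// Hypothetical helper: render a double so that parsing recovers identical bits.
std::string to_roundtrip_string(double d) {
    std::ostringstream oss;
    oss << std::setprecision(std::numeric_limits<double>::max_digits10) << d;
    return oss.str();
}

int main() {
    double const d = 0.1;
    std::string const s = to_roundtrip_string(d);            // "0.10000000000000001"
    std::cout << s << '\t' << (std::stod(s) == d) << '\n';   // prints the string and 1 (bit-exact)
}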