I want to convert a double to a string with a given number of decimal places in C++ as well as in C#, and I want the results of those conversions to be the same in both languages. In particular, the C++ conversion should match the one in C#.
As an example I want to convert a double value of 3.15 to a string with one decimal place.
In C# I can do the following:
// C#
double d = 3.15;
string str0 = d.ToString("0.0");
Then the resulting variable str0 will contain "3.2". This is the expected rounding result.
In C++ there are several ways for such a conversion:
// C++
double d = 3.15;
char str1[256];
sprintf(str1, "%.1f", d);
std::stringstream stream;
stream << std::fixed << std::setprecision(1) << d;
std::string str2 = stream.str();
std::string str3 = std::format("{:.1f}", d); // C++20
Then the resulting variables str1, str2, str3 will contain "3.1". The rounding differs from the one in C#.
I am familiar with the concept of floating point representation of numbers. I see that 3.15 is internally represented by a number close to it (3.1499999999999999). I understand why C++ string conversion results are equal to "3.1".
Both languages implement the IEEE 754 standard, so I assume C#'s internal floating point representation equals the one in C++. As validation for this assumption I can do the following in C#:
// C#
double d = 3.15;
var str_g20 = d.ToString("G20");
Then str_g20 will contain "3.1499999999999999".
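The same check can be made from the C++ side. A short sketch (the helper name is mine, not a standard API) prints the double with 17 significant digits, which is enough to uniquely identify any binary64 value:

```cpp
#include <cstdio>
#include <string>

// Format a double with 17 significant decimal digits -- enough to
// round-trip any IEEE-754 binary64 value unambiguously.
std::string seventeen_digits(double d) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%.17g", d);
    return buf;
}
```

Here seventeen_digits(3.15) yields "3.1499999999999999", the same digits the C# G20 format shows, confirming both languages hold the same binary64 value.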
My questions are:
Is there a way to achieve the same conversion rounding results in C++ as they are in C# using standard libraries? Did I miss some string formatting character flags from the documentation?
I can round the double value to the given number of decimal places manually before the conversion to string. E.g., I can do the following:
// C++
double d = 3.15;
d = round(d * 10.0) / 10.0;
std::string str4 = std::format("{:.1f}", d); // C++20
Then str4 will contain "3.2". This matches the C# result and works for the example above. But is this how the conversion is implemented in C# internally, and will rounding before conversion always produce the same results as C#?
Thank you in advance
The C# behavior is caused by a kludge I have written about previously. Essentially, d.ToString("0.0") does not round its operand d to a decimal numeral with one digit after the decimal place. Instead, it effectively performs two steps:

1. d is rounded to 15 significant decimal digits.
2. That 15-digit numeral is rounded to the number of digits requested by the format string.

So:

- With double d = 3.15;, the source text 3.15 is converted to the nearest value representable in the IEEE-754 binary64 format (used for double). That value is exactly 3.149999999999999911182158029987476766109466552734375.
- ToString rounds this to a number with 15 significant decimal digits, 3.15000000000000. This is part of its default standard internal processing of all numbers.
- Since "0.0" was requested, ToString rounds that prepared number to one digit after the decimal point. 3.15 is a tie between 3.1 and 3.2, and it rounds to the even digit, producing 3.2.

This behavior does not conform to an IEEE-754 rounding or conversion operation, and the C++ standard does not provide any such operation. To get this behavior in C++, you would have to write your own code or use third-party code.
Instead of trying to get the C# behavior in C++, you might try to get the C++ behavior in C#. That ought to be easier, because all you need is correctly rounded conversions in C#. Unfortunately, I am not sure C# provides that; I am not familiar with all the options of ToString and do not know whether any of them performs a correct single-step conversion instead of the kludged two-step conversion.
For some formatting, notably the conversions requested with G17 or R, ToString may use 17 digits instead of 15.
I speculate that the reasons for this behavior of ToString might be that Microsoft wrote a core subroutine to convert double values to 15 significant digits and much of the rest of ToString builds upon that, or that this double rounding is an attempt to conceal or cure some of the rounding issues that arise in floating-point arithmetic (or both). Such attempts may conceal errors in simple conversions between decimal character sequences and binary floating-point and back, but they fail when further arithmetic is performed, compounding the rounding errors.