If JavaScript's Number and C#'s double are specified the same way (IEEE 754), why are numbers with many significant digits handled differently?
var x = (long)1234123412341234123.0; // 1234123412341234176 - C#
var x = 1234123412341234123.0; // 1234123412341234200 - JavaScript
I am not concerned with the fact that IEEE 754 cannot represent the number 1234123412341234123. I am concerned with the fact that the two implementations do not act the same for numbers that cannot be represented with full precision.
This may be because IEEE 754 is underspecified, because one or both implementations are faulty, or because they implement different variants of IEEE 754.
This problem is not related to floating-point output formatting in C#; I'm outputting 64-bit integers. Consider the following:
long x = 1234123412341234123;
Console.WriteLine(x); // Prints 1234123412341234123
double y = 1234123412341234123;
x = Convert.ToInt64(y);
Console.WriteLine(x); // Prints 1234123412341234176
The same variable prints different strings because the values are different.
There are multiple problems here...
You are using long instead of double. You would need to write:
double x = 1234123412341234123.0;
or
var x = 1234123412341234123.0;
The other problem is that .NET rounds doubles to 15 significant digits before converting them to a string (so before, for example, printing them with Console.WriteLine()).
For example:
string str = x.ToString("f"); // 1234123412341230000.00
See for example https://stackoverflow.com/a/1658420/613130
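To make that 15-digit rounding explicit, you can request it yourself with the "G15" format specifier. This is a minimal sketch, not from the linked answer; the variable name is just for illustration:
using System;

double x = 1234123412341234123.0;

// Explicitly asking for 15 significant digits reproduces the rounded text.
Console.WriteLine(x.ToString("G15")); // 1.23412341234123E+18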
Internally the number still has 17 significant digits; it is only shown with 15.
You can see this if you use the round-trip format:
string str2 = x.ToString("r"); // 1.2341234123412342E+18