While learning about floating point arithmetic, I came across a claim, which I quote: "a float/double can't store 0.1 precisely".
There is a question on SO making the same point, and the accepted answer is also very convincing. However, I thought of trying it out on my own computer, so I wrote the following program:
double a = 0.1;
if (a == 0.1)
{
    Console.WriteLine("True");
}
else
{
    Console.WriteLine("False");
}
Console.Read();
and the console printed True. This was shocking, as I had already been convinced of the opposite. Can anyone tell me what's going on with floating point arithmetic? Or did I just get a computer that stores numeric values in base 10?
Your program is only checking whether the compiler is approximating 0.1 in the same way twice, which it does.
The value of a isn't 0.1, and you're not checking whether it is 0.1. You're checking whether "the closest representable value to 0.1" is equal to "the closest representable value to 0.1".
Your code is effectively compiled to this:
double a = 0.1000000000000000055511151231257827021181583404541015625;
if (a == 0.1000000000000000055511151231257827021181583404541015625)
{
    Console.WriteLine("True");
}
else
{
    Console.WriteLine("False");
}
... because 0.1000000000000000055511151231257827021181583404541015625 is the double value that's closest to 0.1.
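If you want to convince yourself of that, here's a minimal sketch (not part of the original code; the class name ShowExactValue is just for illustration) that compares the bit patterns of the two literals and prints the stored value with the round-trip "G17" format:

using System;

class ShowExactValue
{
    static void Main()
    {
        double a = 0.1;

        // Both literals are converted to the same 64-bit pattern at compile time,
        // so their bit representations are identical and this prints True.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(a) ==
                          BitConverter.DoubleToInt64Bits(0.1000000000000000055511151231257827021181583404541015625));

        // "G17" prints enough digits to round-trip the stored double -
        // enough to see that it isn't exactly 0.1.
        Console.WriteLine(a.ToString("G17"));
    }
}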
There are still times you can see some very odd effects. While double is defined to be a 64-bit IEEE-754 number, the C# specification allows intermediate representations to use higher precision. That means sometimes the simple act of assigning a value to a field can change results - or even casting a value which is already double to double.
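As a hedged illustration of the kind of code where that can matter (the class and field names are my own, and on modern SSE-based JITs both comparisons will typically print True; the effect is mainly visible on older x86 JITs that keep intermediates in 80-bit x87 registers):

using System;

class IntermediatePrecision
{
    // Storing into a field forces the value down to a genuine 64-bit double.
    static double stored;

    static void Main()
    {
        double x = 0.1;

        stored = x * 3;

        // If the right-hand side is evaluated at higher precision in a register
        // while 'stored' has been truncated to 64 bits, this could print False.
        Console.WriteLine(stored == x * 3);

        // An explicit cast to double is allowed precisely so you can force the
        // intermediate result to be narrowed before comparing.
        Console.WriteLine(stored == (double)(x * 3));
    }
}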
In the question you refer to, we don't really know how the original value is obtained. The question states:
"I've a double variable called x. In the code, x gets assigned a value of 0.1"
We don't know exactly how it's assigned a value of 0.1, and that detail is important. We know the value won't be exactly 0.1, so what kind of approximation has been involved? For example, consider this code:
using System;

class Program
{
    static void Main()
    {
        SubtractAndCompare(0.3, 0.2);
    }

    static void SubtractAndCompare(double a, double b)
    {
        double x = a - b;
        Console.WriteLine(x == 0.1);
    }
}
The value of x will be roughly 0.1, but it's not the exact same approximation as "the closest double value to 0.1". In this case it happens to be slightly less than 0.1 - the value is exactly 0.09999999999999997779553950749686919152736663818359375, which isn't equal to 0.1000000000000000055511151231257827021181583404541015625... so the comparison prints False.