I understand that floating-point arithmetic as performed on modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this, e.g.:
static void Main(string[] args)
{
    double x = 0, y = 0;
    x += 20013.8;
    x += 20012.7;
    y += 10016.4;
    y += 30010.1;
    Console.WriteLine("Result: " + x + " " + y + " " + (x == y));
    Console.Write("Press any key to continue . . . ");
    Console.ReadKey(true);
}
However, in this case, x and y are equal in the end.
Is it possible to demonstrate this inconsistency of floating-point arithmetic with a program of similar complexity, and without using any really crazy numbers? If possible, I would like to avoid exact values that go more than a few places beyond the decimal point.
double x = (0.1 * 3) / 3;
Console.WriteLine("x: {0}", x); // prints "x: 0.1"
Console.WriteLine("x == 0.1: {0}", x == 0.1); // prints "x == 0.1: False"
Remark: don't conclude from this that floating-point arithmetic is unreliable in .NET; it is simply not the same thing as exact real arithmetic.