Let's say we have this kind of loop (pseudocode):
double d = 0.0
for i in 1..10 {
d = d + 0.1
print(d)
}
In C with printf("%f", d)
I get this:
0.100000
0.200000
0.300000
...
1.000000
In C++ with cout << d
I get this:
0.1
0.2
...
1
In Java with System.out.println(d)
I get this:
0.1
0.2
0.3 (in debug mode, I see 0.30000000000000004 there, but it prints 0.3)
...
0.7
0.799999999999999
0.899999999999999
0.999999999999999
So my question is this:
Why does Java print this simple code so badly, while C prints it correctly?
Since you are not comparing the same output operations, you get different results. The behaviour of double is exactly the same across these languages: each uses the hardware's floating-point unit to perform the arithmetic. The only difference is the method you have chosen to display the result.
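To see that it really is the same double underneath, you can print one value three different ways. A minimal sketch (class and names are mine; Locale.ROOT is used only to make the decimal separator predictable):

```java
import java.util.Locale;

public class SameBitsDifferentFormat {
    public static void main(String[] args) {
        // Accumulate the same rounding error as in the question.
        double d = 0;
        for (int i = 0; i < 8; i++) d += 0.1;

        // One double, three renderings:
        System.out.println(d);                         // 0.7999999999999999 (shortest round-trip string)
        System.out.printf(Locale.ROOT, "%f%n", d);     // 0.800000 (C-style, rounded to 6 decimal places)
        System.out.printf(Locale.ROOT, "%.17f%n", d);  // 0.79999999999999993 (enough digits to expose the error)
    }
}
```

The bits of d never change between the three lines; only the formatting rules do.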
In Java, if you run
double d = 0;
for (int i = 1; i <= 10; i++)
System.out.printf("%f%n", d += 0.1);
it prints
0.100000
0.200000
0.300000
0.400000
0.500000
0.600000
0.700000
0.800000
0.900000
1.000000
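By contrast, println(double) goes through Double.toString, which prints the shortest decimal string that still parses back to the exact same double; that is why the accumulated error becomes visible there. A small sketch (class name is mine):

```java
public class ShortestRoundTrip {
    public static void main(String[] args) {
        double d = 0.1 + 0.2;
        // Double.toString picks the shortest decimal that round-trips:
        System.out.println(d); // 0.30000000000000004
        // Parsing that string back gives exactly the same double:
        System.out.println(Double.parseDouble("0.30000000000000004") == d); // true
    }
}
```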
The %f format rounds to six decimal places, just as C's printf does, so the accumulated error is hidden. If you run
import java.math.BigDecimal;

double d = 0;
for (int i = 0; i < 8; i++) d += 0.1;
System.out.println("Summing 0.1, 8 times " + new BigDecimal(d));
System.out.println("How 0.8 is represented " + new BigDecimal(0.8));
you get
Summing 0.1, 8 times 0.79999999999999993338661852249060757458209991455078125
How 0.8 is represented 0.8000000000000000444089209850062616169452667236328125
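Note that new BigDecimal(double) preserves the binary value exactly, error and all; if you want the decimal you actually wrote, construct it from a String. A sketch of the difference (class name is mine):

```java
import java.math.BigDecimal;

public class ExactDecimal {
    public static void main(String[] args) {
        // The double constructor exposes the binary representation error:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // The String constructor gives the exact decimal 0.1:
        System.out.println(new BigDecimal("0.1")); // 0.1

        // Summing exact decimals avoids the drift entirely:
        BigDecimal d = BigDecimal.ZERO;
        for (int i = 0; i < 8; i++) d = d.add(new BigDecimal("0.1"));
        System.out.println(d); // 0.8
    }
}
```

This is why 0.1 summed 8 times and the literal 0.8 are two different doubles: each decimal is independently rounded to the nearest representable binary fraction.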