I learned that 0.1 + 0.2 != 0.3 because neither 0.1 nor 0.2 can be stored exactly in binary floating point. Why, then, does the following Python code print True?
slope1 = 1.0/100.0
slope2 = 0.1/10.0
print(slope1 == slope2)
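Indeed, checking the stored bits directly confirms it (float.hex prints the exact hexadecimal form of a double):

slope1 = 1.0 / 100.0
slope2 = 0.1 / 10.0
print(slope1.hex())  # prints the same hex string as the line below,
print(slope2.hex())  # so both expressions produced the identical double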
The same happens in C:
#include <stdio.h>

int main(void) {
    double slope1 = 1.0 / 100.0;
    double slope2 = 0.1 / 10.0;
    printf("%d\n", slope1 == slope2);  /* prints 1 */
    return 0;
}
What did I miss or misunderstand?
It's just luck: here the two divisions happen to round to the same double, but they don't always. For example:
>>> 0.0001 / 100
1e-06
>>> 0.00001 / 10
1.0000000000000002e-06
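You can see why with the decimal module: Decimal(float) converts exactly, so it displays the true binary value each division produced. A minimal sketch:

from decimal import Decimal

print(Decimal(0.0001 / 100))   # the exact double the first division produced
print(Decimal(0.00001 / 10))   # the second rounds to a slightly larger double

The two quotients land on neighbouring doubles, so == reports them unequal.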
Here are two more similar cases, one lucky and one unlucky:
>>> 6.0 / 100
0.06
>>> 0.6 / 10
0.06
>>> 7.0 / 100
0.07
>>> 0.7 / 10
0.06999999999999999
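The same sketch shows what happened in each case:

from decimal import Decimal

print(Decimal(6.0 / 100))   # both divisions round to the exact same double,
print(Decimal(0.6 / 10))    # so 6.0/100 == 0.6/10 is True
print(Decimal(7.0 / 100))   # these two round to different doubles,
print(Decimal(0.7 / 10))    # so 7.0/100 == 0.7/10 is False

Whether two routes to the "same" decimal value agree depends entirely on how each intermediate result rounds. If you need to compare such results, use a tolerance (for example math.isclose) rather than ==.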