I found this out when writing a test program, and I'm wondering where the problem is located. Is it in the C libraries (printf), the clang compiler, or the Mac's processor? I compiled the program using clang on a Mac. Here is the short program:
#include <stdio.h>

int main(void) {
    int num = 12;
    double temp = 0.0;
    temp = num / 10.0;
    printf("%lf\n", temp);
    temp = temp - 1.0;
    printf("%lf\n", temp);
    temp = temp * 10.0;
    printf("%lf\n", temp);
    int new_num = temp;
    printf("%d\n", new_num);
    int cast_num = (int)temp;
    printf("%d\n", cast_num);
    return 0;
}
This program works for all numbers except those ending in 12. When num ends in 12, new_num and cast_num come out as 1 instead of the expected 2. If you set num to 22, new_num and cast_num are 2 as they should be. It also works as expected when num is 10, 11, 13, 14, and so on; it only fails when num ends in 12. Likewise, set num = 212 and replace the subtraction line (temp = temp - 1.0;) with temp = temp - 21.0;, and the incorrect result appears. Change num to 213 and it works as expected.
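To be concrete, the 212 variant I am describing would look like this (only the initial value and the subtraction change):

#include <stdio.h>

int main(void) {
    int num = 212;
    double temp = num / 10.0;   /* should be 21.2 */
    temp = temp - 21.0;         /* should be 0.2 */
    printf("%lf\n", temp);
    temp = temp * 10.0;         /* should be 2.0 */
    printf("%lf\n", temp);
    int cast_num = (int)temp;   /* comes out as 1 instead of the expected 2 */
    printf("%d\n", cast_num);
    return 0;
}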
Does anyone know the nature of this problem? It's odd that it only shows up with numbers ending in 12.
Any thoughts?
Apple LLVM version 10.0.0 (clang-1000.10.44.4) Target: x86_64-apple-darwin18.2.0 Thread model: posix
This is because of floating-point rounding: a value that prints as 2.000000 is not always exactly 2!
Try the following:
temp = temp * 10.0;
printf("%.20lf\n", temp);
and you will probably observe that temp is slightly below 2, so the cast to int truncates it down to 1.
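As a self-contained sketch (the exact digits assume an ordinary IEEE-754 double, which is what clang on a Mac uses):

#include <stdio.h>

int main(void) {
    double temp = 12 / 10.0;    /* nearest double to 1.2, which is slightly below 1.2 */
    printf("%.20f\n", temp);    /* about 1.19999999999999995559 */
    temp = temp - 1.0;
    printf("%.20f\n", temp);    /* about 0.19999999999999995559 */
    temp = temp * 10.0;
    printf("%.20f\n", temp);    /* about 1.99999999999999955591 */
    printf("%d\n", (int)temp);  /* truncation toward zero gives 1 */
    return 0;
}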
The problem is that 1.2 (that is, 12/10.0) cannot be represented exactly as a floating-point number, so the very first division already yields not exactly 1.2 but a value slightly below it. That error propagates through the subsequent operations all the way to your cast. To fix this, round mathematically correctly, for example with round() from <math.h>, before casting to int.
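A minimal sketch of that fix (round() is declared in <math.h>; on some systems you also have to link with -lm):

#include <math.h>
#include <stdio.h>

int main(void) {
    int num = 12;
    double temp = (num / 10.0 - 1.0) * 10.0;   /* ends up slightly below 2.0 */

    int truncated = (int)temp;          /* truncates toward zero -> 1 */
    int rounded   = (int)round(temp);   /* rounds to nearest     -> 2 */
    printf("%d %d\n", truncated, rounded);
    return 0;
}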