I am working on code where I need to check whether a variable that can take a double value has actually taken on an integer value. I consider a double variable to have taken on an integer value if it is within a tolerance of an integer; this tolerance is 1e-5.
The following is my code:
#define SMALL 1e-5
//A difference of at least SMALL is considered nonzero; strictly less than SMALL is treated as zero.
int check_if_integer(double arg){
    //returns 1 if arg is close enough to an integer
    //returns 0 otherwise
    if(arg - (int)arg >= SMALL){
        if(arg + SMALL > (int)(arg+1.0)){
            //Code should have reached this point, since
            //arg + SMALL is 16.00001
            //while (int)(arg+1.0) should be 16.
            //But the code seems to evaluate (int)(arg+1.0) to 17.
            return(1);
        }
    }
    else{
        return(1);
    }
    return(0);
}
int main(void){
    int a = check_if_integer(15.999999999999998);
}
Unfortunately, when passed the argument 15.999999999999998, the function returns 0. That is, it deems the argument fractional, when it should have returned 1, indicating that the argument is "close enough" to 16.
I am using VS2010 professional.
Any pointers will be greatly appreciated!
Yes, floating point is hard. Just because 15.999999999999998 < 16.0, that doesn't mean 15.999999999999998 + 1.0 < 17.0. Suppose you have a decimal floating-point type with three digits of precision. What result do you get for 9.99 + 1.0 in that type's precision? The mathematical result would be 10.99, and rounding to that type's precision gives 11.0. Binary floating-point has the same issue.
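To see this with the actual value from the question, here is a small standalone snippet (not part of the original post, just an illustration) that prints the intermediate results:

#include <stdio.h>

int main(void){
    double arg = 15.999999999999998;

    /* arg is a double just below 16.0 */
    printf("arg             = %.17g\n", arg);           /* prints 15.999999999999998 */

    /* the exact sum is not representable and rounds up to 17.0,
       so the cast then truncates 17.0 to 17 */
    printf("arg + 1.0       = %.17g\n", arg + 1.0);      /* prints 17 */
    printf("(int)(arg+1.0)  = %d\n", (int)(arg + 1.0));  /* prints 17 */

    return 0;
}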
You can, in this particular case, change (int)(arg+1.0) to (int)arg + 1. (int)arg is accurate, and so is integer addition.
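Applying that suggestion to the original function, one possible corrected version (a sketch of the answer's fix, with the rest of the logic left unchanged) looks like this:

#define SMALL 1e-5

int check_if_integer(double arg){
    //returns 1 if arg is within SMALL of an integer, 0 otherwise
    if(arg - (int)arg >= SMALL){
        //compare against the next integer up, computed exactly as (int)arg + 1
        if(arg + SMALL > (int)arg + 1){
            return 1;
        }
    }
    else{
        return 1;
    }
    return 0;
}

With this change, check_if_integer(15.999999999999998) takes the first branch (its fractional part is well above SMALL), compares arg + SMALL, which is about 16.00001, against the exactly computed integer 16, and returns 1 as intended.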