I am unable to find the reason for the output of the following piece of code:
#include <stdio.h>

int main()
{
    float f = 0.1;
    if (f == 0.1)
        printf("True");
    else
        printf("False");
    return 0;
}
The output is False.
#include <stdio.h>

int main()
{
    float f = 0.1;
    if (f == (float)0.1)
        printf("True");
    else
        printf("False");
    return 0;
}
This version shows the expected output, True. What is the reason behind this difference?

Also, what is the reason for the behavior of this snippet?
#include <stdio.h>

int main()
{
    int n = 0, m = 0;
    if (n > 0)
        if (m > 0)
            printf("True");
    else
        printf("False");
    return 0;
}
The 0.1 literal is a double. You lose precision here: float f = 0.1;

You might say that we lose precision again during the comparison, so why isn't f == 0.1 true anyway? Because the float operand is converted to double, not the other way around: in C, the smaller type is always converted to the larger one before the comparison. Simplifying your example, we can say that (double)(float)0.1 != 0.1.
Possible solutions:
- Use double instead of float as the type of f.
- Use float literals: replace every 0.1 with 0.1f.
Better solution

Floating-point comparisons are problematic in general, and problems like this one can be avoided by defining your own comparison function with a relative tolerance (in C, unlike C++, the tolerance must be passed explicitly, since there are no default arguments):

#include <math.h>   /* fabs */
#include <float.h>  /* FLT_EPSILON */

int fp_equal(double a, double b, double eps)
{
    /* <= rather than < so that exactly equal values
       (including two zeros) compare equal */
    return fabs(a - b) <= fabs(eps * a);
}
The second part of the question:

Nothing is printed at all (neither True nor False), because an else always binds to the nearest preceding if, not to the one the indentation suggests. Since n > 0 is false, the whole inner statement, including the else, is skipped. You were misled by the formatting; the code is equivalent to:
#include <stdio.h>

int main()
{
    int n = 0, m = 0;
    if (n > 0) {
        if (m > 0) {
            printf("True");
        }
        else {
            printf("False");
        }
    }
}