I am doing some floating-point operations and could not find an exact explanation for the behaviour I am seeing. Context:
Minimum working example:
#include <stdio.h>
#include <stdint.h>

int main()
{
    printf("Operations with commas \n");
    uint16_t a = (uint16_t)((24.2 - 0)/0.1);        /* 241 Incorrect */
    uint16_t b = (uint16_t)((24.2 - 0.0)/0.1);      /* 241 Incorrect */
    uint16_t c = (uint16_t)((float)(24.2 - 0)/0.1); /* 242 Correct   */
    uint16_t d = (uint16_t)(24.2/0.1);              /* 241 Incorrect */
    uint16_t e = (uint16_t)(242.0);                 /* 242 Correct   */
    printf("a %u \n", a);
    printf("b %u \n", b);
    printf("c %u \n", c);
    printf("d %u \n", d);
    printf("e %u \n", e);
    return 0;
}
I know that when using float the value cannot be expressed exactly: with IEEE-754, 24.2 as a single-precision float is actually stored as 24.200000762939453125, so the truncation in cases c and e is correct.
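For reference, the stored values can be inspected by printing them with extra precision. This is a minimal check (assuming IEEE-754 single and double precision; the exact trailing digits depend on the implementation's printf):

#include <stdio.h>

int main(void)
{
    /* The double constant 24.2 is stored slightly below 24.2,
       while the float constant 24.2f is stored slightly above it. */
    printf("double 24.2  = %.25f\n", 24.2);
    printf("float  24.2f = %.25f\n", (double)24.2f);
    return 0;
}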
Why do cases a, b and d yield the unexpected values? (I know that forcing a cast fixes the issue, but I want to understand the cause.)
According to the C standard, an unsuffixed floating constant has type double, not float, so all of the constants with a decimal point in your code are double, and everything else in those expressions gets promoted to double.
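If you want to confirm the types, a C11 _Generic selection can report them; this is a small sketch (assuming a C11 compiler), not part of the example below:

#include <stdio.h>

/* Maps the static type of an expression to a printable name. */
#define TYPE_NAME(x) _Generic((x), float: "float", double: "double", default: "other")

int main(void)
{
    printf("24.2     has type %s\n", TYPE_NAME(24.2));     /* double */
    printf("24.2f    has type %s\n", TYPE_NAME(24.2f));    /* float  */
    printf("24.2 - 0 has type %s\n", TYPE_NAME(24.2 - 0)); /* double: the int 0 is converted to double */
    return 0;
}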
Take the following code (notice the suffix f on the constants of the second expression):
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint16_t a = (uint16_t)((24.2 - 0)/0.1);
    uint16_t b = (uint16_t)((24.2f - 0)/0.1f);
    printf("%hu %hu\n", a, b);
}
The output is:
241 242
So the point here is simply that different approximations occur because different floating-point precisions are used: as a double, 24.2 is stored slightly below 24.2 (and 0.1 slightly above 0.1), so the quotient lands just below 242 and truncates to 241; as a float, 24.2f is stored slightly above 24.2, so the quotient exceeds 242 and truncates to 242.
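A minimal way to see the two approximations side by side (assuming IEEE-754 binary32/binary64) is to print the quotients with extra precision:

#include <stdio.h>

int main(void)
{
    printf("double: %.17f\n", 24.2 / 0.1);              /* just below 242, truncates to 241 */
    printf("float : %.17f\n", (double)(24.2f / 0.1f));  /* just above 242, truncates to 242 */
    return 0;
}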