When an integer is cast to a float, it has to be rounded (or truncated) once it becomes too large to be represented exactly by a floating-point number. Here is a small test program to take a look at this rounding.
#include <stdio.h>
#define INT2FLOAT(num) printf(" %d: %.0f\n", (num), (float)(num));
int main(void)
{
    INT2FLOAT((1<<24) + 1);
    INT2FLOAT((1<<24) + 2);
    INT2FLOAT((1<<24) + 3);
    INT2FLOAT((1<<24) + 4);
    INT2FLOAT((1<<24) + 5);
    INT2FLOAT((1<<24) + 6);
    INT2FLOAT((1<<24) + 7);
    INT2FLOAT((1<<24) + 8);
    INT2FLOAT((1<<24) + 9);
    INT2FLOAT((1<<24) + 10);
    return 0;
}
The output is:
16777217: 16777216
16777218: 16777218
16777219: 16777220
16777220: 16777220
16777221: 16777220
16777222: 16777222
16777223: 16777224
16777224: 16777224
16777225: 16777224
16777226: 16777226
Values exactly halfway between two representable integers are sometimes rounded up and sometimes rounded down. It looks as though some form of round-to-even is being applied. How exactly does this work, and where can I find the code that performs this conversion?
The behaviour of this implicit conversion is implementation-defined (C11 6.3.1.4/2):
If the value being converted is in the range of values that can be represented but cannot be represented exactly, the result is either the nearest higher or nearest lower representable value, chosen in an implementation-defined manner.
This means your compiler should document how it works, but you may not be able to control it.
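In practice, most compilers simply emit the target's conversion instruction, and on common IEEE-754 hardware that instruction rounds to nearest with ties broken to even, which is exactly the pattern in your output. As a rough illustration only (not anyone's actual conversion code), here is a sketch that predicts that behaviour for integers in [2^24, 2^25), where adjacent float values are 2 apart; it assumes a binary32 float with a 24-bit significand, which the C standard itself does not require:

#include <stdio.h>

/* Hypothetical illustration only -- not the compiler's actual code. Assumes
 * IEEE-754 binary32 (24-bit significand) and round-to-nearest, ties-to-even.
 * For integers in [2^24, 2^25) the spacing between adjacent floats is 2, so
 * only odd values need rounding, and every odd value is an exact tie. */
static long nearest_float_value(long n)
{
    long lower = n & ~1L;     /* nearest representable value at or below n */
    long upper = lower + 2;   /* nearest representable value above n */

    if (n == lower)
        return n;             /* even: already exactly representable */

    /* Exact tie: choose the neighbour whose last significand bit is 0,
     * i.e. the one that is a multiple of 4. */
    return (lower % 4 == 0) ? lower : upper;
}

int main(void)
{
    for (long i = 1; i <= 10; i++) {
        long n = (1L << 24) + i;
        printf("%ld: predicted %ld, actual %.0f\n",
               n, nearest_float_value(n), (float)n);
    }
    return 0;
}

On a typical x86 or ARM system the "predicted" and "actual" columns should match, but an implementation is free to choose the lower or higher value differently.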
There are various functions and macros for controlling the rounding direction when converting a floating-point value to an integer, but I'm not aware of any specifically for converting an integer to floating-point.
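That said, if your implementation claims IEC 60559 (Annex F) conformance, the conversion is supposed to honour the current rounding direction, so you can at least probe it with fesetround() from <fenv.h>. Whether this actually changes anything (and whether the compiler constant-folds the conversion before the mode is set) is again implementation-specific; you may also need options such as -frounding-math and, on some systems, -lm. A minimal probe under those assumptions:

#include <fenv.h>
#include <stdio.h>

/* A probe, not a guarantee: on implementations conforming to IEC 60559
 * (Annex F), integer-to-floating conversion honours the current rounding
 * direction, so fesetround() should change the result below. Whether it
 * does on your system is implementation-specific. */
#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile int n = (1 << 24) + 1;  /* volatile discourages compile-time conversion */

    fesetround(FE_DOWNWARD);
    printf("downward:   %.0f\n", (float)n);  /* 16777216 if the mode is honoured */

    fesetround(FE_UPWARD);
    printf("upward:     %.0f\n", (float)n);  /* 16777218 if the mode is honoured */

    fesetround(FE_TONEAREST);
    printf("to nearest: %.0f\n", (float)n);  /* 16777216 (tie broken to even) */

    return 0;
}

If the three lines all print the same value, the conversion on your platform is not affected by the dynamic rounding mode (or the compiler evaluated it at compile time), and the only documentation of its behaviour is whatever your compiler provides.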