Question: why does this happen? Is this just a C language thing?
I'm following the CS50 course.
#include <stdio.h>

int main(void)
{
    int testInt = 5;
    printf("%f", testInt / 4.0);
}
Output is 1.250000, a floating-point value.
The answer is "because that's how the C language defines the operation."
In C, as in many other languages, when one operand of an arithmetic operator is a floating-point value, the integer operand is converted to that floating-point type before the operation is performed; here, testInt is converted to double because 4.0 is a double literal (the C standard calls these the usual arithmetic conversions). If the conversion went the other way, truncating the floating-point operand to an integer, accidental loss-of-precision (or loss-of-information) bugs would be common.
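A minimal sketch contrasting the cases (the explicit cast on the last line is illustrative, not something the question's code requires):

#include <stdio.h>

int main(void)
{
    int testInt = 5;

    // Both operands are int: integer division truncates toward zero.
    printf("%d\n", testInt / 4);          // prints 1

    // 4.0 is a double, so testInt is converted to double before dividing.
    printf("%f\n", testInt / 4.0);        // prints 1.250000

    // An explicit cast forces the same conversion with an int divisor.
    printf("%f\n", (double)testInt / 4);  // prints 1.250000
}

The design choice favors the type that can represent both operands: converting a small int to double loses nothing, while the reverse conversion would silently discard the fractional part.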