This came up during testing, where I have to compare actual output against expected output.
Code:
float nf = 584227.4649743827f;
printf("Output: \t %.9f \n", nf);
Output:
Output: 	 584227.437500000
I clearly have some gaps in my knowledge of C, so could someone explain this situation to me:
Why is there this deviation (0.027474382659420f) in the print of nf?
Is this only the limitation of print, or is it the float data type limitation?
Which value is actually stored in the variable nf?
How should I work with values like this so I don't lose information, like having a deviation of 0.027474382659420f during assignment?
Any other suggestion related to this kind of problem in testing would also be much appreciated.
Why is there this deviation (0.027474382659420f) in the print of nf?
Because float has a precision of only about 7 significant decimal digits, and your deviation starts at the seventh digit.
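If you want to check what your implementation actually promises, <float.h> exposes these limits. A minimal sketch (the values are implementation-defined; the ones in the comments are typical for IEEE 754):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* decimal digits that survive a round trip through each type */
    printf("FLT_DIG = %d\n", FLT_DIG); /* typically 6 */
    printf("DBL_DIG = %d\n", DBL_DIG); /* typically 15 */
    return 0;
}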
Is this only the limitation of print, or is it the float data type limitation?
It's a limitation of floating-point numbers in general, so it also affects double (although double has higher precision). It also has to do with the conversion between binary and decimal numbers: 0.2, for instance, is a repeating fraction in binary representation, so it is subject to rounding errors too and might actually be stored as something like 0.200000000000000011.
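You can see this for yourself by printing 0.2 with more digits than a double actually holds. A minimal sketch (the trailing digits shown assume IEEE 754 doubles):

#include <stdio.h>

int main(void)
{
    double d = 0.2; /* a repeating fraction in binary */
    printf("%.17f\n", d); /* prints something like 0.20000000000000001 */
    return 0;
}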
Which value is actually stored in the variable nf?
The one that you see printed. The literal 584227.4649743827f that you specified most likely won't even exist in the binary of your compiled program; it is "translated" to the actually used value during compilation.
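One way to inspect what actually ended up in nf is the C99 %a conversion, which prints the exact stored bits as a hexadecimal float. A sketch (the outputs in the comments assume IEEE 754 single precision):

#include <stdio.h>

int main(void)
{
    float nf = 584227.4649743827f;
    printf("%a\n", nf);   /* exact bits, e.g. 0x1.1d446ep+19 */
    printf("%.4f\n", nf); /* 584227.4375, the value actually stored */
    return 0;
}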
How should I work with values like this so I don't lose information, like having a deviation of 0.027474382659420f during assignment?
Use double, which has a precision of about 15-17 significant digits. You also need to remove the f suffix from 584227.4649743827f, turning it into a double constant instead. If that is still not accurate enough, you may have to use an external library for arbitrary-precision numbers, such as GMP.
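Applied to your example, dropping the f suffix and switching to double keeps the digits you were losing. A sketch (the output in the comment assumes IEEE 754 double precision):

#include <stdio.h>

int main(void)
{
    double nd = 584227.4649743827; /* no f suffix: a double constant */
    printf("Output: \t %.9f \n", nd); /* prints 584227.464974383 */
    return 0;
}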
Your floating-point numbers most likely adhere to the IEEE 754 standard, but there's no guarantee.