
Weird result when formatting a float value to a string using String.format


I'm fairly new to Java. I understand the general issues around floating-point precision and conversion, but I'm not sure why I'm seeing such a ridiculous result in Java. I did this:

String str1 = String.format("%.02f", 0.001921921f);
String str2 = String.format("%.02f", 9.0921921f);
String str3 = String.format("%.02f", 91.21921f);
String str4 = String.format("%.02f", 911212.09f); // WEIRD: Prints 911212.06 !!!
String str5 = String.format("%.02f", 1212f); 

And I saw these values in the strings in debug mode (Eclipse debugger / Eclipse Platform 4.2.1 / Java SE 6, the OS X Mavericks default):

str1 = "0.00"
str2 = "9.09"
str3 = "91.22"
str4 = "911212.06"   ===> What the heck?? Shouldn't this be "911212.09", or some rounding of that?
str5 = "1212.00"

I don't understand this at all.
Let me also explain what I'm ultimately trying to do.
I have a bunch of input float values with varying decimal precision, converted from strings like "9.34", "99.131", etc.
I want to truncate all decimal places beyond the second and get an int containing all the digits (no decimal places, no floor/ceiling/rounding), i.e. 19.2341 becomes 1923, and 19.2359 also becomes 1923, not 1924.
Now I tried doing things like

int ival = (int)(float_val * 100.0);

But that has precision/accuracy issues. E.g. if float_val contains 17.3f, then ival becomes 1729 instead of 1730.
I thought using formatted strings might help, but that does rounding, and while trying it I also hit the weird case above.
Any idea what I'm doing wrong?
For now, for my situation I'll just manipulate the original string values to truncate the decimal places and remove the decimal point and then convert to int.
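For reference, the string-manipulation workaround described above could look like the following rough sketch. The helper name truncateToInt and the zero-padding for inputs with fewer than two decimal places are my own assumptions:

```java
public class TruncateByString {
    // Truncate a decimal string like "19.2341" past two fractional digits
    // and return all remaining digits as an int (19.2341 -> 1923).
    static int truncateToInt(String s) {
        int dot = s.indexOf('.');
        String intPart = dot < 0 ? s : s.substring(0, dot);
        String fracPart = dot < 0 ? "" : s.substring(dot + 1);
        // Keep at most two fractional digits, padding with zeros if fewer.
        fracPart = (fracPart + "00").substring(0, 2);
        return Integer.parseInt(intPart + fracPart);
    }

    public static void main(String[] args) {
        System.out.println(truncateToInt("19.2341")); // 1923
        System.out.println(truncateToInt("19.2359")); // 1923
        System.out.println(truncateToInt("17.3"));    // 1730
    }
}
```

Because this never goes through a binary float, 17.3 reliably becomes 1730 rather than 1729.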


Solution

  • A float has 24 bits of precision (23 stored explicitly, plus an implicit leading bit). Representing 911212.09 to within + or - 0.005 requires about 27 bits (I could be off by one): roughly 20 bits for the integer part 911212 and another 7 to resolve hundredths. Therefore it should not be surprising that the result is off.

    If you use double (by removing the f suffix from the numeric literal), you'll get 52 bits of precision, so the error would be small enough that it wouldn't affect the result when you format to two decimal places.

    But using BigDecimal is better.

    This is not a Java or JDK issue, by the way. It will occur in any language where you're trying to use a 32-bit IEEE 754 float type. Please read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
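To make both suggestions concrete, here is a hedged sketch (the helper name truncateToInt is my own) showing the double fix for formatting, plus a BigDecimal version of the truncation the question asks for. BigDecimal parses the original string exactly, and RoundingMode.DOWN truncates rather than rounds:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FloatPrecisionDemo {
    // Parse the original decimal string exactly, truncate (not round) to
    // two decimal places, then shift the point right to get an integer,
    // e.g. "19.2359" -> 1923.
    static int truncateToInt(String s) {
        return new BigDecimal(s)
                .setScale(2, RoundingMode.DOWN) // DOWN truncates toward zero
                .movePointRight(2)
                .intValueExact();
    }

    public static void main(String[] args) {
        // A double literal (no f suffix) has enough precision here:
        System.out.println(String.format("%.2f", 911212.09));  // 911212.09
        System.out.println(String.format("%.2f", 911212.09f)); // 911212.06

        System.out.println(truncateToInt("19.2341")); // 1923
        System.out.println(truncateToInt("19.2359")); // 1923
        System.out.println(truncateToInt("17.3"));    // 1730
    }
}
```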