I was asked in a homework assignment to represent the decimal 0.1 in IEEE 754 single-precision representation. Here are the steps I took:
However, online converters and this answer on Stack Exchange suggest otherwise. They give this solution:
s eeeeeeee mmmmmmmmmmmmmmmmmmmmmmm
0 01111011 10011001100110011001101
The difference is in the last bit on the right. Why is it …1101 and not …1100?
As njuffa said in a comment, rounding is the explanation for the difference you see. Converters usually produce the nearest floating-point value to the decimal number you put in. The IEEE 754 standard recommends that the rounding mode be taken into account for conversions from one base to another (such as from decimal to binary), and the default rounding mode is “to nearest”.
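You can see this conversion happening on your own machine. Here is a minimal C sketch (assuming a compiler where `float` is IEEE 754 binary32, which is true on essentially all current platforms) that converts 0.1 to `float` under the default round-to-nearest mode and prints the resulting sign, exponent, and mantissa fields:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 0.1f;                     /* decimal-to-binary32 conversion, round-to-nearest */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);     /* reinterpret the float's bit pattern */

    printf("sign     : %u\n", bits >> 31);
    printf("exponent : ");
    for (int i = 30; i >= 23; i--) putchar('0' + ((bits >> i) & 1));
    printf("\nmantissa : ");
    for (int i = 22; i >= 0; i--) putchar('0' + ((bits >> i) & 1));
    putchar('\n');
    return 0;
}
```

On a binary32 platform this prints exponent `01111011` and mantissa `10011001100110011001101`, i.e. the same bit pattern the converters show, with the final bit being 1.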
The two closest single-precision floating-point values to 1/10 are 1.10011001100110011001100×2^-4 and 1.10011001100110011001101×2^-4 (respectively below and above 1/10). The digits that are cut off after the 23rd bit of the significand are “11001100…”, which shows that the exact value 1/10 is closer to the upper candidate than to the lower one (if the remaining digits had been “100000000…”, the exact value would have been exactly halfway between the two). For this reason, the upper value 1.10011001100110011001101×2^-4 is chosen as the conversion of 1/10 to binary32 when converting in round-to-nearest mode.
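If you want to check the “closer to the upper bound” claim numerically, here is a small C sketch (again assuming IEEE 754 `float` and `double`) that measures the distance of each candidate from 1/10. The measurement is done in `double`, whose own rounding error for 0.1 (about 5.5×10^-18) is negligible next to the gap between adjacent floats near 0.1 (about 7.5×10^-9), so the comparison is meaningful:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    float upper = 0.1f;                    /* 1.10011001100110011001101 x 2^-4 */
    float lower = nextafterf(upper, 0.0f); /* 1.10011001100110011001100 x 2^-4 */

    /* distance of each candidate from 1/10, evaluated in double precision */
    double err_lower = fabs(0.1 - (double)lower);
    double err_upper = fabs(0.1 - (double)upper);

    printf("error of lower candidate: %.3e\n", err_lower);
    printf("error of upper candidate: %.3e\n", err_upper);
    printf("round-to-nearest picks the %s value\n",
           err_upper < err_lower ? "upper" : "lower");
    return 0;
}
```

The upper candidate comes out roughly 1.5×10^-9 away from 1/10, while the lower one is roughly 6.0×10^-9 away, so round-to-nearest selects the value ending in …1101.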