I am writing a program that takes a floating-point number as input and outputs the hex representation of that number. What I did to solve it was read the input as a string, split it into a whole part and a fractional part, and convert each part to binary by hand. My program passes every test posted on the SPOJ forum; I had to search for failing test cases manually.
So for the input -123123.2323 my program produces:

(hex) c7 f0 79 9d

Binary whole=11110000011110011
Binary fraction=0011101101111000000000...
Mantissa=11100000111100110011101
Meanwhile https://www.h-schmidt.net/FloatConverter/IEEE754.html gives me:
Mantissa=11100000111100110011110
How does the conversion work, and why does it come out differently in this case? When I convert 0.2323 to binary using https://www.rapidtables.com/convert/number/decimal-to-binary.html?x=0.2323 it gives me 0.0011101101111. I used these bits to finish the mantissa after the whole binary number (minus its leading 1), truncating at 23 bits. What am I doing wrong?
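For reference, here is a small standalone C check (not my actual SPOJ program; it assumes `float` is IEEE 754 binary32) that prints the bit pattern the compiler actually stores for this constant:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -123123.2323f;            /* the failing test value */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);     /* reinterpret the float's bytes */
    printf("hex = %08x\n", (unsigned)bits);  /* prints c7f0799e, not c7f0799d */
    printf("mantissa = ");
    for (int i = 22; i >= 0; i--)       /* low 23 bits hold the mantissa */
        putchar('0' + ((bits >> i) & 1));
    putchar('\n');                      /* ends ...0011110, like the converter */
    return 0;
}
```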
Alright, so it's not that my program or solution was wrong; my thought process was, and I was missing crucial information about how floating-point numbers work.
As Mr. "chux" said, the number I tried to compute (-123123.2323) is actually stored by the computer as a different number (-123123.2344, the nearest value a 32-bit float can represent). Conversion to float rounds to the nearest representable value, whereas I truncated the extra bits. Since I read the input as a string and simply split the number into whole and decimal parts, I never ran into this limitation of floating-point numbers.
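Lining up the digits from my walkthrough above makes the rounding visible: the 24th significand bit of 123123.2323 is a 1, so round-to-nearest bumps the last mantissa bit up:

```
1.11100000111100110011101 101111...   full binary significand
  |<----- 23 bits ----->|  next bits

truncated: 11100000111100110011101    (my mantissa)
rounded:   11100000111100110011110    (what the machine stores)
```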
The solution is to read the number as a floating-point number, let the computer round its value, and then format it back as a string and work with that.
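A minimal sketch of that approach in C (the variable names are mine; it assumes IEEE 754 binary32 `float` and uses `strtof`, which rounds the decimal text to the nearest representable value):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char line[64];
    if (!fgets(line, sizeof line, stdin))
        return 1;

    /* strtof does the rounding for us: "-123123.2323" becomes
       the nearest representable float, -123123.234375. */
    float f = strtof(line, NULL);

    /* Read the rounded value back as a string and work with that. */
    char rounded[64];
    snprintf(rounded, sizeof rounded, "%.4f", f);   /* "-123123.2344" */
    printf("stored as: %s\n", rounded);

    /* Or print the raw IEEE 754 bit pattern directly. */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    printf("hex: %08x\n", (unsigned)bits);          /* c7f0799e */
    return 0;
}
```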