Software I am using encodes a set of floats as a hex string.
8e 3c 86 71 b8 25 ac bf 01 ab bc c4 08 69 aa ad 3f 01 a3 78 60 8c c6 fd c5 3f 01
I know the values, accurate to two decimal places:
-.05, .06, .17
How would I go about figuring out the encoding?
This is IEEE 754 floating point, double precision, little endian, with each value followed by a 01 byte, probably some proprietary indicator for the data type.
Step 1: Extract 8-byte words
8e 3c 86 71 b8 25 ac bf
ab bc c4 08 69 aa ad 3f
a3 78 60 8c c6 fd c5 3f
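In Python, for example, the split can be scripted like this (a minimal sketch; the variable names are my own, and `bytes.fromhex` ignores the spaces):

```python
hex_string = ("8e 3c 86 71 b8 25 ac bf 01 "
              "ab bc c4 08 69 aa ad 3f 01 "
              "a3 78 60 8c c6 fd c5 3f 01")
data = bytes.fromhex(hex_string)

# Each record is 9 bytes: an 8-byte word followed by the 01 separator
words = [data[i:i + 8] for i in range(0, len(data), 9)]
for word in words:
    print(word.hex(" "))
```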
Step 2: Optionally convert from little endian to big endian, depending on the converter you are going to use
bf ac 25 b8 71 86 3c 8e
3f ad aa 69 08 c4 bc ab
3f c5 fd c6 8c 60 78 a3
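If your converter expects big-endian input, the byte swap is just a reversal; sticking with the Python sketch:

```python
word = bytes.fromhex("8e 3c 86 71 b8 25 ac bf")  # first word, little endian
print(word[::-1].hex(" "))                       # bf ac 25 b8 71 86 3c 8e
```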
Step 3: Convert from binary double to human-readable decimal format; here done manually for testing with the online service baseconvert.com (64-bit hexadecimal input)
-0.05497528444095066413321859499774291180074214935302734375
0.057940752334951405033702798164085834287106990814208984375
0.1718071160730164359531357831656350754201412200927734375
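Python's struct module can do steps 2 and 3 in one call, because the "<d" format reads an 8-byte little-endian IEEE 754 double directly (again a sketch under the same assumptions, useful to cross-check the manual conversion):

```python
import struct

data = bytes.fromhex(
    "8e 3c 86 71 b8 25 ac bf 01"
    " ab bc c4 08 69 aa ad 3f 01"
    " a3 78 60 8c c6 fd c5 3f 01"
)

# "<d" = little-endian double, so no manual byte swap is needed
values = [struct.unpack("<d", data[i:i + 8])[0] for i in range(0, len(data), 9)]
print(values)  # [-0.054975..., 0.057940..., 0.171807...]
```

Rounded to two decimal places these are -0.05, 0.06, and 0.17, matching the known values.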
How to decode unknown formats:
In this case I knew that there are three floats. I counted the bytes and saw that there are 27, which is divisible by 3. So I split the bytes into 9-byte words and saw the pattern of 8 bytes followed by 01. 8-byte binary floats are well known, so I checked whether the sample fit the standard, and it did.
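That reasoning can be turned into a quick sanity check (a sketch; that every 9th byte is a 01 separator is an assumption drawn from this one sample):

```python
data = bytes.fromhex("8e3c8671b825acbf01abbcc40869aaad3f01a378608cc6fdc53f01")

print(len(data))                                             # 27, divisible by 3
print(all(data[i] == 0x01 for i in range(8, len(data), 9)))  # True: 01 after every 8 bytes
```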
If you have no clue after simply looking at the data: