I know that in the IEEE 754-2008 64-bit binary (radix 2) format, the largest representable value is 1.7976931348623157E+308 and the smallest non-zero (subnormal) value is 4.9406564584124654E-324. So are the values that result from converting these to hex not the largest and smallest hex floating-point values?
The largest finite value has an exponent field of 11111111110 (the all-ones pattern is reserved for infinities and NaNs), representing 2^(2046 − 1023) = 2^(+1023), and a significand of all ones:
0 11111111110 1111111111111111111111111111111111111111111111111111₂
which is 0x7FEFFFFFFFFFFFFF in raw hex form and 0x1.fffffffffffffp+1023 as a hexadecimal floating-point literal. You can check this with float.exposed or floating-point-converter.
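To make the check concrete, here is a minimal Python sketch (the helper `bits_of` is just a local name, not a library function) that reinterprets the raw bits of the largest double and prints its hex float form:

```python
import struct
import sys

def bits_of(x: float) -> int:
    """Reinterpret a double's 8 bytes as an unsigned 64-bit integer."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

largest = sys.float_info.max
print(hex(bits_of(largest)))   # 0x7fefffffffffffff
print(largest.hex())           # 0x1.fffffffffffffp+1023
print(largest)                 # 1.7976931348623157e+308

# Round-tripping the hex float literal gives back the same value:
assert float.fromhex('0x1.fffffffffffffp+1023') == largest
```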
It can also be calculated like this: the significand of binary64 contains 52 explicit bits plus a hidden leading 1 bit, so the largest significand is 1.fffffffffffff. There are 11 exponent bits; after dropping the reserved all-ones pattern and subtracting the bias, the largest exponent is 2^11 − 1 − 1023 − 1 = 1023. That means the largest value is 0x1.fffffffffffffp+1023.
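A quick sketch of that arithmetic in Python, assuming the standard binary64 parameters (52 fraction bits, bias 1023):

```python
import math
import sys

# Largest significand: hidden 1 bit plus 52 explicit one-bits = 2 - 2**-52.
significand = 2 - 2**-52          # 1.fffffffffffff in hex
exponent = 2**11 - 1 - 1023 - 1   # drop the reserved all-ones pattern, then the bias

# ldexp(m, e) computes m * 2**e exactly when the result is representable.
assert math.ldexp(significand, exponent) == sys.float_info.max
print(exponent)   # 1023
```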
Similarly, the smallest non-zero normalized number has an exponent pattern of 00000000001, representing 2^(1 − 1023) = 2^(−1022), and a significand field of all zeros (an implicit significand of 1.0), so
0 00000000001 0000000000000000000000000000000000000000000000000000₂ = 0x0010000000000000 = 0x1p-1022
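The same kind of verification works for the smallest normal, again using only the Python standard library:

```python
import struct
import sys

smallest_normal = sys.float_info.min
print(smallest_normal.hex())   # 0x1p-1022

bits = struct.unpack('<Q', struct.pack('<d', smallest_normal))[0]
print(hex(bits))               # 0x10000000000000 (leading zeros dropped: 0x0010000000000000)

assert smallest_normal == 2.0 ** -1022
```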
The smallest non-zero subnormal number has an exponent field of all zeros and a significand field of 1:
0 00000000000 0000000000000000000000000000000000000000000000000001₂ = 0x0000000000000001 = 0x1p-1074
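And a sketch for the smallest subnormal (math.ulp requires Python 3.9+):

```python
import math
import struct

smallest_subnormal = math.ulp(0.0)   # distance from 0.0 to the next float
print(smallest_subnormal)            # 5e-324

bits = struct.unpack('<Q', struct.pack('<d', smallest_subnormal))[0]
print(hex(bits))                     # 0x1

# float.hex() normalizes subnormals to a p-1022 exponent, so these two
# spellings denote the same value:
assert float.fromhex('0x1p-1074') == smallest_subnormal
print(smallest_subnormal.hex())      # 0x0.0000000000001p-1022
```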