I'm wondering how the maximum value is represented in a 64-bit double-precision floating-point number. I assume it's represented with all 1's in the exponent and mantissa, like so:
0 11111111111 1111111111111111111111111111111111111111111111111111
If so, then why does Number.MAX_VALUE = 1.7976931348623157e+308 show an exponent of 308 instead of the 1024 decoded from 11111111111? Is the bit pattern different?
308 is the decimal exponent, while a double uses powers of two internally.
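You can see both exponents directly in JavaScript (a quick sketch; the outputs in the comments are what a typical engine prints):

    console.log(Number.MAX_VALUE);             // 1.7976931348623157e+308 (decimal notation)
    console.log(Math.log2(Number.MAX_VALUE));  // ~1024, the binary order of magnitude
    console.log(Math.log10(Number.MAX_VALUE)); // ~308.25, hence the e+308 above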
Secondly, the all-ones exponent is reserved for infinity and NaN; the maximum exponent for a regular number is one less than that. Therefore the maximum number is written as:
01111111 11101111 11111111 11111111 11111111 11111111 11111111 11111111
I.e., a sign bit of 0, an exponent field of 2046 (the exponent is stored with a bias of 1023, so a stored 1023 actually means an exponent of zero, and 2046 means an exponent of 1023), and a mantissa of all ones (1.11111(52 times) in binary, where the first one is hidden). In other words, the value is 1.11111(52 times)*2^1023.
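To verify the fields, here's a minimal sketch that reads the raw 64 bits of Number.MAX_VALUE and splits them apart (assumes BigInt support, available in any modern browser or Node):

    // Write the double into a buffer, then read its raw bits back.
    const buf = new ArrayBuffer(8);
    const view = new DataView(buf);
    view.setFloat64(0, Number.MAX_VALUE);
    const bits = view.getBigUint64(0);

    const sign     = bits >> 63n;             // sign bit
    const exponent = (bits >> 52n) & 0x7FFn;  // 11-bit exponent field
    const mantissa = bits & 0xFFFFFFFFFFFFFn; // 52-bit mantissa field

    console.log(sign);                 // 0n
    console.log(exponent);             // 2046n stored, i.e. 2046 - 1023 = 1023 unbiased
    console.log(mantissa.toString(2)); // fifty-two 1's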
Converted to decimal, that is (2-(2^-52))*2^1023, which is about 1.7976931348623157*10^308.
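You can confirm the formula is exact: both factors are exactly representable, and multiplying by a power of two introduces no rounding:

    const max = (2 - 2 ** -52) * 2 ** 1023;
    console.log(max === Number.MAX_VALUE); // true
    console.log(max);                      // 1.7976931348623157e+308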
You can find very detailed information on the double/float formats on Wikipedia.