What are the maximum and minimum precision of a denormalized 64-bit floating-point number following IEEE 754-2008? That is, what is the precision of a double at 2^-1022 and at 2^-1074, respectively?
This question is similar, but it does not care about the actual numbers.
The precision of denormalized (subnormal) double-precision floating-point numbers decreases gradually from 52 bits, for the largest subnormal just below 2^-1022, down to 1 bit, for the smallest subnormal 2^-1074.
This is why the mechanism is called gradual underflow.
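You can see this directly in Python (3.9+ for `math.ulp` and `math.nextafter`); a minimal sketch, assuming the standard IEEE 754 binary64 layout for `float`:

```python
import math

# Smallest positive subnormal is 2**-1074; the largest subnormal is
# the representable double just below 2**-1022 (the smallest normal).
smallest_subnormal = 2.0 ** -1074
largest_subnormal = math.nextafter(2.0 ** -1022, 0.0)

# Every subnormal is an integer multiple of 2**-1074, so the spacing
# (ulp) is constant across the entire subnormal range.
print(math.ulp(smallest_subnormal) == 2.0 ** -1074)   # True
print(math.ulp(largest_subnormal) == 2.0 ** -1074)    # True

# Effective precision = number of significand bits actually in use,
# i.e. roughly log2(value / ulp) + 1.
print(math.log2(largest_subnormal / (2.0 ** -1074)))   # ~52  -> ~52 significant bits
print(smallest_subnormal / (2.0 ** -1074))             # 1.0  -> 1 significant bit
```

Because the ulp stays fixed at 2^-1074 while the values themselves shrink, each halving of the value costs one bit of precision, which is exactly the gradual loss described above.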