So I have been trying to wrap my head around the relationship between the number of significant digits in a floating-point number and the relative loss of precision, but I just can't seem to make sense of it. I was reading an article earlier that said to do the following:
So why is this 128 when there are 10 significant digits? I understand how single-precision floats are stored (1 bit for the sign, 8 bits for the exponent, 23 bits for the mantissa), and I understand that you lose precision if you assume every integer will automatically find an exact home in a float, but I don't understand where the 128 comes from. My intuition tells me I'm on the right track, but I'm hoping someone can clear this up for me.
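To be concrete about the layout I mean, here is a quick snippet (my own illustration, and the `dissect` helper is just a name I made up) that pulls those three fields out of a float, assuming IEEE-754 single precision:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Print the sign, stored exponent, and mantissa fields of a float.
   Assumes IEEE-754 single precision (1 + 8 + 23 bits). */
static void dissect(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);            /* safe type-pun */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;   /* biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFF;

    printf("%-10g sign=%u stored exponent=%3u (true %4d) mantissa=0x%06X\n",
           x, sign, exponent, (int)exponent - 127, mantissa);
}

int main(void) {
    dissect(1.0f);    /* stored exponent 127, true exponent 0 */
    dissect(2.0f);    /* stored exponent 128, true exponent 1 */
    dissect(-6.25f);
    return 0;
}
```

Interestingly, a value like `2.0f` has a stored exponent of exactly 128, since the bias is 127, though I don't know if that's the 128 the article meant.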
I initially thought that the distance between representable floats was 2 ^ (n-1), where n is the number of significant digits, but that did not hold true.
Thank you!
The "distance" between two adjacent floating point numbers is 2^(1-n+e), where e is the true exponent and n the number of bits in the mantissa (AKA significand). The exponent stored is not the true exponent, it has a bias. For IEEE-754 floats, this is 127 (for normalized numbers). So, as Peter O said, the distance depends on the exponent.