
Machine precision estimations


Some people say that the machine epsilon for double-precision floating-point numbers is 2^-53, while others (more commonly) say it's 2^-52. I have experimented with estimating machine precision in MATLAB, using integers other than 1 and approaching from both above and below, and I have obtained both values as results. Why can both values be observed in practice? I thought it should always produce an epsilon around 2^-52.
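Both estimates can be reproduced with a short sketch (written here in Python rather than MATLAB; the function names are my own). Halving a candidate epsilon until adding it to 1.0 no longer changes the result lands on 2^-52, while the same search done by subtracting from 1.0 lands on 2^-53, because the spacing of doubles is halved just below 1.0:

```python
import sys

def eps_from_above():
    """Smallest power of two whose half, added to 1.0, still rounds back to 1.0."""
    eps = 1.0
    while 1.0 + eps / 2 != 1.0:
        eps /= 2
    return eps

def eps_from_below():
    """Same search, but subtracting from 1.0: probes the spacing below 1.0."""
    eps = 1.0
    while 1.0 - eps / 2 != 1.0:
        eps /= 2
    return eps

print(eps_from_above())  # 2**-52, matches sys.float_info.epsilon
print(eps_from_below())  # 2**-53, the gap between 1.0 and the next double below it
```

The discrepancy is exactly the one asked about: the gap between representable doubles doubles at each power of two, so it is 2^-53 just below 1.0 and 2^-52 just above it.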


Solution

  • There's an inherent ambiguity in the term "machine epsilon", so to resolve it, the value is commonly defined as the difference between 1 and the next larger representable number. (That next number is, not by accident, obtained by literally incrementing the binary representation of 1.0 by one.)

    The IEEE 754 64-bit float has 52 explicit mantissa bits, so 53 including the implicit leading 1. The two consecutive numbers are therefore:

    1.0000  .....  0000
    1.0000  .....  0001
      \-- 52 digits --/
    

    So the difference between the two is 2^-52.
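The "increment the binary representation by one" remark can be checked directly. A sketch, reinterpreting the bits of 1.0 as a 64-bit integer (variable names are my own), shows that bumping the last mantissa bit yields 1 + 2^-52:

```python
import struct

# Reinterpret the 8 bytes of the double 1.0 as an unsigned 64-bit integer.
one_bits = struct.unpack("<Q", struct.pack("<d", 1.0))[0]

# Increment the integer by one: this flips the lowest mantissa bit,
# giving the next representable double above 1.0.
next_up = struct.unpack("<d", struct.pack("<Q", one_bits + 1))[0]

print(next_up - 1.0)  # 2**-52, the machine epsilon as defined above
```

This works because, within one binade, the IEEE 754 bit patterns are ordered the same way as the values they encode, so integer increment steps to the adjacent float.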