c#, floating-point, comparison-operators

Numeric data types comparison in C#


Could you please explain the results of the following code:

    float f = 1.56898138E+09f;
    double d = 1.56898138E+09;
    int i = 1568981320;

    bool a = f > i; //false 
    bool b = d > i; //true
    bool c = (int)f > i; //true

Why is a == false?


Solution

  • Well, an int has 32 bits and stores the integer value exactly, all 31 significant bits of it

     1568981320 == 1011101100001001100000101001000 (binary)
    

    while a float stores only 23 mantissa bits explicitly, plus an implicit leading 1, for 24 significant bits in total (https://en.wikipedia.org/wiki/Single-precision_floating-point_format), so the 31-bit value 1011101100001001100000101001000 has to be rounded:

     1011101100001001100000101001000
     ^                       ^^^^^^^
     |                       these last 7 bits ("1001000") don't fit into 24 significant bits and must be rounded away
     |
     this leading 1 needs no stored bit, since float assumes the 1st significant bit is always 1
    

    So, rounding to the nearest representable value: the discarded bits 1001000 are more than half of 10000000, so we throw them away and add 1 to the part we keep (round up):

     1011101100001001100000101001000 - original value (1568981320)

     1011101100001001100000110000000 -  rounded value (1568981376)
     ^^^^^^^^^^^^^^^^^^^^^^^^
     these 24 significant bits (1 implicit + 23 stored) are what the float actually keeps
    

    and this rounded value, 1568981376, is bigger than the original 1568981320. That is why c is true: (int)f is 1568981376 > 1568981320. It is also why a is false: in f > i the int operand is converted to float, so i becomes 1568981376, and f (initialized with 1568981380, which rounds to the very same float) is 1568981376 too, so the comparison is between two equal values. And d keeps 1568981380 exactly, because a double has a 52-bit mantissa, so d > i is true. The sketches below check this in code.
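
    To double-check the rounding, here is a minimal C# sketch (assuming a recent compiler with top-level statements; otherwise paste the body into Main, and note the variable names are mine, not from the question) that pulls the raw IEEE 754 bits out of (float)i and rebuilds the integer the float really holds:

     using System;

     int i = 1568981320;
     float asFloat = i;   // the implicit int -> float conversion rounds to 24 significant bits

     // Raw IEEE 754 bit pattern of the float: 1 sign bit, 8 exponent bits, 23 stored mantissa bits.
     int bits = BitConverter.ToInt32(BitConverter.GetBytes(asFloat), 0);
     int mantissa = bits & 0x7FFFFF;       // the 23 explicitly stored bits
     int exponent = (bits >> 23) & 0xFF;   // biased exponent

     // Prints 01110110000100110000011; together with the implicit leading 1 this is
     // 101110110000100110000011, the rounded-up 24 significant bits shown above.
     Console.WriteLine(Convert.ToString(mantissa, 2).PadLeft(23, '0'));

     // Rebuild the integer the float represents: (implicit 1 + mantissa) * 2^(exponent - 127 - 23)
     long stored = (0x800000L | mantissa) << (exponent - 127 - 23);
     Console.WriteLine(stored);            // 1568981376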
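
    And a second sketch (same assumptions) that reproduces the three comparisons from the question and prints the exact integers f and (float)i hold, casting through long so the output doesn't depend on float-to-string formatting:

     using System;

     float f = 1.56898138E+09f;   // 1568981380 is not representable as float; the nearest float is 1568981376
     double d = 1.56898138E+09;   // double has a 52-bit mantissa, so it holds 1568981380 exactly
     int i = 1568981320;

     Console.WriteLine((long)f);          // 1568981376 - what f really stores
     Console.WriteLine((long)(float)i);   // 1568981376 - i rounds to the very same float

     // In "f > i" the int operand is converted to float first,
     // so both sides are 1568981376 and the result is False.
     Console.WriteLine(f > i);            // False
     Console.WriteLine(d > i);            // True  (1568981380 > 1568981320)
     Console.WriteLine((int)f > i);       // True  (1568981376 > 1568981320)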