Tags: c#, c++, casting, bit-manipulation, d

Is there a difference in the ways of converting from integer to integer?


Is there a difference when I want to convert an integer (for example, a 32-bit integer/int) to another integer type (for example, an 8-bit integer/byte)? Here is some example code showing the two ways I can convert it:

byte foo(int value)
{
    //return value; <-- this causes problems because I need to convert it to byte

    //First way (most people use this):
    return (byte)value; //this casts the value and also works if value is a floating-point type

    //Second way:
    return (byte)(value & byte.MaxValue); //byte.MaxValue is the constant 255; the & yields an int, so C# still needs the cast back to byte
}

So, is there any difference between the two? I know that bitwise operations only work for integer types, and I know that the second way is less readable and not generally recommended. Apart from that, is there any difference in the output of the two approaches? This is not only about int and byte, but about every integer-to-integer type combination.

OK, so it seems that this operation behaves differently in different languages. I want to see the differences, so please post answers for C++/C#/D.

Also, I forgot to mention that I mean unsigned integers only (not signed), so this is about all unsigned integer types.
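
For example, here is what I mean with a different unsigned combination (uint to ushort); the method name bar is just illustrative:

ushort bar(uint value)
{
    //First way:
    return (ushort)value;

    //Second way:
    return (ushort)(value & ushort.MaxValue); //ushort.MaxValue is 65535
}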


Solution

  • In C#, casting an int to a byte will throw an OverflowException if the value is out of range and the conversion happens within a checked context. Otherwise, casting acts pretty much like it does in C++.
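
    A minimal sketch of that difference (variable names here are just illustrative): an unchecked cast silently truncates, while the same cast inside a checked block throws at runtime:

    uint big = 300;

    byte truncated = (byte)big;   // unchecked by default (assuming the project isn't compiled with /checked): 300 % 256 == 44

    checked
    {
        byte overflows = (byte)big;   // throws System.OverflowException at runtime
    }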

    The type promotion works in C# like it does in C++ (as described by Mark B).
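
    As a small illustrative sketch (not from the original answer): operands narrower than int are promoted to int before arithmetic and bitwise operators, which is also why the & expression in the question produces an int and needs a cast back to byte:

    byte a = 200, b = 100;
    var sum = a + b;      // both operands are promoted to int; sum is an int holding 300
    var masked = a & b;   // the same promotion applies to bitwise operators; masked is an int
    //byte c = a + b;     // compile error: cannot implicitly convert int to byte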

    For comparison, look at the IL generated by these three methods:

    byte foo1(uint value)
    {
        return (byte) value;
    }
    
    .method private hidebysig instance uint8 foo1(uint32 'value') cil managed
    {
        .maxstack 8
        L_0000: ldarg.1 
        L_0001: conv.u1 
        L_0002: ret 
    }
    

    Versus

    byte foo2(uint value)
    {
        checked
        {
            return (byte)value;
        }
    }
    
    .method private hidebysig instance uint8 foo2(uint32 'value') cil managed
    {
        .maxstack 8
        L_0000: ldarg.1 
        L_0001: conv.ovf.u1.un 
        L_0002: ret 
    }
    

    And for the ANDing:

    byte foo3(uint value)
    {
        return (byte)(value & byte.MaxValue);
    }
    
    .method private hidebysig instance uint8 foo3(uint32 'value') cil managed
    {
        .maxstack 8
        L_0000: ldarg.1 
        L_0001: ldc.i4 255
        L_0006: and 
        L_0007: conv.u1 
        L_0008: ret 
    }
    

    This again uses conv.u1, like the first method, so all it does is introduce the overhead of anding off the extra bits that are ignored by the conv.u1 instruction anyway.
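
    As a quick sanity check (a sketch, not part of the original answer), you can verify that the plain cast and the mask-then-cast form produce identical bytes for any uint:

    foreach (uint value in new uint[] { 0, 1, 255, 256, 300, uint.MaxValue })
    {
        Trace.Assert((byte)value == (byte)(value & byte.MaxValue)); // both keep only the low 8 bits
    }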

    Therefore, in C# I would just use the cast if you don't care about range checking.

    One interesting thing is that in C#, this will give you a compiler error:

    Trace.Assert(((byte)256) == 0); // Compiler knows 256 is out of range.
    

    This won't give a compile error:

    int value = 256;
    Trace.Assert(((byte)value) == 0); // Compiler doesn't care.
    

    And of course this won't give a compile error either:

    unchecked
    {
        Trace.Assert(((byte)256) == 0);
    }
    

    It's odd that the first one gives a compiler error even though conversions are unchecked at runtime by default. I guess constant expressions are checked at compile time by default!
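
    As a related sketch (not in the original answer), the unchecked operator also has an expression form, which is convenient when you only want to bypass the compile-time constant check for a single conversion:

    byte b = unchecked((byte)256); // compiles fine; b == 0
    Trace.Assert(b == 0);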