Tags: c#, performance, micro-optimization, numeric-conversion, range-checking

Is it more efficient to perform a range check by casting to uint instead of checking for negative values?


I stumbled upon this piece of code in .NET's List source code:

// Following trick can reduce the range check by one
if ((uint) index >= (uint)_size) {
  ThrowHelper.ThrowArgumentOutOfRangeException();
}

Apparently this is more efficient (?) than if (index < 0 || index >= _size).
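
A quick way to convince yourself that the two forms are at least equivalent is to compare them over some boundary values. A minimal sketch (the class name UIntRangeCheckDemo is made up for illustration):

using System;

class UIntRangeCheckDemo
{
    static void Main()
    {
        int size = 10;

        // Any negative index, reinterpreted as uint, wraps to a value of at
        // least 2^31, which is larger than any possible non-negative size,
        // so the single unsigned comparison rejects it as well.
        foreach (int index in new[] { int.MinValue, -1, 0, 5, 9, 10, 42, int.MaxValue })
        {
            bool twoChecks = index < 0 || index >= size;
            bool oneCheck = (uint)index >= (uint)size;   // relies on the default unchecked context
            Console.WriteLine($"index = {index,11}: two checks -> {twoChecks}, uint trick -> {oneCheck}");
        }
    }
}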

I am curious about the rationale behind the trick. Is a single branch instruction really more expensive than two conversions to uint? Or is there some other optimization going on that will make this code faster than an additional numeric comparison?

To address the elephant in the room: yes, this is micro-optimization; no, I do not intend to use this everywhere in my code – I'm just curious ;)


Solution

  • From the CLI specification (ECMA-335), Partition I, section 12.1 (Supported data types):

    The signed integer types (int8, int16, int32, int64, and native int) and their corresponding unsigned integer types (unsigned int8, unsigned int16, unsigned int32, unsigned int64, and native unsigned int) differ only in how the bits of the integer are interpreted. For those operations in which an unsigned integer is treated differently from a signed integer (e.g., in comparisons or arithmetic with overflow) there are separate instructions for treating an integer as unsigned (e.g., cgt.un and add.ovf.un).

    That is, the conversion from an int to a uint is merely a matter of book-keeping: from that point on, the value on the stack (or in a register) is treated as an unsigned int rather than an int.

    So the two conversions should be "free" once the code is JITted, and then the unsigned comparison can be performed. Since any negative index, reinterpreted as unsigned, becomes a value of at least 2^31 (larger than any possible non-negative _size), the single unsigned comparison covers both the lower-bound and the upper-bound check.
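
    For reference, here is a hedged sketch of how the same pattern might look in user code (the MyBuffer type and its members are invented for illustration, not taken from the List<T> source):

    using System;

    class MyBuffer
    {
        private readonly int[] _items;
        private readonly int _size;

        public MyBuffer(int size)
        {
            _items = new int[size];
            _size = size;
        }

        public int this[int index]
        {
            get
            {
                // Behaves exactly like: if (index < 0 || index >= _size) throw ...
                // but gives the JIT a single unsigned compare-and-branch to emit.
                if ((uint)index >= (uint)_size)
                    throw new ArgumentOutOfRangeException(nameof(index));
                return _items[index];
            }
        }
    }

    In practice a change like this rarely matters outside of very hot paths, which is presumably why it shows up inside the framework's own indexers rather than in typical application code.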