I recently profiled some code and found that the largest share of CPU time was being consumed by calls to BitConverter such as:
return BitConverter.ToInt16(new byte[] { byte1, byte2 }, 0);
When I switched to something like:
return (short)(byte1 << 8 | byte2);
I noticed a huge improvement in performance.
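For context, a minimal Stopwatch-based harness along these lines is enough to show the gap (this is an illustrative sketch, not my actual profiling code; names and iteration count are arbitrary):

// Illustrative micro-benchmark sketch comparing the two conversions.
using System;
using System.Diagnostics;

class BitConverterBenchmark
{
    const int Iterations = 10_000_000;

    static short ViaBitConverter(byte b1, byte b2) => BitConverter.ToInt16(new byte[] { b1, b2 }, 0);
    static short ViaShift(byte b1, byte b2) => (short)(b1 << 8 | b2);

    static void Main()
    {
        long sink = 0; // accumulate results so the JIT cannot discard the calls

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) sink += ViaBitConverter(0x12, 0x34);
        Console.WriteLine($"BitConverter: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < Iterations; i++) sink += ViaShift(0x12, 0x34);
        Console.WriteLine($"Shift/OR:     {sw.ElapsedMilliseconds} ms  (sink={sink})");
    }
}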
My question is: why is using BitConverter so much slower? I would have assumed that BitConverter was essentially doing the same kind of bit shifting internally.
The call to BitConverter involves the allocation and initialisation of a new array, then a method call, and inside that method call there is parameter validation.
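To make that concrete, here is a simplified sketch of the shape of the work a BitConverter-style call implies. This is not the actual framework source, just the pattern: argument checks first, the two-byte read last, and a heap allocation at the call site.

using System;

static class BitConverterSketch
{
    // Simplified sketch of a BitConverter.ToInt16-style method; not the real implementation.
    public static short ToInt16Sketch(byte[] value, int startIndex)
    {
        if (value == null)
            throw new ArgumentNullException(nameof(value));
        if (startIndex < 0 || startIndex >= value.Length)
            throw new ArgumentOutOfRangeException(nameof(startIndex));
        if (startIndex > value.Length - 2)
            throw new ArgumentException("Not enough bytes remaining.");

        // Only after all of the checks does the actual read happen
        // (little-endian order, matching BitConverter on typical hardware).
        return (short)(value[startIndex] | value[startIndex + 1] << 8);
    }
}

// The call site pays for the checks above *and* for a heap allocation:
// short s = BitConverterSketch.ToInt16Sketch(new byte[] { byte1, byte2 }, 0);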
The bitwise operations, by contrast, can be compiled right down to a handful of CPU opcodes: a shift followed by an OR.
The latter will surely be faster because it removes all of the overhead of the former.
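As a minimal sketch of how the bitwise version might be packaged (the helper name and the inlining hint are illustrative, not required):

using System.Runtime.CompilerServices;

static class ByteHelpers
{
    // No allocation and no validation: just a shift and an OR,
    // which the JIT will typically inline at the call site.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static short ToInt16BigEndian(byte high, byte low)
        => (short)(high << 8 | low);
}

// Usage: short s = ByteHelpers.ToInt16BigEndian(byte1, byte2);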