I ran a test measuring the speed difference between the two accessor functions below, and the difference in time was bigger than I expected. I just had a feeling that the shorthand implementation might be a little faster, so I wanted to test it out.
I measured the total seconds required to call the Get function of each class for 1 billion iterations.
using System;
using System.Diagnostics;

class SimpleGet {
    int value;
    public int Get() {
        return value;
    }
}

class ShorthandGet {
    int value;
    public int Get() => value;
}

class Program {
    static void Main() {
        const int Iterations = 1000000000;

        Stopwatch sw = new Stopwatch();
        sw.Start();
        {
            int n; SimpleGet sg = new SimpleGet();
            for (int i = 0; i < Iterations; i++) {
                n = sg.Get();
            }
        }
        sw.Stop();
        Console.WriteLine("SimpleGet: " + sw.Elapsed.TotalSeconds);

        sw.Reset();
        sw.Start();
        {
            int n; ShorthandGet shg = new ShorthandGet();
            for (int i = 0; i < Iterations; i++) {
                n = shg.Get();
            }
        }
        sw.Stop();
        Console.WriteLine("ShorthandGet: " + sw.Elapsed.TotalSeconds);

        Console.ReadLine();
    }
}
The results:
// 1 billion iterations
SimpleGet: 11.8484244
ShorthandGet: 4.3218568
The difference in speed is huge. The only difference I could see is that the regular function has braces and therefore creates a new scope on each call. Since there are no new variables inside that scope, shouldn't it theoretically be "disregarded"? Can someone explain why the regular function isn't being optimized to the same level as the other?
Edit
I tested the same scenario with properties: Value { get { return value; } }
and Value => value;
and the time differences were very close to the respective function timings. I presume the cause is the same.
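For reference, the two property forms I compared look roughly like this (the class names here are illustrative, not the exact code I ran):

class SimpleGetProperty {
    int value;
    public int Value { get { return value; } }   // classic property with a full getter body
}

class ShorthandGetProperty {
    int value;
    public int Value => value;                   // expression-bodied property
}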
The short answer is that there is no difference in a properly done benchmark.
For micro-optimization cases like these I always like to peek at the IL first. Not because you'll get some deep insight, but because if identical IL is generated then there shouldn't be a difference at run time. The next thing to remember is that you've got to start from a Release build because the compiler will remove unnecessary IL instructions in those builds.
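If you want to verify at run time that the optimizer really is enabled for the build you're timing, a small helper like this (illustrative only, not part of the benchmark above) can read the assembly's DebuggableAttribute:

using System;
using System.Diagnostics;
using System.Reflection;

static class BuildCheck {
    // Debug builds apply [Debuggable] with the JIT optimizer disabled;
    // Release builds either omit the attribute or leave the optimizer enabled.
    public static void Report() {
        var dbg = typeof(BuildCheck).Assembly.GetCustomAttribute<DebuggableAttribute>();
        bool optimized = dbg == null || !dbg.IsJITOptimizerDisabled;
        Console.WriteLine("JIT optimizations enabled: " + optimized);
    }
}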
In a Debug build, the long-form IL (SimpleGet) has additional instructions to enable placing breakpoints:
.method public hidebysig
    instance int32 Get () cil managed
{
    IL_0000: nop
    IL_0001: ldarg.0
    IL_0002: ldfld int32 ConsoleApplication7.SimpleGet::'value'
    IL_0007: stloc.0
    IL_0008: br.s IL_000a
    IL_000a: ldloc.0
    IL_000b: ret
}
versus the much shorter IL for ShorthandGet:
.method public hidebysig
    instance int32 Get () cil managed
{
    IL_0000: ldarg.0
    IL_0001: ldfld int32 ConsoleApplication7.ShorthandGet::'value'
    IL_0006: ret
}
However in an optimized build, both forms result in the same IL that is identical to ShorthandGet above.
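In other words, the optimized SimpleGet comes out as the same three instructions, with only the type name differing (shown here for completeness):

.method public hidebysig
    instance int32 Get () cil managed
{
    IL_0000: ldarg.0
    IL_0001: ldfld int32 ConsoleApplication7.SimpleGet::'value'
    IL_0006: ret
}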
Benchmarks of Debug builds might show differences, as you've demonstrated, but these are never worth comparing, since you will be running the optimized code of a Release build if you care about performance. The moral of the story is to always do your performance analysis on optimized code.

An additional item that is often missed is to benchmark without the debugger attached: even for optimized IL, the JIT will detect the debugger and emit more debuggable machine code. Many people miss this because they just click "Start" or hit F5 in VS, which starts the program with the debugger attached. Use the menu option Debug > Start Without Debugging instead.
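If you want the benchmark itself to warn you about that, a guard like this at the top of Main is a simple (illustrative) option:

if (System.Diagnostics.Debugger.IsAttached) {
    // The JIT generates more debuggable, less optimized machine code when a
    // debugger is attached, so the timings would not be representative.
    Console.WriteLine("Warning: debugger attached - results will not reflect optimized code.");
}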