I'm curious to know how the different ways of expressing boolean tests in the .NET CLR work under the hood.
Generally speaking (I'm not asking for a comprehensive list of every modern CPU architecture's machine-language instruction set), how does the .NET CLR "optimize" each of the following boolean comparisons, and, at the end of the day, does the CPU use different instructions to evaluate these seemingly identical comparisons?
== TRUE:
    -eq $true
    vs -ne $false

== FALSE:
    (-not (<statement>))
    vs -eq $false
    vs -ne $true
Charlieface has provided the crucial pointer:
sharplab.io is a great site that allows you to inspect what a given snippet of C# / F# / Visual Basic code compiles to in terms of IL (or JIT ASM).
Using it, you can observe the following, based on this C# code:
public class C {
    bool b = false;
    int dummy = 0;

    public void M() {
        // equivalent positive tests
        if (b) { ++dummy; }
        if (b == true) { ++dummy; }
        if (b != false) { ++dummy; }

        // equivalent negative tests
        if (!b) { ++dummy; }
        if (b == false) { ++dummy; }
        if (b != true) { ++dummy; }
    }
}
The equivalent positive tests indeed all compile to the very same IL code (abstracted below):
ldarg.0
ldfld bool C::b
brfalse.s <target-statement>
Likewise, the equivalent negative tests all compile to:
ldarg.0
ldfld bool C::b
brtrue.s <target-statement>
As an aside: note how the branch logic is reversed: testing for (effective) true results in a brfalse.s instruction, i.e. a branch that specifies where to jump if the test is not true (past the body), and vice versa for the negative tests (brtrue.s).
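To make the reversed branch concrete, here is a hypothetical C# rendering of the shape the compiler emits; the explicit goto and the afterBody label are purely illustrative (the compiler produces this pattern directly in IL, not via actual C# gotos):

public class BranchIllustration {
    bool b = false;
    int dummy = 0;

    public void M() {
        // What "if (b) { ++dummy; }" effectively expresses at the IL level:
        // jump *around* the body when the condition is false,
        // which is what the brfalse.s instruction encodes.
        if (!b) goto afterBody;
        ++dummy;
    afterBody: ;
    }
}

Pasting this class into sharplab.io should yield essentially the same ldfld / brfalse.s sequence shown above.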
To experiment with the results yourself:
Use this link.
Via the Results dropdown list in the pane to the right, you can also ask for your input code to be decompiled to C# (irrespective of the original input language), which corroborates the above findings:
The equivalent positive tests all decompile to:
if (b) ...
The equivalent negative tests all decompile to:
if (!b) ...
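As for the CPU-instruction part of the question: because all six tests reduce to just two IL shapes, the JIT compiler receives identical input for each group and therefore emits the same machine code, which you can confirm via the JIT ASM option in the same Results dropdown.

If you'd rather verify the IL-level equivalence locally instead of via sharplab.io, a blunt but effective check is to compare the raw IL bytes of the compiled methods via reflection. Below is a minimal sketch (the Probe class and its method names are illustrative, not from the original post); compile in Release mode to be safe, since debug builds may emit extra instructions:

using System;
using System.Linq;

class Probe {
    bool b = false;
    public bool Direct()  => b;            // plain truth test
    public bool EqTrue()  => b == true;    // explicit == true
    public bool NeFalse() => b != false;   // explicit != false
}

class Program {
    // Return the raw IL byte stream of the named Probe method.
    static byte[] Il(string name) =>
        typeof(Probe).GetMethod(name)!.GetMethodBody()!.GetILAsByteArray()!;

    static void Main() {
        // If the compiler elides the literal comparisons (as observed
        // on sharplab.io), the byte sequences match exactly.
        Console.WriteLine(Il("Direct").SequenceEqual(Il("EqTrue")));  // expected: True
        Console.WriteLine(Il("Direct").SequenceEqual(Il("NeFalse"))); // expected: True
    }
}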