I was doing some benchmarking and found some absurd results that I can't seem to explain.
const Benchmark = require('benchmark'); // benchmark.js, implied by the suite API below
const suite = new Benchmark.Suite();

const a = undefined;
const b = {};

// add tests
suite
  .add('Undefined variable', function () {
    if (a > 0) {
      return true;
    } else {
      return false;
    }
  })
  .add('Undefined property', function () {
    if (b.a > 0) {
      return true;
    } else {
      return false;
    }
  })
  .on('cycle', function (event) {
    console.log(String(event.target));
  })
  .run();
Test Results:
Undefined variable x 69,660,401 ops/sec ±2.48% (36 runs sampled)
Undefined property x 994,939,175 ops/sec ±0.85% (40 runs sampled)
Test Results
--------------------------------------------------------------------------
Undefined property : 994939174.67 ops/sec (+1328.27 %)
Undefined variable : 69660400.51 ops/sec ( +0.00 %)
--------------------------------------------------------------------------
Does anyone have any idea why the first case, Undefined variable, is so much slower than the other one? I found similar performance results on a jsbench test: https://jsbench.me/vdku4ert4l/2
(V8 developer here.)
Comparing undefined > 0 always has the same performance. The difference here is that in one of your cases, V8 can optimize away the comparison: for a property access like b.a, it remembers the hidden class of the object(s) it has seen (i.e. the values of b); that's the key idea of the technique called "inline caching".
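To make the inline-caching idea concrete, here is a toy sketch in plain JavaScript (this is an illustration, not V8's actual implementation; the shapeOf helper standing in for a hidden class is hypothetical): a property-access site caches the last object shape it saw, so repeat hits on the same shape can skip the generic lookup.

```javascript
// Hypothetical stand-in for a hidden class: the list of property names.
function shapeOf(obj) {
  return Object.keys(obj).join(',');
}

// A "call site" that remembers the shape it saw last time (a monomorphic IC).
function makeCachedLoad(propName) {
  let cachedShape = null;
  let cachedHasProp = false;
  let hits = 0, misses = 0;

  function load(obj) {
    const shape = shapeOf(obj);
    if (shape === cachedShape) {
      hits++; // fast path: shape matched, no generic lookup needed
      return cachedHasProp ? obj[propName] : undefined;
    }
    misses++; // slow path: do the generic lookup, then update the cache
    cachedShape = shape;
    cachedHasProp = propName in obj;
    return obj[propName];
  }
  load.stats = () => ({ hits, misses });
  return load;
}

const loadA = makeCachedLoad('a');
loadA({});       // miss: caches the empty shape
loadA({});       // hit: same shape as before
loadA({ a: 1 }); // miss: different shape invalidates the cache
console.log(loadA.stats()); // { hits: 1, misses: 2 }
```

The real mechanism works on hidden classes maintained by the engine rather than recomputed key lists, but the caching structure is the same: one remembered shape means the load can be specialized.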
V8 takes this idea one step further: if all encountered objects had the same hidden class, and that hidden class didn't have an a property, then when that function gets optimized, V8 takes that experience into account and produces optimized code that assumes this will still be the case in the future, which in this case enables it to constant-fold away the property load and the comparison. In other words, it optimizes that function to something like:
function undefined_property_optimized() {
  if (b.__hidden_class__ !== kPreviousHiddenClass) Deoptimize;
  return false;
}
where Deoptimize means: throw away this optimized code and go back to unoptimized code for this function (resuming execution exactly at the right point, of course).
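One way to see this specialization at work (a sketch; exactly how much the timing changes depends on the engine and version) is to defeat it: feed the same function objects with several different hidden classes, so the property-access site becomes polymorphic and V8 can no longer assume a single shape and constant-fold the load.

```javascript
// The same comparison as in the benchmark, but the call site now sees
// objects with different shapes, so it cannot specialize on one hidden class.
function undefinedProperty(obj) {
  return obj.a > 0; // obj.a is undefined for every object below
}

const shapes = [
  {},               // hidden class: no properties
  { x: 1 },         // hidden class: { x }
  { y: 1, z: 2 },   // hidden class: { y, z }
];

let result = false;
for (let i = 0; i < 3000; i++) {
  result = undefinedProperty(shapes[i % shapes.length]);
}
console.log(result); // false: undefined > 0 is false in every case
```

Timing this variant against the original single-shape version in a benchmark harness is a more honest comparison, because the engine actually has to perform the load.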
"the first test Undefined variable is being heavily slowed down"
No, it's not being slowed down at all. The other case is "cheating", so to speak.
"adding const b = { a: undefined }; doesn't change anything"
That actually depends a lot on how exactly you run the test. In local testing, with minor modifications to what I force the engine to do, this addition either has no effect, or makes both functions have equal speed.
Rule of thumb #1: when you run a microbenchmark and you see several hundred million operations per second, then the optimizing compiler was able to optimize away pretty much everything, and you're testing an empty (or trivial) function.
Rule of thumb #2: the results of microbenchmarks are difficult to interpret correctly. You may have thought you were measuring property loads here, or > 0 comparisons; both assumptions would be incorrect: in the faster case, there are no properties being loaded and no > 0 comparisons being performed. To make sense of a microbenchmark, you really need to study the generated machine code (and/or other engine internals), to make sure it's testing what you think it's testing.
Rule of thumb #3: modern high-performance JavaScript engines are incredibly complex beasts, and the same snippet of JS will not always have the same performance; it depends heavily on the code around it (both the immediately surrounding lines, and far-away code elsewhere in your app can affect it).
Rule of thumb #4: the results of microbenchmarks almost never carry over to real-world code -- mostly because of the above three rules :-)
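As a language-level aside (nothing engine-specific): both snippets hinge on comparing undefined with a number. Relational operators coerce undefined to NaN, and every ordering comparison involving NaN is false, so both benchmark bodies must return false whether the engine constant-folds them or actually performs the comparison at runtime:

```javascript
// Relational operators convert undefined to a number first.
console.log(Number(undefined)); // NaN
console.log(undefined > 0);     // false
console.log(undefined < 0);     // false
console.log(undefined >= 0);    // false: NaN compares false to everything

// A missing property reads as undefined, so the property case is identical.
const b = {};
console.log(b.a > 0);           // false
```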
Side note: when you find yourself writing:
if (some_condition) {
return true;
} else {
return false;
}
then you can just replace that with return some_condition. It probably won't be faster, but it makes your code shorter.
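Applied to the benchmark above, the two bodies shrink to one line each, since a > 0 already evaluates to a boolean:

```javascript
const a = undefined;
const b = {};

// Same behavior as the if/else versions, without the redundant branching.
function undefinedVariable() {
  return a > 0;   // false: undefined coerces to NaN
}

function undefinedProperty() {
  return b.a > 0; // false: b.a is undefined
}

console.log(undefinedVariable(), undefinedProperty()); // false false
```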