Tags: javascript, optimization, v8, destructuring, micro-optimization

Why is destructuring an array in JavaScript slower than for an object?


I have run a few different variations of this, but this is the basic test I made on jsbench.me:

https://jsbench.me/j2klgojvih/1

This initial benchmark has an obvious underlying optimization that makes the object destructuring significantly faster. If you move the declaration of t into each test block, that optimization disappears, but the array destructuring still loses.

The test is a simple concept represented by:

const t = [1, 2, 3];

// Test 1 (Slower)
const [x, y, z] = t;

// Test 2 (Faster)
const {0: x, 1: y, 2: z} = t;
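
For reference, the object pattern in Test 2 amounts, roughly, to three ordinary keyed property reads with no iteration protocol involved (this is a sketch of the behavior, not engine output):

// Rough equivalent of the object pattern: plain indexed loads.
const x = t[0];
const y = t[1];
const z = t[2];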

I would think V8 (or any JS engine) could/should run the array destructuring faster; however, I have not been able to make a variation of the test where that is the case.

If I were to hazard a guess at the reasoning, it'd be that array destructuring runs some iterator to loop through the array.


Solution

  • (V8 developer here.)

    If I were to hazard a guess at the reasoning, it'd be that array destructuring runs some iterator to loop through the array.

    Yup. The spec pretty much demands it that way.
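
    The iteration is observable, which is why it can't simply be skipped. A minimal sketch: giving one array its own Symbol.iterator changes what the array pattern produces, while the object pattern keeps doing plain keyed loads.

    const t = [1, 2, 3];
    // An own-property iterator shadows the default one from Array.prototype.
    t[Symbol.iterator] = function* () {
      yield 10;
      yield 20;
      yield 30;
    };

    const [x, y, z] = t;          // goes through the iterator: 10 20 30
    const {0: a, 1: b, 2: c} = t; // plain property loads:       1  2  3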

    WHY would you iterate for statically known properties?

    In JavaScript, significantly fewer things are "statically known" than it might seem at first. And even if they're statically derivable in a microbenchmark, that might not be enough reason to optimize for them, because real-world code tends to be a lot more complicated.
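
    For example, even when the right-hand side is an array literal, a "just read the elements" fast path has to be guarded, because the shared iterator on Array.prototype can be patched at any time:

    // Patching the prototype iterator changes the behavior of ALL
    // subsequent array destructuring in the realm.
    Array.prototype[Symbol.iterator] = function* () { yield 42; };
    const [first] = [1, 2, 3];
    console.log(first); // 42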

    I am definitely asking this for the purpose of micro-optimization.

    Be aware that microbenchmarks are usually misleading, even for micro-optimizations. If your real use case differs from the benchmark, the benchmark's results are very likely not representative, and may well lead you to waste time on things that don't help or are even counter-productive.

    In this particular case, I have no reason to doubt that array destructuring will likely be somewhat slower than object destructuring regardless of circumstances; but the relative difference, and hence whether it matters at all, depends a lot on the situation (factors such as: function size, call count, inlineability, whether the results are used or ignored, whether the inputs are constant or changing, ...).
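
    If you want numbers for your own use case, a minimal timing sketch like the one below (the helper names and the 1e7 iteration count are arbitrary choices) is more useful than a generic benchmark; treat its output as indicative at best, since inlining and input shapes in real code can shift the result.

    function viaArray(t)  { const [x, y, z] = t;          return x + y + z; }
    function viaObject(t) { const {0: x, 1: y, 2: z} = t; return x + y + z; }

    function time(label, fn) {
      const input = [1, 2, 3];
      let sum = 0;
      const start = performance.now();
      for (let i = 0; i < 1e7; i++) sum += fn(input);
      console.log(label, (performance.now() - start).toFixed(1), "ms", sum);
    }

    time("array destructure ", viaArray);
    time("object destructure", viaObject);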

    So, I'm looking to see if this is likely to remain steady for a long time, or if it's something just not addressed yet.

    I don't know whether there is much untapped performance potential in array destructuring, nor whether/when someone might look into it.

    It's not designed to be incredibly performant

    Oh, yes, it is; and we keep working hard to make it even more performant.