Tags: javascript, performance, iterators, set, v8

How to iterate a Javascript Map or Object without allocating?


I'm developing a game in JavaScript, which involves running the render loop as quickly as possible. I've noticed GC (garbage collector) spikes interrupting the smooth frame rate, and after profiling I've discovered that my entity-querying system is generating a ton of garbage.

While investigating, I found that iterators cause allocations in JavaScript. In one section of my code I'm doing something like this:

let minimumSet = this.getSmallestEntitySet(componentTypes);

for (let entity of minimumSet) {
  // handle entity
}

Unfortunately, just doing a for...of loop causes allocations (because it creates a new iterator object every time the loop is run), and since this runs so often, a ton of garbage is generated. I looked around and couldn't find out whether there is a way to iterate a Set, Map, or Object without performing allocations. For example, with a JavaScript array you can iterate while avoiding allocations like this:

for (let i = 0; i < arr.length; i++) {
  let item = arr[i];

  // handle item
}

But I couldn't find a way to iterate a Map or Set like this with just a regular for loop. Is it possible?

Assuming it's not, I assume the work-around is to use a sorted array instead of a map, right? The only issue with that is that insertion and deletion are O(log n) instead of O(1), but it might be worth it just to avoid allocations.

Is that my best option, or is there a way to iterate these collections without performing allocations?


Solution

  • (V8 developer here.)

    As the question you've linked to points out, JavaScript engines (at least V8) do optimize away the temporary objects that the spec text for the iteration protocol talks about; though the full answer (as so often) is "it depends".

    One effect is that optimizing things away is something that optimizing compilers do, and those don't kick in right away (because more often than not, that would be a waste of effort). So if you only run a function a few times, you'll see its unoptimized version allocate short-lived garbage. But that'll change soon (specifics depend on the engine), as soon as the function is deemed "hot" and gets optimized.

    Another issue you might be running into is that iterating over a Map directly (as in: let map = new Map(); ...; for (let e of map) {...}) is specified to return e as an array [key, value] every time. This array is not optimized away; it has to be allocated on every iteration. But if you're only interested in processing the values anyway, you can avoid it being created by iterating over just the values: for (let v of map.values()) {...} does not allocate any short-lived objects. The same is true for iterating over map.keys().
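    A minimal sketch of that difference (the map contents here are hypothetical):

```javascript
// Iterating the Map itself yields a fresh [key, value] array per step;
// iterating .values() yields only the stored values.
const map = new Map([[1, "a"], [2, "b"]]);

let pairs = 0;
for (const e of map) {       // e is a new [key, value] array each iteration
  pairs += e.length;         // always 2
}

let joined = "";
for (const v of map.values()) {  // no per-step array is created
  joined += v;
}
// pairs === 4, joined === "ab"
```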

    If you need both key and value, you could combine an iteration over the keys with a map lookup: for (let key of map.keys()) { let value = map.get(key); ...}; however, this is quite a bit slower than iterating over the values. If your objects implement interface Entry as in your answer, i.e. they have a property carrying their key, then you can use that instead: for (let value of map.values()) { let key = value.id; ...}.
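    Side by side, the two patterns look like this (the entity objects are made-up, carrying their key in an id property as described above):

```javascript
const entities = new Map();
entities.set(1, { id: 1, name: "player" });
entities.set(2, { id: 2, name: "enemy" });

// Option A: iterate keys and look each value up (extra Map.get per step):
const namesA = [];
for (const key of entities.keys()) {
  namesA.push(entities.get(key).name);
}

// Option B: iterate values and read the key off the value itself:
const namesB = [];
for (const value of entities.values()) {
  const key = value.id;      // no Map.get needed
  namesB.push(value.name);
}
// Both produce ["player", "enemy"]
```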


    All that said: if the SparseSet solution works for you, then of course that's cool too. You can even make it a little more efficient: if you change add to add(item: T) { let key = item.id; ...} and update delete to include this.sparse.delete(key), then the set itself can guarantee that its internal data is always consistent, and then contains can be as simple as return this.sparse.get(key) !== undefined;.
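    Since the original SparseSet isn't shown here, the following is only a hedged sketch of what that consistency guarantee could look like (a packed dense array for allocation-free iteration plus a sparse Map from key to index):

```javascript
// Assumed shape: items carry their key in an `id` property.
class SparseSet {
  constructor() {
    this.dense = [];          // packed array; iterate with a plain for loop
    this.sparse = new Map();  // id -> index into `dense`
  }
  add(item) {
    const key = item.id;      // the set derives the key itself, as suggested
    if (this.sparse.has(key)) return;
    this.sparse.set(key, this.dense.length);
    this.dense.push(item);
  }
  delete(item) {
    const key = item.id;
    const index = this.sparse.get(key);
    if (index === undefined) return;
    const last = this.dense.pop();
    if (index !== this.dense.length) {
      this.dense[index] = last;        // swap-remove: fill the hole
      this.sparse.set(last.id, index); // keep the moved item's index correct
    }
    this.sparse.delete(key);           // keep the sparse map consistent
  }
  contains(key) { return this.sparse.get(key) !== undefined; }
}
```

    Because add and delete keep dense and sparse in sync internally, contains reduces to a single Map lookup.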

    I assume the work-around for this is to instead use a sorted array instead of a map, right? The only issue with that is that insertion and deletion is O(log n)

    Insertion and deletion on sorted arrays are O(n), because you may have to shift the entire contents around. (Looking up an element by its sorted key is what takes O(log n) if you use binary search for that.) However, using unsorted arrays can be faster than one might expect intuitively, as long as they remain small-ish (a couple dozen entries or so). Something like:

    class UnsortedArray<T extends Entry> {
      contents: Array<T> = [];
    
      add(item: T) { this.contents.push(item); }  // optional: check for duplicates first
      private find(key: number) {
        for (let i = 0; i < this.contents.length; i++) {
          if (this.contents[i].id == key) return i;
        }
        return -1;
      }
      size() { return this.contents.length; }
      get_by_index(index: number) { return this.contents[index]; }
      get_by_key(key: number) { return this.contents[this.find(key)]; }
      has(item: T) { return this.find(item.id) >= 0; }
      delete(item: T) {
        let index = this.find(item.id);
        if (index < 0) return;
        let last = this.contents.pop()!;
        if (index !== this.contents.length) this.contents[index] = last;
      }
    }
    

    That gives you insertion in O(1), classic iteration without overhead (for (let i = 0; i < unsorted_array.size(); i++) { let entry = unsorted_array.get_by_index(i); ...}), and deletion and has in O(n). I expect has will only actually become slower than binary search on a sorted array once you exceed 30-50 elements; and of course it depends on your use case whether has performance matters at all.
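    The O(1) deletion relies on the swap-remove trick, which is easy to miss; isolated in plain JavaScript (values hypothetical), it is just:

```javascript
// Delete index i from an unsorted array in O(1): overwrite it with the
// last element, then pop. Order is not preserved.
function swapRemove(arr, i) {
  const last = arr.pop();
  if (i !== arr.length) arr[i] = last;
  return arr;
}

const items = [10, 20, 30, 40];
swapRemove(items, 1);   // removes 20; 40 moves into its slot
// items is now [10, 40, 30]
```

    This is why an unsorted array can beat a sorted one here: a sorted array must shift everything after the deleted slot, while swap-remove touches only two elements.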