Tags: javascript, inheritance, v8, mixins

Is a deep inheritance chain slowing down method lookup in the V8 JavaScript engine?


I'm writing a base class for a game in TypeScript. It has functionality like sending messages, resource management, etc. Inspired by mixins, I wrote the following code (compiled to JavaScript):

function Messenger(Base) {
    return class Messenger extends Base {
        $dispatch(e) {
            // TODO
        }
    };
}
function ResourceManager(Base) {
    return class ResourceManager extends Base {
        $loadRes(key) {
            // TODO
            return Promise.resolve({});
        }
    };
}
class Component {
}
class GameBase extends Component {
    start() {
        console.log('start');
    }
    init() {
        console.log('init');
    }
}
const Klass = ResourceManager(Messenger(GameBase));
var gg = new Klass();
gg.start();

As far as I know, when I call gg.start, the JavaScript engine looks the method up along the prototype chain; the chain is a little longer in this case and becomes even longer as more mixins are added.
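
For example, I can walk the chain that a lookup like gg.start has to traverse, using the Klass instance from above (the printed names come from the class expressions in my code):

// Walk the prototype chain of the composed instance. A lookup for
// gg.start has to pass each of these objects until it reaches
// GameBase.prototype, where start is defined.
let proto = Object.getPrototypeOf(gg);
let hops = 0;
while (proto !== null) {
    console.log(hops, proto.constructor && proto.constructor.name);
    proto = Object.getPrototypeOf(proto);
    hops++;
}
// Prints roughly:
// 0 ResourceManager, 1 Messenger, 2 GameBase, 3 Component, 4 Object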

Does this slow down method lookup? Does V8 optimize this lookup process, and can I simply ignore the lookup overhead?


Solution

  • V8 developer here. This is a complex issue; the short answer is "it depends".

    It is trivially true that having to walk a longer prototype chain when doing a lookup takes more time. However, if that's done only once or twice, then that time is typically too short to matter.

    So the next question is: how often will such lookups be performed? V8 tries to cache lookup results whenever it can (search for the term "inline caches" if you want to know more); the effectiveness of such caching, as with all caching, critically depends on the number of different cases seen.

    So if your code is mostly "monomorphic" (i.e. at any given foo.bar lookup, foo will always have the same type/shape, including same prototype chain), or low-degree polymorphic (up to four different types of foo), then the full prototype chain walk only needs to be done once (or up to four times, respectively), and after that the cached results will be used, so if you execute such code thousands of times, you won't see a performance difference between prototype chains that are one step or hundreds of steps long.
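
    To make that concrete, here is a minimal sketch (the Point class is hypothetical, not from your code): the p.norm() site below only ever sees one object shape, so after the first full lookup the cached result is reused, no matter how long Point's prototype chain happens to be.

    // Monomorphic lookup site: every `p` has the same shape, so the
    // prototype walk for `p.norm` happens once and is cached afterwards.
    class Point {
        constructor(x, y) { this.x = x; this.y = y; }
        norm() { return Math.hypot(this.x, this.y); }
    }
    function totalNorm(points) {
        let sum = 0;
        for (const p of points) {
            sum += p.norm(); // one shape seen here -> cache hit on every iteration
        }
        return sum;
    }
    totalNorm([new Point(3, 4), new Point(6, 8)]);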

    On the other hand, if you have property loads or stores that see many different types (as tends to happen in certain frameworks, where every single lookup goes through some central getProperty(object, property) { /* do some framework stuff, and then: */ return object[property]; } function), then caching becomes useless, and V8 has to perform the full lookup every time. This is particularly slow with long prototype chains, but that said it is always much slower than cacheable cases (even with short prototype chains).
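
    A sketch of the kind of central accessor I mean (the call sites are made up for illustration): because every read funnels through the single object[property] line, that one lookup site sees many unrelated shapes, the inline cache gives up, and each call pays for a full prototype chain walk.

    function getProperty(object, property) {
        /* do some framework stuff, and then: */
        return object[property]; // this one site sees every shape in the program
    }
    getProperty({ a: 1 }, 'a');          // shape 1
    getProperty({ b: 2, a: 1 }, 'a');    // shape 2
    getProperty([1, 2, 3], 'length');    // shape 3, ... and so on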

    In conclusion, if you're somewhat careful about your overall program design and avoid having many different types at the same code locations, then you can easily afford very long prototype chains. In fact, keeping as much of your code monomorphically typed as possible tends to have significantly more impact than keeping prototype chains short. On the other hand, shorter prototype chains do make the engine's life easier, and personally I'd argue that they can (if you don't overdo it) also improve readability, so all else being equal, I'd suggest keeping your object model as simple as you can.