Tags: web-components, shadow-dom

Is it good or bad for browser performance to have many Shadow DOMs?


Shadow DOM allows us to create independent DOM trees inside our documents, each with its own node tree and (more or less) isolated style scoping, which are only composed into the parent DOM tree at render time.
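For context, here is a minimal sketch of what I mean (the element IDs are just illustrative): a shadow root whose `<style>` only matches nodes inside its own tree, while the page's CSS does not reach into it.

```html
<p>Light DOM paragraph (styled by the page's CSS)</p>
<div id="host"></div>

<style>
  p { color: blue; } /* does not apply inside the shadow tree */
</style>

<script>
  // Attach a shadow root; its contents form an independent node tree.
  const shadow = document.getElementById('host').attachShadow({ mode: 'open' });
  shadow.innerHTML = `
    <style>p { color: red; }</style> <!-- scoped: only matches this shadow tree -->
    <p>Shadow DOM paragraph</p>
  `;
</script>
```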

I am wondering about the performance implications at large scale. Is it good or bad to have many Shadow DOMs / Shadow roots on a page, as opposed to having everything in one large document?

On the one hand, I guess browsers might benefit from smaller (sub-)DOM trees and from the fewer style rules they have to evaluate when rendering an isolated Shadow DOM that contains only the nodes and styles actually relevant to its content. This might reduce the computational effort.

On the other hand, will the overhead of the additional "document-like" metadata, or of "merging" the DOM trees at render time, slow the browser down or significantly increase memory usage?


Solution

  • Update June 2022

A DOM tree is still a DOM tree, with or without Shadow DOM. The notion of an isolated DOM tree doesn't really exist at the browser level; it is only a conceptual model. Even with Shadow DOM, the browser still has to manage styles and CSS properties that cross Shadow DOM boundaries, such as inherited properties and CSS custom properties.

Rendering a page is a multi-stage process for the browser. Having Shadow DOM on the page will certainly help with style calculation. However, the big chunk of the browser's work is repaint, reflow and JavaScript execution, which still happens with the same amount of work. So the actual impact with/without Shadow DOM is largely theoretical.

Having said that, irrespective of whether you use Shadow DOM, two things matter from a performance standpoint. First, minimize DOM access by avoiding unnecessary changes and batching DOM updates (and not reading DOM properties like width or height between those updates, which forces layout recalculation). Second, use reasonably specific class or ID selectors, which are simply more efficient than plain element selectors.
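    The batching point can be sketched as follows (the `.box` class and the +10px resize are just illustrative); interleaving reads and writes forces a synchronous layout per iteration, while grouping them lets the browser compute layout once.

    ```html
    <script>
      const boxes = document.querySelectorAll('.box');

      // BAD: interleaved read/write forces a layout recalculation per iteration.
      // boxes.forEach(box => {
      //   const w = box.offsetWidth;         // read  -> forces layout
      //   box.style.width = (w + 10) + 'px'; // write -> invalidates layout
      // });

      // BETTER: batch all reads, then all writes, so layout is computed once.
      const widths = [...boxes].map(box => box.offsetWidth); // all reads first
      boxes.forEach((box, i) => {
        box.style.width = (widths[i] + 10) + 'px';           // then all writes
      });
    </script>
    ```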

    Continuing the same point, you can look at CSS Containment, a specification aimed at improving page performance by letting developers isolate a subtree of the page from the rest of it. You can use it with or without Shadow DOM. There was some talk of combining CSS Containment with Shadow DOM, but nothing really came of it.
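    A small sketch of what that looks like (the `.card` class is illustrative); `contain` tells the browser that a subtree's layout, paint and style are independent of the rest of the page, so changes inside it don't trigger work outside it:

    ```html
    <style>
      /* Declare each card's layout, paint and style independent of the page. */
      .card {
        contain: layout paint style; /* roughly equivalent to: contain: content */
      }

      /* content-visibility builds on containment and lets the browser skip
         rendering work for off-screen subtrees entirely. */
      .card {
        content-visibility: auto;
      }
    </style>
    ```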

    So, in a nutshell, Shadow DOM is simply about providing an isolated (public/private API) component model for web applications; any performance benefit is a side effect of the browser implementation. CSS Containment, on the other hand, is an official specification for providing performance hints to the browser.

    Original answer

    You are caught in a premature-optimization loop. A million nodes, with or without Shadow DOM, does not make a difference.

    And, if you want to think about performance, then worry about:

    1. Minimizing DOM access - use a Virtual DOM or Incremental DOM approach
    2. Avoiding operations that cause browser reflows in critical loops
    3. Avoiding heavy computations on the UI/main thread
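    For the third point, a common pattern is to move heavy work into a Web Worker so the main thread stays free for rendering. A minimal sketch (the summation loop is just a stand-in for real work, and the worker source is inlined via a Blob URL to keep the example self-contained):

    ```html
    <script>
      const workerSrc = `
        onmessage = (e) => {
          let sum = 0;
          for (let i = 0; i < e.data; i++) sum += i; // stand-in for heavy work
          postMessage(sum);
        };
      `;
      const worker = new Worker(
        URL.createObjectURL(new Blob([workerSrc], { type: 'text/javascript' }))
      );
      worker.onmessage = (e) => console.log('result:', e.data); // UI never blocked
      worker.postMessage(1e8); // returns asynchronously; main thread stays responsive
    </script>
    ```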