Tags: javascript, performance, browser, benchmarking

What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?


I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.

I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)

Are there any examples of this sort of thing being done?

What would be the best ways to measure this, accounting for things like temporary slowdowns on busy CPUs?

Notes:

  • I'm not interested in browser/OS detection.
  • At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
  • Things that might be interesting to measure (a rough timing sketch follows this list):
    • Base JavaScript
    • DOM manipulation
    • DOM/CSS rendering
  • I'd like to do this in a way that affects the page's render-speed as little as possible.
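
To give a feel for what I mean, here's a very rough sketch of the kind of timing I'm imagining. The iteration counts are arbitrary, and it would need to run once the DOM is ready:

```javascript
// Rough sketch only - iteration counts are arbitrary, and this should run
// after the DOM is ready so document.body exists.
function now() {
  return (window.performance && performance.now) ? performance.now() : Date.now();
}

// "Base JavaScript": a tight loop of pure computation.
function benchBaseJs() {
  var start = now(), x = 0;
  for (var i = 0; i < 100000; i++) { x += Math.sqrt(i); }
  return now() - start; // elapsed milliseconds
}

// "DOM manipulation": create, attach and then discard a batch of elements.
function benchDom() {
  var container = document.createElement('div');
  container.style.visibility = 'hidden'; // keep the test off-screen
  document.body.appendChild(container);
  var start = now();
  for (var i = 0; i < 200; i++) {
    var span = document.createElement('span');
    span.appendChild(document.createTextNode('x'));
    container.appendChild(span);
  }
  var elapsed = now() - start;
  document.body.removeChild(container); // clean up after the test
  return elapsed;
}
```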

BTW: In order to not confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.

[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]


Solution

  • Some Ideas:

    • Putting a time-limit on the tests seems like an obvious choice.
    • Storing test results in a cookie also seems obvious (a rough sketch of both ideas follows this list).
    • Poor performance on a test could pause further scripts
      • and trigger display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers)
      • that asks the user whether they want to opt into further scripting effects - and stores the answer in a cookie.
      • While the prompt is unanswered, periodically repeat the tests and auto-accept it if consecutive tests finish faster than the first one (sketched after this list).
    • On a side note - slow network speeds could also probably be tested
      • by timing the download of external resources (like the page's own CSS or JavaScript files)
      • and comparing that result with the JavaScript benchmark results (see the download-timing sketch below).
      • This may be useful on sites relying on loads of XHR effects and/or heavy use of <img/>s.
    • It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
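
As a very rough sketch of the time-limit and cookie ideas - one reading of "time-limit" is to run a fixed chunk of work and treat the browser as too slow if it blows a small budget. The cookie name, the 200 ms budget and the workload size below are all made up:

```javascript
// Illustrative only: the cookie name, time budget and workload size are guesses.
var TIME_BUDGET_MS = 200;

function runPerfTest() {
  var start = Date.now(), x = 0;
  for (var i = 0; i < 100000; i++) { x += Math.sqrt(i); }
  return Date.now() - start; // elapsed ms; bigger means a slower browser
}

function readPerfCookie() {
  var match = document.cookie.match(/(?:^|;\s*)perf_ok=([^;]+)/);
  return match ? match[1] : null;
}

function writePerfCookie(value) {
  document.cookie = 'perf_ok=' + value + '; path=/; max-age=' + (60 * 60 * 24 * 30);
}

var verdict = readPerfCookie();
if (verdict === null) {
  verdict = (runPerfTest() <= TIME_BUDGET_MS) ? '1' : '0';
  writePerfCookie(verdict); // remember the result for later page views
}
if (verdict === '1') {
  // enableFancyEffects(); // hypothetical hook that turns on the heavy JS/CSS
}
```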
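And a sketch of the pause/prompt/retry flow, reusing the hypothetical runPerfTest(), writePerfCookie() and TIME_BUDGET_MS from the previous snippet; the #perf-prompt banner and its .accept/.decline buttons are assumed to exist in the page:

```javascript
// Assumes a small, non-blocking banner like:
// <div id="perf-prompt" style="display:none">Enable extra effects?
//   <button class="accept">Yes</button> <button class="decline">No</button></div>
function showPerfPrompt(onAccept, onDecline) {
  var banner = document.getElementById('perf-prompt');
  banner.style.display = 'block';
  banner.querySelector('.accept').onclick = function () { banner.style.display = 'none'; onAccept(); };
  banner.querySelector('.decline').onclick = function () { banner.style.display = 'none'; onDecline(); };
}

var firstResult = runPerfTest();
if (firstResult > TIME_BUDGET_MS) {
  // Too slow: hold back the heavy effects and ask the user.
  var retryTimer = setInterval(function () {
    // While the prompt is unanswered, re-test; auto-accept if a later run beats the first.
    if (runPerfTest() < firstResult) {
      clearInterval(retryTimer);
      writePerfCookie('1');
      // enableFancyEffects(); // hypothetical hook
    }
  }, 10000); // arbitrary 10 s retry interval

  showPerfPrompt(
    function () { clearInterval(retryTimer); writePerfCookie('1'); /* enableFancyEffects(); */ },
    function () { clearInterval(retryTimer); writePerfCookie('0'); }
  );
}
```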
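Finally, a rough version of the download-timing idea; the CSS URL is a placeholder, and the cache-busting query string forces a real transfer so the timing isn't just a cache hit:

```javascript
// Time the (re-)download of a small, same-origin asset the page already uses.
// '/css/site.css' is a placeholder URL; the query string defeats the cache.
function benchDownload(url, callback) {
  var xhr = new XMLHttpRequest();
  var start = Date.now();
  xhr.open('GET', url + '?nocache=' + start, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      callback(Date.now() - start); // elapsed ms for the transfer
    }
  };
  xhr.send();
}

benchDownload('/css/site.css', function (networkMs) {
  var scriptMs = runPerfTest(); // hypothetical JS benchmark from the earlier sketch
  // Arbitrary rule of thumb: a slow network or a slow CPU disables the XHR/<img>-heavy effects.
  var enableHeavyEffects = networkMs < 1000 && scriptMs <= TIME_BUDGET_MS;
});
```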