javascript · google-chrome · v8 · profiler

JavaScript Profiler in Chrome 79 for Windows: What does self-time include?


I'm running in-browser JavaScript in Chrome 79 for Windows. From similar threads, it sounds like self-time includes only time to run the in-line code within a particular function, and excludes any time spent running sub functions.

But in practice, I have noticed that some functions in my app that make a lot of sub-calls seem to have an inordinate amount of self-time compared to other similarly-sized functions with few or no sub-calls (i.e., I'm comparing two functions with relatively similar operations and a similar number of ops). The self-time of these two functions can vary by 10x.

I'm wondering if self-time includes time to prepare for those calls, etc?

Perhaps some of that higher self-time of the function with sub-calls is due to later optimization by V8, and therefore during the profiler's sampling window I'm comparing the self-time of an optimized function against a not-yet-optimized one, which could run 100x slower prior to optimization. Maybe this is the culprit?


Solution

  • self-time includes only time to run the in-line code within a particular function, and excludes any time spent running sub functions

    Yes, "self time" is the number of tick samples that occurred in the given function.

    I'm wondering if self-time includes time to prepare for those calls, etc?

    "time to prepare calls" is not measured separately.

    I have noticed that some functions in my app that make a lot of sub-calls seem to have an inordinate amount of self-time

    I would guess that what you're observing is caused by inlining. When a function gets optimized, and the compiler decides to inline one or more called functions, then the profiler afterwards can't possibly distinguish which instructions originally came from where (part of the reason why inlining can be beneficial is because it can allow elimination of redundancies, which naturally blurs the lines of which original function a given instruction "belongs to"). Does that make sense?
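    As a hedged sketch of that effect (function names invented for illustration): a tiny helper is a likely inlining candidate, and once the optimizing compiler inlines it, samples that would have hit the helper are attributed to the caller instead, inflating the caller's self time.

      // square() is small, so TurboFan will likely inline it into sumOfSquares()
      // once sumOfSquares() gets optimized.
      function square(x) {
        return x * x;
      }

      function sumOfSquares(n) {
        let total = 0;
        for (let i = 0; i < n; i++) {
          // After inlining, the multiply runs inside sumOfSquares()'s own code,
          // so samples taken here count toward sumOfSquares()'s self time.
          total += square(i);
        }
        return total;
      }

      sumOfSquares(1e8);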

    If you want to exclude the effects of inlining when profiling, you can turn off inlining. In Node/V8, you can run with --noturbo-inlining. (FWIW, in C/C++ this is true as well, where GCC/Clang understand -fno-inline.) Note that turning off inlining changes the performance characteristics of your app, so it can yield misleading results (specifically: it could be that without inlining you'll observe a performance issue that simply goes away when inlining is turned on); but it can also be helpful for pinpointing what is slow.
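    For example (assuming a Node script named app.js, which is just a placeholder), one could compare node --prof app.js against node --prof --noturbo-inlining app.js and process the resulting logs with node --prof-process. For the in-browser case from the question, the same V8 flag can be passed to Chrome via its --js-flags switch (e.g. chrome --js-flags="--noturbo-inlining") before re-recording the DevTools profile.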