
Timestamps for custom output-only AudioWorkletProcessor


I'd like to build a very low-level audio output node using the Web Audio API's AudioWorkletProcessor. All of the examples out there seem to implement processors that transform input samples into output samples, but what I'd like to do is produce output only, based purely on the timestamp of each sample.

According to MDN, BaseAudioContext.currentTime is not a precise source for this timestamp:

To offer protection against timing attacks and fingerprinting, the precision of audioCtx.currentTime might get rounded depending on browser settings. In Firefox, the privacy.reduceTimerPrecision preference is enabled by default and defaults to 20us in Firefox 59; in 60 it will be 2ms.

One hacky solution might be to combine BaseAudioContext.sampleRate, a running counter, and the size of the output arrays (sketched below), but that only works if every sample is computed in order and without drops, and I'm not sure those are valid assumptions.
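For concreteness, here is a minimal sketch of that counter-based idea, assuming an output-only processor; the class name, file name, and the 440 Hz test tone are made up for illustration:

```javascript
// timestamp-counter-processor.js (hypothetical file name)
// Sketch of the counter-based idea: track how many frames have been
// emitted so far and divide by sampleRate to derive a per-sample timestamp.
class TimestampedOutputProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this._framesEmitted = 0; // running frame counter
  }

  process(inputs, outputs) {
    const output = outputs[0];
    const frameCount = output[0].length; // typically 128 frames per render quantum

    for (let i = 0; i < frameCount; i++) {
      // Only valid if no blocks are dropped and they run in order.
      const t = (this._framesEmitted + i) / sampleRate; // sampleRate is a global in the worklet scope
      const sample = 0.1 * Math.sin(2 * Math.PI * 440 * t); // example: quiet 440 Hz tone
      for (const channel of output) {
        channel[i] = sample;
      }
    }

    this._framesEmitted += frameCount;
    return true; // keep the processor alive
  }
}

registerProcessor('timestamped-output-processor', TimestampedOutputProcessor);
```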

Within a processing frame, is there a reliable way to know the timestamp that corresponds to a given sample index?


Solution

  • Inside the AudioWorkletGlobalScope you have access to currentTime as well as currentFrame. Both are available as globals.

    https://webaudio.github.io/web-audio-api/#AudioWorkletGlobalScope-attributes

    As far as I know, they are accurate in any browser that supports AudioWorklet. A sketch of an output-only processor built on these globals follows below.
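As a concrete illustration, here is a minimal sketch of an output-only processor that derives each sample's timestamp from the currentFrame and sampleRate globals; the processor name, file name, and the 440 Hz test tone are assumptions for the example:

```javascript
// tone-processor.js (hypothetical file name)
// Uses the AudioWorkletGlobalScope globals currentFrame and sampleRate
// to compute the timestamp of each sample in the current render quantum.
class ToneProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const output = outputs[0];
    const frameCount = output[0].length;

    for (let i = 0; i < frameCount; i++) {
      // currentFrame is the frame index of the first sample in this block,
      // so (currentFrame + i) / sampleRate is the time of sample i in seconds.
      const t = (currentFrame + i) / sampleRate;
      const sample = 0.1 * Math.sin(2 * Math.PI * 440 * t); // example: quiet 440 Hz tone
      for (const channel of output) {
        channel[i] = sample;
      }
    }

    return true; // keep processing
  }
}

registerProcessor('tone-processor', ToneProcessor);
```

On the main thread you would load and connect it roughly like this (again, names are illustrative):

```javascript
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('tone-processor.js');
const node = new AudioWorkletNode(ctx, 'tone-processor');
node.connect(ctx.destination);
```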