The Web Audio API examples show a source stream going straight into an audio context destination node, like the speakers, or being drawn to a canvas. I want to collect data from an AnalyserNode and then render a React component with it. The best I came up with is to poll on an interval:
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
audioContext.createMediaStreamSource(stream).connect(analyser);
const buffer = new Float32Array(analyser.fftSize);
let collect = [];
let analyserPoll;
const mediaRecorder = new MediaRecorder(stream);

// start button click
mediaRecorder.start();
analyserPoll = setInterval(() => {
  analyser.getFloatTimeDomainData(buffer);
  collect = [...collect, buffer.slice(0)]; // copy the snapshot, or every entry aliases the same buffer
}, 500);

// stop button click
mediaRecorder.stop();
clearInterval(analyserPoll);
Is there a more official way of doing this inside the API, without the setInterval? For instance, saving to a Blob or file and then running my analyser code on that? It's a McLeod pitch detector.
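Roughly what I mean by the Blob idea, as an untested sketch (it assumes decodeAudioData can handle whatever container the MediaRecorder produces):

// collect the recorded chunks, then decode them and analyse offline
const chunks = [];
mediaRecorder.ondataavailable = (e) => chunks.push(e.data);

mediaRecorder.onstop = async () => {
  const blob = new Blob(chunks, { type: mediaRecorder.mimeType });
  const arrayBuffer = await blob.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  const samples = audioBuffer.getChannelData(0); // Float32Array of the whole take
  // run the McLeod pitch detector over `samples` here
};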
AudioWorklets are not required. What's required is a proper design of state and effects. Create an effectful action that sets state for the things you don't want to render, like the buffer:
useEffect(() => {
  // getFloatTimeDomainData
  // setBufferState
}, [buffer]); // run the effect when these deps change
Then, after the Web Audio collection is over, e.g. when the user hits a "Stop" button, set the data you want to render from the buffer:
setData(buffer)
The effect won't cause a render as long as you leave buffer alone. This is helpful for expensive components and for collecting data.
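Put together, one way to implement the idea looks roughly like this (a minimal sketch: Recorder, PitchChart, and the analyser prop are placeholders, the non-rendering buffer is held in a ref, and the effect is keyed on a recording flag rather than on the buffer itself):

import { useEffect, useRef, useState } from "react";

function Recorder({ analyser }) {
  const collectRef = useRef([]);           // buffered frames; mutating a ref never re-renders
  const [recording, setRecording] = useState(false);
  const [data, setData] = useState(null);  // only this triggers a render of the chart

  // effectful collection: runs while recording, touches nothing that renders
  useEffect(() => {
    if (!recording) return;
    const id = setInterval(() => {
      const frame = new Float32Array(analyser.fftSize);
      analyser.getFloatTimeDomainData(frame);
      collectRef.current.push(frame);
    }, 500);
    return () => clearInterval(id);
  }, [recording, analyser]);

  const stop = () => {
    setRecording(false);
    setData(collectRef.current.slice()); // hand the collected data to the renderer once
  };

  return (
    <div>
      <button onClick={() => setRecording(true)}>Start</button>
      <button onClick={stop}>Stop</button>
      {data && <PitchChart frames={data} />} {/* PitchChart is hypothetical */}
    </div>
  );
}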
There are a ton of edge cases. The user needs to make a gesture before the audio API can be used. Audio Worklets are streams, so there's no real way to persist data inside them. Sending messages on a port gets you the same result, but it's more complicated, as sketched below.
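For completeness, the port-message route looks roughly like this (a sketch: the processor name and file name are made up, and source is the MediaStreamAudioSourceNode from the question):

// collector-processor.js — runs on the audio rendering thread
class CollectorProcessor extends AudioWorkletProcessor {
  process(inputs) {
    const channel = inputs[0][0];
    if (channel) {
      this.port.postMessage(channel.slice(0)); // copy; the input buffer is reused between calls
    }
    return true; // keep the processor alive
  }
}
registerProcessor("collector-processor", CollectorProcessor);

// main thread
await audioContext.audioWorklet.addModule("collector-processor.js");
const workletNode = new AudioWorkletNode(audioContext, "collector-processor");
source.connect(workletNode);
workletNode.port.onmessage = (e) => collect.push(e.data); // one small Float32Array per render quantum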