I'm currently trying to create an audio visualisation using the Web Audio API; specifically, I'm attempting to produce Lissajous figures from a given audio source.
I came across this post, but I'm missing some preconditions. How can I get the time-domain data for the left and right channels separately? Currently I seem to be getting only the merged data.
Any help or hint would be much appreciated.
$(document).ready(function () {
  var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  var audioElement = document.getElementById('audioElement');
  var audioSrc = audioCtx.createMediaElementSource(audioElement);
  var analyser = audioCtx.createAnalyser();

  // Bind the analyser to the media element source.
  audioSrc.connect(analyser);
  audioSrc.connect(audioCtx.destination);

  // Time-domain data has up to analyser.fftSize samples.
  var timeDomainData = new Uint8Array(analyser.fftSize);

  // Loop and update the time-domain data array.
  function renderChart() {
    requestAnimationFrame(renderChart);
    // Copy the current waveform (time-domain) data into the array.
    analyser.getByteTimeDomainData(timeDomainData);
    // Debugging: print to console.
    console.log(timeDomainData);
  }

  // Run the loop.
  renderChart();
});
The observation is correct: the waveform is the down-mixed result. From the current spec (my emphasis):
Copies the current down-mixed time-domain (waveform) data into the passed unsigned byte array. [...]
To get around this you could use a channel splitter (createChannelSplitter()) and connect each channel to its own analyser node.
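A minimal sketch of that routing, reusing the audioCtx and audioSrc variables from your snippet:

var splitter = audioCtx.createChannelSplitter(2);
var analyserL = audioCtx.createAnalyser();
var analyserR = audioCtx.createAnalyser();

// Feed the stereo source into the splitter; output 0 is the
// left channel, output 1 the right channel.
audioSrc.connect(splitter);
splitter.connect(analyserL, 0);
splitter.connect(analyserR, 1);

// Keep the unsplit signal connected to the speakers.
audioSrc.connect(audioCtx.destination);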
For more details on createChannelSplitter(), see this link.
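Each frame you can then read both analysers and plot left against right to get the Lissajous figure. Here is a sketch of the drawing loop, assuming the analyserL / analyserR nodes from above and a &lt;canvas id="scope"&gt; element (the id is a placeholder):

var canvas = document.getElementById('scope');
var canvasCtx = canvas.getContext('2d');
var left = new Uint8Array(analyserL.fftSize);
var right = new Uint8Array(analyserR.fftSize);

function renderLissajous() {
  requestAnimationFrame(renderLissajous);
  analyserL.getByteTimeDomainData(left);
  analyserR.getByteTimeDomainData(right);

  canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
  canvasCtx.beginPath();
  for (var i = 0; i < left.length; i++) {
    // Byte values run 0-255, with 128 meaning silence; map the
    // left sample to x and the right sample to y.
    var x = (left[i] / 255) * canvas.width;
    var y = (right[i] / 255) * canvas.height;
    if (i === 0) canvasCtx.moveTo(x, y);
    else canvasCtx.lineTo(x, y);
  }
  canvasCtx.stroke();
}
renderLissajous();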