JS AudioContext Interface: Am I doing this right?


I have the following function:

var PE_AudioManager_playSe = AudioManager.playSe;
AudioManager.playSe = function(se) {

    if (se.name.substring(0,5) === `data:`) {

        let audioContext = new (window.AudioContext || window.webkitAudioContext)();

        let gainNode = audioContext.createGain();
        gainNode.gain.value = (se.volume / 100) || 0;

        let panNode = audioContext.createStereoPanner();
        panNode.pan.value = (se.pan / 100) || 0;

        let source = audioContext.createBufferSource();
        audioContext.decodeAudioData(se.name.split(`,`)[1].base64ToArrayBuffer(), function(buffer) {
            source.buffer = buffer;
            source.connect(gainNode);
            source.connect(panNode);
            source.connect(audioContext.destination);
            source.detune.value = (se.pitch - 100);
            source.start(0);
         });

    } else {

        PE_AudioManager_playSe.call(this,se);
    };

};

It aliases an existing function that handles the playing of audio sound effects. The alias "intercepts" the routine and uses the AudioContext interface to play the sound if the source object's .name property is a data URI / base64 string rather than a filename.
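The snippet above calls a `base64ToArrayBuffer` string method, which is not part of standard JavaScript; presumably it is defined elsewhere in the project. A minimal sketch of such a helper, assuming it decodes base64 into the `ArrayBuffer` that `decodeAudioData` expects, might look like:

```javascript
// Hypothetical helper assumed by the snippet above: decodes a base64
// string into an ArrayBuffer suitable for decodeAudioData.
String.prototype.base64ToArrayBuffer = function () {
    const binary = atob(this);                  // base64 -> binary string
    const bytes = new Uint8Array(binary.length);
    for (let i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);        // copy each byte value
    }
    return bytes.buffer;                        // the underlying ArrayBuffer
};
```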

The sound effect plays without problem, except I don't think I am doing the panning (.createStereoPanner) or volume (.createGain) correctly; I don't think I hear a difference when I adjust the pan or volume. But I could be wrong / crazy.

Does this code look correct? Can anybody point me in the right direction? Thank you in advance.


Solution

  • The GainNode's gain and the StereoPannerNode's pan parameters have min and max values. Control your input so that those ranges are honored. But the problem lies elsewhere.

    const ctx = new AudioContext();
    const gainNode = ctx.createGain();
    const panNode = ctx.createStereoPanner();
    
    console.log(gainNode.gain.minValue, gainNode.gain.maxValue);
    console.log(panNode.pan.minValue, panNode.pan.maxValue);
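To honor those ranges, you could clamp values before assigning them. A small sketch (`clampToParam` is a hypothetical helper, not part of the Web Audio API):

```javascript
// Hypothetical helper: clamp a value into an AudioParam's legal range
// before assigning it, e.g. gainNode.gain.value = clampToParam(gainNode.gain, v);
function clampToParam(param, value) {
    return Math.min(param.maxValue, Math.max(param.minValue, value));
}
```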

    The connection of the nodes is critical. What helps me is to think of it as a guitar (or any other electrical instrument) with wires that have to be connected. One wire goes from the guitar to the gain pedal, the next from the gain pedal to the pan pedal, and the last from the pan pedal to the amp, which outputs the signal.

    Same goes for your nodes. Connect the source (guitar) to the gainNode (gain pedal) then the gainNode to the panNode (pan pedal) and the panNode to the audioContext.destination (the amp).

    audioContext.decodeAudioData(se.name.split(`,`)[1].base64ToArrayBuffer(), function(buffer) {
        source.buffer = buffer;
        source.connect(gainNode);
        gainNode.connect(panNode);
        panNode.connect(audioContext.destination);
        source.detune.value = (se.pitch - 100);
        source.start(0);
    });
    

    Really try to visualize it like that. Maybe even draw it on paper if you plan to make it more complex.

    Multiple nodes can be connected to a single destination. Like having multiple sources which flow through the same effects to the destination. You can even make a switchboard out of this by connecting and disconnecting your nodes to and from different destinations, depending on what you need.
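    The switchboard idea can be sketched like this (`routeThroughSharedChain` is a hypothetical helper, not part of the Web Audio API):

```javascript
// Sketch (browser-only): two sources sharing one gain -> pan -> destination
// chain, with connect()/disconnect() acting as a simple switchboard.
function routeThroughSharedChain(audioContext, sourceA, sourceB) {
    const gainNode = audioContext.createGain();
    const panNode = audioContext.createStereoPanner();

    // Both sources feed the same effects chain.
    sourceA.connect(gainNode);
    sourceB.connect(gainNode);
    gainNode.connect(panNode);
    panNode.connect(audioContext.destination);

    // Rewire on demand: detach or reattach a source to the shared chain.
    return {
        mute(source) { source.disconnect(gainNode); },
        unmute(source) { source.connect(gainNode); }
    };
}
```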

    Hope this helps. If you have any questions or if I have been unclear, please let me know.