
Audio synthesis best practices


I want to write a music program from scratch. Big goal, yes; I have no clear intention of finishing anything. This is mainly a personal project for learning. :P

The first step is building the oscillators and instruments. An instrument will probably be a combination of oscillators and filters (and envelopes + effects). Now, my first question is: How should I build the wave generators?

Imagine I have a track that plays different notes with instrument X. I imagine it's best to "pre-render" these notes. So I would pay an up-front cost to run my wave functions to generate an array of numbers that represent a wave. Say I want to do this at a sample rate of 44.1 kHz; does that mean I'll have an array of 44.1k items per second of sound per instrument?

I think this question itself is language agnostic. But I'm planning on using JavaScript because I'll run this in a browser.


Solution

  • Audio is just a curve, so to build your oscillator you come up with an algorithm that outputs a curve. Software, being digital rather than analog, demands the curve be defined as a series of points in time (samples), where each value is the instantaneous height of the audio curve. Typically these samples are taken 44100 times per second, i.e. at 44100 Hz - so yes, one second of sound is 44100 numbers per channel per instrument.
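    To make that concrete, here is a minimal sketch in plain JS (no Web Audio yet) that renders one second of a 440 Hz sine wave as an array of samples. The function name `renderSine` is just for illustration:

    ```javascript
    // Render `seconds` of a sine wave at `freq` Hz into a Float32Array.
    // At a 44100 Hz sample rate, one second of audio is 44100 samples
    // per channel -- exactly the "44.1k items per second" from the question.
    function renderSine(freq, seconds, sampleRate) {
      sampleRate = sampleRate || 44100;
      var buffer = new Float32Array(Math.round(seconds * sampleRate));
      for (var i = 0; i < buffer.length; i++) {
        buffer[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
      }
      return buffer;
    }

    var oneSecond = renderSine(440, 1); // 44100 samples
    ```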

    Check out the Web Audio API - it's surprisingly powerful and very well supported. Just to get an appreciation of its flexibility, check out this demo written by a Google staffer:

    Web Audio Playground
    http://webaudioplayground.appspot.com/
    

    Among other audio widgets, it offers black-box oscillators, yet it also lets you roll your own and render your synthesized or file-based audio data in real time. It's modular: each component is called a node, and you build a synth by linking these nodes together.
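    As a sketch of that node-linking style, using the built-in black-box OscillatorNode and GainNode (the helper name `buildGraph` is mine, not part of the API; the AudioContext is passed in so the wiring stays visible):

    ```javascript
    // Build a tiny node graph: oscillator -> gain -> speakers.
    function buildGraph(ctx) {
      var osc = ctx.createOscillator();  // black-box oscillator node
      var amp = ctx.createGain();        // volume-control node
      osc.type = 'sawtooth';             // 'sine' | 'square' | 'sawtooth' | 'triangle'
      osc.frequency.value = 220;         // Hz
      amp.gain.value = 0.2;              // keep the volume sane
      osc.connect(amp);                  // link the nodes...
      amp.connect(ctx.destination);      // ...and route them to the speakers
      return osc;
    }

    // In a browser: buildGraph(new AudioContext()).start();
    ```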

    Here is the definition of a callback used to synthesize audio (an oscillator):

    function setup_onaudioprocess_callback(given_node) {

        given_node.phi = 0; // persist phase across buffers to avoid clicks at buffer boundaries

        given_node.onaudioprocess = function (event) {

            if (!allow_synth) return; // allow_synth is a global flag in the original demo

            // TODO - fill both channels, not just the left
            // TODO - supply your own buffer (e.g. a circular ring of buffers) instead of the event's
            var synthesized_output_buffer = event.outputBuffer.getChannelData(0);

            var dphi = 2.0 * Math.PI * given_node.sample_freq / event.outputBuffer.sampleRate;

            for (var curr_sample = 0; curr_sample < synthesized_output_buffer.length; curr_sample++) {

                synthesized_output_buffer[curr_sample] = Math.sin(given_node.phi);
                given_node.phi += dphi;
            }

            given_node.phi %= 2.0 * Math.PI; // keep phi small so Math.sin stays accurate

            // sweep the pitch up and down between MIN_FREQ and MAX_FREQ
            given_node.sample_freq *= given_node.freq_factor;

            if (given_node.sample_freq < given_node.MIN_FREQ) {

                given_node.freq_factor = given_node.increasing_freq_factor;

            } else if (given_node.sample_freq > given_node.MAX_FREQ) {

                given_node.freq_factor = given_node.decreasing_freq_factor;
            }

            // hand the buffer to the demo's visualizer (audio_display_obj is a global there)
            audio_display_obj.pipeline_buffer_for_time_domain_cylinder(
                synthesized_output_buffer, BUFF_SIZE, "providence_2");
        };
    }
    

    It would be used with a node generated by createScriptProcessor (note: ScriptProcessorNode has since been deprecated in favour of AudioWorklet, though browsers still support it):

    // attach the frequency-sweep parameters to the node so the callback can read them
    function init_synth_settings(given_node, g_MIN_FREQ, g_MAX_FREQ, g_BUFF_SIZE, g_decreasing_freq_factor, g_increasing_freq_factor) {
    
        given_node.MIN_FREQ = g_MIN_FREQ;
        given_node.MAX_FREQ = g_MAX_FREQ;
    
        given_node.sample_freq = given_node.MIN_FREQ; // Hertz
        given_node.BUFF_SIZE = g_BUFF_SIZE;
    
        given_node.decreasing_freq_factor = g_decreasing_freq_factor;
        given_node.increasing_freq_factor = g_increasing_freq_factor;
        given_node.freq_factor = g_increasing_freq_factor;
    }
    
    var this_glob_01 = audio_context.createScriptProcessor(BUFF_SIZE, 1, 1); // BUFF_SIZE is a global power of two, e.g. 2048
    
    init_synth_settings(this_glob_01, 20, 300, BUFF_SIZE, 0.98, 1.01);
    
    setup_onaudioprocess_callback(this_glob_01);
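    One detail the snippet leaves implicit: a ScriptProcessorNode's onaudioprocess callback only starts firing once the node is connected into a graph that reaches the destination. A minimal sketch of the remaining wiring (the helper name `wireSynth` is mine; `audio_context` and `this_glob_01` are from the snippet above):

    ```javascript
    // Connect the synth node to the speakers so the audio engine starts
    // pulling buffers through it (and thus invoking onaudioprocess).
    function wireSynth(ctx, synthNode) {
      synthNode.connect(ctx.destination);
      return synthNode;
    }

    // In the browser: wireSynth(audio_context, this_glob_01);
    ```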
    

    This should get you over the hump.