Tags: core-audio, audiounit, audiotoolbox, avaudioengine, superpowered

AudioUnit output buffer and input buffer


My question is: what should I do when I apply real-time time stretching? I understand that changing the rate changes the number of output samples. For example, stretching audio by a factor of 2.0 produces an output buffer that is twice as large.

So, what should I do when implementing reverb, delay, or real-time time stretching?

For example, my input buffer holds 1024 samples. After stretching with a 2.0 coefficient, the buffer holds 2048 samples.

In the code below, which uses the Superpowered time stretcher, everything works as long as I do not change the rate. When I do change the rate, the output sounds distorted and the speed does not actually change.

return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                          const AudioTimeStamp       *timestamp,
                          AVAudioFrameCount          frameCount,
                          NSInteger                  outputBusNumber,
                          AudioBufferList            *outputBufferListPtr,
                          const AURenderEvent        *realtimeEventListHead,
                          AURenderPullInputBlock     pullInputBlock) {

    // Pull frameCount frames of input into the capture buffer list.
    pullInputBlock(actionFlags, timestamp, frameCount, 0, renderABLCapture);

    Float32 *sampleDataInLeft  = (Float32 *)renderABLCapture->mBuffers[0].mData;
    Float32 *sampleDataInRight = (Float32 *)renderABLCapture->mBuffers[1].mData;

    Float32 *sampleDataOutLeft  = (Float32 *)outputBufferListPtr->mBuffers[0].mData;
    Float32 *sampleDataOutRight = (Float32 *)outputBufferListPtr->mBuffers[1].mData;

    // Wrap the pulled input in a Superpowered buffer list element.
    SuperpoweredAudiobufferlistElement inputBuffer;
    inputBuffer.samplePosition = 0;
    inputBuffer.startSample = 0;
    inputBuffer.samplesUsed = 0;
    inputBuffer.endSample = frameCount;
    inputBuffer.buffers[0] = SuperpoweredAudiobufferPool::getBuffer(frameCount * 8 + 64);
    inputBuffer.buffers[1] = inputBuffer.buffers[2] = inputBuffer.buffers[3] = NULL;

    // The time stretcher works on interleaved stereo audio.
    SuperpoweredInterleave(sampleDataInLeft, sampleDataInRight, (Float32 *)inputBuffer.buffers[0], frameCount);

    timeStretch->setRateAndPitchShift(1.0f, -2);
    timeStretch->setSampleRate(48000);
    timeStretch->process(&inputBuffer, outputBuffers);

    // De-interleave everything the stretcher produced into the output buffers.
    // With a rate other than 1.0 the stretcher can emit more (or fewer)
    // samples than frameCount, so this can overrun or underfill the output.
    if (outputBuffers->makeSlice(0, outputBuffers->sampleLength)) {

        int numSamples = 0;
        int samplesOffset = 0;

        while (true) {
            Float32 *timeStretchedAudio = (Float32 *)outputBuffers->nextSliceItem(&numSamples);
            if (!timeStretchedAudio) break;

            SuperpoweredDeInterleave(timeStretchedAudio, sampleDataOutLeft + samplesOffset, sampleDataOutRight + samplesOffset, numSamples);
            samplesOffset += numSamples;
        }

        outputBuffers->clear();
    }

    return noErr;
};

So, how can I build my Audio Unit render block when the input and output buffers contain different numbers of samples (reverb, delay, or time stretch)?


Solution

  • If your process produces more samples than fit in the audio callback's output buffer, you have to save the surplus samples and play them later, mixing them into the output of a subsequent audio unit callback if necessary.

    Circular buffers are often used to decouple input, processing, and output sample rates or buffer sizes; a sketch of that approach follows.
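
Here is a minimal sketch of that approach in C++. The StereoRingBuffer type, its member names, and its capacity are hypothetical, not part of Core Audio or Superpowered; it stores interleaved stereo float samples (Float32 is a typedef for float), assumes the capacity is chosen large enough that it never overflows, and outputs silence until enough samples have accumulated:

    #include <cstddef>
    #include <vector>

    // Hypothetical ring buffer that decouples the time stretcher's variable
    // output length from the fixed frameCount the host requests.
    struct StereoRingBuffer {
        std::vector<float> data;  // interleaved stereo samples
        size_t readPos = 0, writePos = 0, stored = 0;

        explicit StereoRingBuffer(size_t capacityFrames) : data(capacityFrames * 2) {}

        // Append interleaved frames produced by the stretcher.
        // Assumes the capacity was chosen so the ring never overflows.
        void push(const float *interleaved, size_t frames) {
            for (size_t i = 0; i < frames * 2; i++) {
                data[writePos] = interleaved[i];
                writePos = (writePos + 1) % data.size();
            }
            stored += frames * 2;
        }

        // Pop exactly `frames` frames into separate L/R buffers,
        // padding with silence if not enough samples are buffered yet.
        void pop(float *outLeft, float *outRight, size_t frames) {
            for (size_t i = 0; i < frames; i++) {
                if (stored >= 2) {
                    outLeft[i]  = data[readPos]; readPos = (readPos + 1) % data.size();
                    outRight[i] = data[readPos]; readPos = (readPos + 1) % data.size();
                    stored -= 2;
                } else {
                    outLeft[i] = outRight[i] = 0.0f;  // underrun: output silence
                }
            }
        }
    };

In the render block from the question, each slice returned by nextSliceItem() would be pushed into the ring instead of being de-interleaved straight into the output, and a single pop(sampleDataOutLeft, sampleDataOutRight, frameCount) at the end would fill the host's buffers, regardless of how many samples the stretcher happened to produce in that callback. Allocating the vector up front, outside the render block, keeps the audio thread free of allocations.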