core-audio

Trying to set up an audio unit graph with a buffer of samples as the input


I am trying to implement a simple audio unit graph that goes:

buffer of samples->low pass filter->generic output

Where the generic output would be copied into a new buffer that could then be processed further, saved to disk, etc.

All of the examples I can find online for setting up an audio unit graph use a generator with kAudioUnitSubType_AudioFilePlayer as the input source. I am dealing with a buffer of samples already acquired, so those examples do not help. Based on looking around in AudioUnitProperties.h, it looks like the one I should be using is kAudioUnitSubType_ScheduledSoundPlayer?

I can't seem to find much documentation on how to hook this up, so I am quite stuck and am hoping someone here can help me out.

To simplify things, I just started out by trying to get my buffer of samples to go straight to the system output, but I am unable to make this work.

  #import "EffectMachine.h"
  #import <AudioToolbox/AudioToolbox.h>
  #import "AudioHelpers.h"
  #import "Buffer.h"

  @interface EffectMachine ()
  @property (nonatomic, strong) Buffer *buffer;
  @end

  typedef struct EffectMachineGraph {
      AUGraph   graph;
      AudioUnit input;
      AudioUnit lowpass;
      AudioUnit output;
  } EffectMachineGraph;

  @implementation EffectMachine {
      EffectMachineGraph machine;
  }

  -(instancetype)initWithBuffer:(Buffer *)buffer {
      if (self = [super init]) {
          self.buffer = buffer;

          // buffer is a simple wrapper object that holds two properties:
          // a pointer to the array of samples (as doubles) and the size (number of samples)

      }
      return self;
  }

  -(void)process {
      struct EffectMachineGraph initialized = {0};
      machine = initialized;

      CheckError(NewAUGraph(&machine.graph),
                 "NewAUGraph failed");

      AudioComponentDescription outputCD = {0};
      outputCD.componentType = kAudioUnitType_Output;
      outputCD.componentSubType = kAudioUnitSubType_DefaultOutput;
      outputCD.componentManufacturer = kAudioUnitManufacturer_Apple;

      AUNode outputNode;
      CheckError(AUGraphAddNode(machine.graph,
                                &outputCD,
                                &outputNode),
                 "AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");

      AudioComponentDescription inputCD = {0};
      inputCD.componentType = kAudioUnitType_Generator;
      inputCD.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
      inputCD.componentManufacturer = kAudioUnitManufacturer_Apple;

      AUNode inputNode;
      CheckError(AUGraphAddNode(machine.graph,
                                &inputCD,
                                &inputNode),
                 "AUGraphAddNode[kAudioUnitSubType_ScheduledSoundPlayer] failed");

      CheckError(AUGraphOpen(machine.graph),
                 "AUGraphOpen failed");

      CheckError(AUGraphNodeInfo(machine.graph,
                                 inputNode,
                                 NULL,
                                 &machine.input),
                 "AUGraphNodeInfo failed");

      CheckError(AUGraphConnectNodeInput(machine.graph,
                                         inputNode,
                                         0,
                                         outputNode,
                                         0),
                 "AUGraphConnectNodeInput failed");

      CheckError(AUGraphInitialize(machine.graph),
                 "AUGraphInitialize failed");

      // prepare input

      AudioBufferList ioData = {0};
      ioData.mNumberBuffers = 1;
      ioData.mBuffers[0].mNumberChannels = 1;
      ioData.mBuffers[0].mDataByteSize = (UInt32)(2 * self.buffer.size);
      ioData.mBuffers[0].mData = self.buffer.samples;

      ScheduledAudioSlice slice = {0};
      AudioTimeStamp timeStamp  = {0};

      slice.mTimeStamp    = timeStamp;
      slice.mNumberFrames = (UInt32)self.buffer.size;
      slice.mBufferList   = &ioData;

      CheckError(AudioUnitSetProperty(machine.input,
                                      kAudioUnitProperty_ScheduleAudioSlice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &slice,
                                      sizeof(slice)),
                 "AudioUnitSetProperty[kAudioUnitProperty_ScheduleAudioSlice] failed");

      AudioTimeStamp startTimeStamp = {0};
      startTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
      startTimeStamp.mSampleTime = -1;

      CheckError(AudioUnitSetProperty(machine.input,
                                      kAudioUnitProperty_ScheduleStartTimeStamp,
                                      kAudioUnitScope_Global,
                                      0,
                                      &startTimeStamp,
                                      sizeof(startTimeStamp)),
                 "AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");

      CheckError(AUGraphStart(machine.graph),
                 "AUGraphStart failed");

  //    AUGraphStop(machine.graph);  <-- commented out to make sure it wasn't stopping before actually finishing playing.
  //    AUGraphUninitialize(machine.graph);
  //    AUGraphClose(machine.graph);

  }

Does anyone know what I am doing wrong here?


Solution

  • I think this is the documentation you're looking for.

    To summarize: set up your AUGraph, set up your audio units and add them to the graph, then write and attach a render callback function on the first node in your graph. Run the graph. Note that the render callback is where your app will be asked to provide buffers of samples to the AUGraph. This is where you'll need to read from your own buffers and fill the buffers supplied to the render callback. I think this is what you're missing.
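To illustrate the shape of that render callback: the AudioToolbox-specific wrapper (the AURenderProc signature and attaching it via AUGraphSetNodeInputCallback) is omitted here, but the core job — copying the next chunk of your own samples (doubles, in the question's Buffer wrapper) into the float buffers the callback hands you, and zero-padding once the source runs out — can be sketched in plain C. The struct and function names below are hypothetical, not part of any API:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical state you would pass to the callback via inRefCon. */
typedef struct {
    const double *samples;   /* source buffer (as in the Buffer wrapper) */
    size_t        size;      /* total number of samples */
    size_t        position;  /* next sample to deliver */
} PlaybackState;

/* Fill `out` with up to `frames` samples, converting double -> float
 * and zero-padding (silence) past the end of the source.
 * Returns the number of real samples written. */
static size_t fill_frames(PlaybackState *state, float *out, size_t frames)
{
    size_t remaining = state->size - state->position;
    size_t n = remaining < frames ? remaining : frames;

    for (size_t i = 0; i < n; i++)
        out[i] = (float)state->samples[state->position + i];

    /* Past the end of the source: output silence. */
    memset(out + n, 0, (frames - n) * sizeof(float));

    state->position += n;
    return n;
}
```

Inside a real render callback, `out` would be `ioData->mBuffers[0].mData` and `frames` would be `inNumberFrames`; the callback is invoked repeatedly on the render thread, so this copy is all it should do.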

    If you're on iOS 8, I recommend AVAudioEngine, which helps conceal some of the grungier boilerplate details of graphs and effects.

    Extras:

    1. Complete pre-iOS 8 example code on GitHub
    2. An iOS music player app that reads audio from your MP3 library into a circular buffer and then processes it via an AUGraph (using a mixer and EQ audio unit). You can see how a render callback is set up to read from a buffer, etc.
    3. Amazing Audio Engine
    4. Novocaine Audio library
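The producer/consumer handoff the second extra describes — an app thread writing decoded audio into a circular buffer while the render callback drains it — boils down to something like the following. This is a minimal single-reader/single-writer sketch with illustrative names; a production version (e.g. the TPCircularBuffer used by many of these projects) additionally handles atomicity and memory ordering between threads:

```c
#include <stddef.h>

#define RB_CAP 1024  /* capacity in samples; one slot is kept empty */

typedef struct {
    float  data[RB_CAP];
    size_t head;   /* next write index (producer) */
    size_t tail;   /* next read index (consumer)  */
} RingBuffer;

/* Number of samples currently buffered. */
static size_t ring_count(const RingBuffer *rb) {
    return (rb->head + RB_CAP - rb->tail) % RB_CAP;
}

/* Producer: store as many of the n samples as fit; return how many. */
static size_t ring_write(RingBuffer *rb, const float *in, size_t n) {
    size_t free_slots = RB_CAP - 1 - ring_count(rb);
    if (n > free_slots) n = free_slots;
    for (size_t i = 0; i < n; i++) {
        rb->data[rb->head] = in[i];
        rb->head = (rb->head + 1) % RB_CAP;
    }
    return n;
}

/* Consumer (the render callback): read up to n samples; return how many. */
static size_t ring_read(RingBuffer *rb, float *out, size_t n) {
    size_t avail = ring_count(rb);
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++) {
        out[i] = rb->data[rb->tail];
        rb->tail = (rb->tail + 1) % RB_CAP;
    }
    return n;
}
```

The render callback reads whatever is available and fills the rest with silence, so the audio thread never blocks waiting on the producer.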