Tags: objective-c, macos, core-audio

Using ExtAudioFileWriteAsync() in a callback function: can't get it to run


I just can't seem to get very far in Core Audio. My goal is to write the audio data rendered by an instrument unit to a file. I set up a render-notify callback on the instrument unit with this:

CheckError(AudioUnitAddRenderNotify(player->instrumentUnit,
                                    MyRenderProc,
                                    &player),
           "AudioUnitAddRenderNotify Failed");

I set up the file and AudioStreamBasicDescription with this:

#define FILENAME @"output_IV.aif"

NSString *fileName = FILENAME; // [NSString stringWithFormat:FILENAME_FORMAT, hz];
NSString *filePath = [[[NSFileManager defaultManager] currentDirectoryPath] stringByAppendingPathComponent: fileName];
NSURL *fileURL = [NSURL fileURLWithPath: filePath];
NSLog (@"path: %@", fileURL);

AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));
asbd.mSampleRate = 44100.0;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mChannelsPerFrame = 2; // CHANGED FROM 1 (STEREO)
asbd.mFramesPerPacket = 1;
asbd.mBitsPerChannel = 16;
asbd.mBytesPerFrame = 4;
asbd.mBytesPerPacket = 4;

CheckError(ExtAudioFileCreateWithURL((__bridge CFURLRef)fileURL,
                                     kAudioFileAIFFType,
                                     &asbd,
                                     NULL,
                                     kAudioFileFlags_EraseFile,
                                     &testRecordFile),
           "ExtAudioFileCreateWithURL failed");
CheckError(ExtAudioFileSetProperty(testRecordFile,
                                   kExtAudioFileProperty_ClientDataFormat,
                                   (UInt32)sizeof(asbd),
                                   &asbd),
           "ExtAudioFileSetProperty failed");
CheckError(ExtAudioFileWriteAsync(testRecordFile, 0, NULL),
           "ExtAudioFileWriteAsync 1st time failed");

I verified that the file does get created. testRecordFile is defined globally (it's the only way I could get things to run at the moment):

ExtAudioFileRef testRecordFile;

My callback function is:

OSStatus MyRenderProc(void *inRefCon,
                      AudioUnitRenderActionFlags *ioActionFlags,
                      const AudioTimeStamp *inTimeStamp,
                      UInt32 inBusNumber,
                      UInt32 inNumberFrames,
                      AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        static int TEMP_kAudioUnitRenderAction_PostRenderError = (1 << 8);
        if (!(*ioActionFlags & TEMP_kAudioUnitRenderAction_PostRenderError)) {
            CheckError(ExtAudioFileWriteAsync(testRecordFile, inNumberFrames, ioData),
                       "ExtAudioFileWriteAsync failed");
        }
    }
    return noErr;
}

When I run this, the program beachballs and drops into the debugger (lldb) on the ExtAudioFileWriteAsync call. inNumberFrames is 512, and I have verified that I am getting stereo channels of Float32 audio data in ioData.

What am I missing here?


Solution

  • First, your code is still somewhat complicated and touches some of the "dark corners" of Core Audio and Obj-C. It is a safer bet to first make sure that everything works as intended in plain C, on the real-time thread. Once you have debugged that part of the code, you can easily add as much Obj-C elegance as needed.

    Ignoring possible endianness and file-format conversion issues for simplicity, there is still one issue you have to resolve, either automatically, using API utilities, or "manually":

    AFAIK, the data format for ExtAudioFileWriteAsync() must be interleaved, while the stream format of your AUGraph is not. Assuming we don't deal with endianness and format conversion here, this is how you can fix it manually (a single-channel example). Since your asbd stream format is non-interleaved stereo, you interleave the data in your own buffer like this: LRLRLRLRLR...

    OSStatus MyRenderProc(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
    {
        AudioBufferList bufferList;
        Float32 samples[inNumberFrames + inNumberFrames]; // room for L and R

        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mData = samples;
        bufferList.mBuffers[0].mNumberChannels = 2; // interleaved stereo in one buffer
        bufferList.mBuffers[0].mDataByteSize = (inNumberFrames + inNumberFrames) * sizeof(Float32);
        Float32 *data = (Float32 *)ioData->mBuffers[0].mData;

        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            static int TEMP_kAudioUnitRenderAction_PostRenderError = (1 << 8);
            if (!(*ioActionFlags & TEMP_kAudioUnitRenderAction_PostRenderError)) {
                for (UInt32 i = 0; i < inNumberFrames; i++)
                    samples[i + i] = samples[i + i + 1] = data[i]; // copy buffer[0] to L & R
                ExtAudioFileWriteAsync(testRecordFile, inNumberFrames, &bufferList);
            }
        }
        return noErr;
    }
    

    This is just one example to show how it works. By studying asbd.mFormatFlags and setting the proper client format in:

    ExtAudioFileSetProperty(testRecordFile,
                            kExtAudioFileProperty_ClientDataFormat,
                            (UInt32)sizeof(asbd),
                            &asbd);

    you can achieve the same more elegantly, but that exceeds the scope of this post by far.
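    For completeness, here is a sketch of that more elegant route (my assumption of how it would look, reusing player->instrumentUnit, testRecordFile, and CheckError from the question; it is not code from the original post): ask the instrument unit for the stream format it actually renders and declare that as the ExtAudioFile client format, so ExtAudioFile performs the interleaving and float-to-int conversion for you.

```c
// Sketch: declare the client format to be whatever the unit renders.
// Assumes player->instrumentUnit and testRecordFile from the question.
AudioStreamBasicDescription clientFormat;
UInt32 propSize = sizeof(clientFormat);

// Ask the instrument unit what it actually outputs
// (typically non-interleaved Float32 on macOS).
CheckError(AudioUnitGetProperty(player->instrumentUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                0,
                                &clientFormat,
                                &propSize),
           "AudioUnitGetProperty StreamFormat failed");

// Hand the same description to ExtAudioFile; it will convert to the
// 16-bit big-endian interleaved file format set at creation time.
CheckError(ExtAudioFileSetProperty(testRecordFile,
                                   kExtAudioFileProperty_ClientDataFormat,
                                   propSize,
                                   &clientFormat),
           "ExtAudioFileSetProperty ClientDataFormat failed");
```

    With the client format matching the unit's output, the render-notify callback can pass ioData straight to ExtAudioFileWriteAsync() without manual interleaving.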