
Reading audio with Extended Audio File Services (ExtAudioFileRead)


I am working on understanding Core Audio, or rather Extended Audio File Services.

I want to use ExtAudioFileRead() to read some audio data from a file. This works fine as long as I use one single huge buffer to store my audio data (that is, one AudioBuffer). As soon as I use more than one AudioBuffer, ExtAudioFileRead() returns the error code -50 ("error in parameter list"). As far as I can tell, this means that one of the arguments of ExtAudioFileRead() is wrong — probably the audioBufferList.

I cannot use one huge buffer because dataByteSize would then overflow its UInt32 range for large files.

Here is the code to create the audioBufferList:

// Allocate a variable-length AudioBufferList: the struct declares one
// AudioBuffer, and the additional ones live in the same allocation.
AudioBufferList *audioBufferList;
audioBufferList = malloc(sizeof(AudioBufferList) + (numBuffers - 1) * sizeof(AudioBuffer));
audioBufferList->mNumberBuffers = numBuffers;
for (int bufferIdx = 0; bufferIdx < numBuffers; bufferIdx++) {
    audioBufferList->mBuffers[bufferIdx].mNumberChannels = numChannels;
    audioBufferList->mBuffers[bufferIdx].mDataByteSize = dataByteSize;
    audioBufferList->mBuffers[bufferIdx].mData = malloc(dataByteSize);
}

And here is the working, but overflowing code:

UInt32 dataByteSize = fileLengthInFrames * bytesPerFrame; // this will overflow
AudioBufferList *audioBufferList = malloc(sizeof(AudioBufferList)); // sizeof the struct, not the pointer
audioBufferList->mNumberBuffers = 1;
audioBufferList->mBuffers[0].mNumberChannels = numChannels;
audioBufferList->mBuffers[0].mDataByteSize = dataByteSize;
audioBufferList->mBuffers[0].mData = malloc(dataByteSize);

And finally, the call to ExtAudioFileRead() (it should work with both versions):

UInt32 numFrames = fileLengthInFrames; // in: frames the buffers can hold; out: frames actually read
error = ExtAudioFileRead(extAudioFileRef,
                         &numFrames,
                         audioBufferList);

Do you know what I am doing wrong here?


Solution

  • I think you're misunderstanding the purpose of the mNumberBuffers field. It is typically 1, both for mono and for interleaved stereo data. The only reason to set it to anything else is non-interleaved data, where each channel lives in its own separate buffer.

    If you want to read only part of a file at a time, set the buffer's dataByteSize to a reasonable fixed size, ask the API for only that many frames on each call, and loop until no frames are returned.