Tags: ios, core-audio

AudioFileWriteBytes performance for stereo file


I'm writing a stereo wave file with AudioFileWriteBytes (CoreAudio / iOS) and the only way I can get it to work is by calling it for each sample on each channel.

The following code works:

// Prepare the format description (AudioStreamBasicDescription)
AudioStreamBasicDescription asbd = {
    .mSampleRate       = session.samplerate,
    .mFormatID         = kAudioFormatLinearPCM,
    .mFormatFlags      = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    .mChannelsPerFrame = 2,
    .mBitsPerChannel   = 16,
    .mFramesPerPacket  = 1, // Always 1 for uncompressed formats
    .mBytesPerPacket   = 4, // 16 bits for 2 channels = 4 bytes
    .mBytesPerFrame    = 4  // 16 bits for 2 channels = 4 bytes
};

// Set up the file
AudioFileID audioFile;
OSStatus audioError = noErr;
audioError = AudioFileCreateWithURL((__bridge CFURLRef)fileURL, kAudioFileAIFFType, &asbd, kAudioFileFlags_EraseFile, &audioFile);
if (audioError != noErr) {
    NSLog(@"Error creating file");
    return;
}    

// Write samples
UInt64 currentFrame = 0;
while (currentFrame < totalLengthInFrames) {
    UInt64 numberOfFramesToWrite = totalLengthInFrames - currentFrame;
    if (numberOfFramesToWrite > 2048) {
        numberOfFramesToWrite = 2048;
    }

    UInt32 sampleByteCount = sizeof(int16_t);
    UInt32 bytesToWrite = (UInt32)numberOfFramesToWrite * sampleByteCount;
    int16_t *sampleBufferLeft = (int16_t *)malloc(bytesToWrite);
    int16_t *sampleBufferRight = (int16_t *)malloc(bytesToWrite);

    // Some magic to fill the buffers

    for (int j = 0; j < numberOfFramesToWrite; j++) {
        int16_t left  = CFSwapInt16HostToBig(sampleBufferLeft[j]);
        int16_t right = CFSwapInt16HostToBig(sampleBufferRight[j]);

        audioError = AudioFileWriteBytes(audioFile, false, (currentFrame + j) * 4, &sampleByteCount, &left);
        assert(audioError == noErr);
        audioError = AudioFileWriteBytes(audioFile, false, (currentFrame + j) * 4 + 2, &sampleByteCount, &right);
        assert(audioError == noErr);
    }

    free(sampleBufferLeft);
    free(sampleBufferRight);

    currentFrame += numberOfFramesToWrite;
}

However, it is (obviously) very slow and inefficient. I can't find anything on how to use it with a big buffer so that I can write more than a single sample at a time while still writing 2 channels.

I tried making a buffer going LRLRLRLR (left / right) and then writing it with just one AudioFileWriteBytes call. I expected that to work, but it produced a file filled with noise. This is the code:

UInt64 currentFrame = 0;
UInt64 bytePos = 0;
while (currentFrame < totalLengthInFrames) {
    UInt64 numberOfFramesToWrite = totalLengthInFrames - currentFrame;
    if (numberOfFramesToWrite > 2048) {
        numberOfFramesToWrite = 2048;
    }

    UInt32 sampleByteCount = sizeof(int16_t);
    UInt32 bytesInBuffer = (UInt32)numberOfFramesToWrite * sampleByteCount;
    UInt32 bytesInOutputBuffer = (UInt32)numberOfFramesToWrite * sampleByteCount * 2;
    int16_t *sampleBufferLeft = (int16_t *)malloc(bytesInBuffer);
    int16_t *sampleBufferRight = (int16_t *)malloc(bytesInBuffer);
    int16_t *outputBuffer = (int16_t *)malloc(bytesInOutputBuffer);

    // Some magic to fill the buffers

    for (int j = 0; j < numberOfFramesToWrite; j++) {
        int16_t left  = CFSwapInt16HostToBig(sampleBufferLeft[j]);
        int16_t right = CFSwapInt16HostToBig(sampleBufferRight[j]);

        outputBuffer[(j * 2)] = left;
        outputBuffer[(j * 2) + 1] = right;
    }

    audioError = AudioFileWriteBytes(audioFile, false, bytePos, &bytesInOutputBuffer, &outputBuffer);
    assert(audioError == noErr);

    free(sampleBufferLeft);
    free(sampleBufferRight);
    free(outputBuffer);

    bytePos += bytesInOutputBuffer;
    currentFrame += numberOfFramesToWrite;
}

I also tried just writing the per-channel buffers one after the other (2048*L, 2048*R, etc.), which I did not expect to work, and it didn't.

How do I speed this up AND get a working wave file?


Solution

  • I tried making a buffer going LRLRLRLR (left / right), and then write that with just one AudioFileWriteBytes call.

    This is the correct approach if using (the rather difficult) Audio File Services.

    If possible, instead of the very low-level Audio File Services, use Extended Audio File Services. It is a wrapper around Audio File Services that has built-in format converters. Better yet, use AVAudioFile: it is a wrapper around Extended Audio File Services that covers most common use cases. A short Extended Audio File Services sketch is included after this answer.

    If you are set on using Audio File Services, you'll have to interleave the audio manually, as you attempted. In the posted attempt the likely cause of the noise is that &outputBuffer (the address of the pointer variable) is passed to AudioFileWriteBytes rather than outputBuffer itself, so the bytes written come from the wrong memory; a corrected sketch follows below.
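
For reference, here is a minimal sketch of the interleaved single-call write described above, reusing the question's audioFile, asbd and chunking logic; the sample-generating "magic" is assumed and only hinted at in comments. Note that the data pointer itself (outputBuffer) is passed to AudioFileWriteBytes, not its address.

// Write interleaved 16-bit stereo frames (L R L R ...) in 2048-frame chunks
UInt64 currentFrame = 0;
SInt64 bytePos = 0;
OSStatus audioError = noErr;

while (currentFrame < totalLengthInFrames) {
    UInt64 numberOfFramesToWrite = totalLengthInFrames - currentFrame;
    if (numberOfFramesToWrite > 2048) {
        numberOfFramesToWrite = 2048;
    }

    // 2 channels * 2 bytes per sample = 4 bytes per frame
    UInt32 bytesToWrite = (UInt32)numberOfFramesToWrite * 2 * sizeof(int16_t);
    int16_t *outputBuffer = (int16_t *)malloc(bytesToWrite);

    for (UInt64 j = 0; j < numberOfFramesToWrite; j++) {
        // Obtain the left/right samples for this frame (placeholder values here)
        int16_t left  = 0;
        int16_t right = 0;

        // Swap to big endian to match kAudioFormatFlagIsBigEndian in the ASBD
        outputBuffer[j * 2]     = CFSwapInt16HostToBig(left);
        outputBuffer[j * 2 + 1] = CFSwapInt16HostToBig(right);
    }

    // Pass outputBuffer itself, not &outputBuffer
    audioError = AudioFileWriteBytes(audioFile, false, bytePos, &bytesToWrite, outputBuffer);
    assert(audioError == noErr);

    free(outputBuffer);

    bytePos      += bytesToWrite;
    currentFrame += numberOfFramesToWrite;
}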
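
Alternatively, here is a hedged sketch of the Extended Audio File Services route mentioned above, assuming the question's fileURL and asbd, plus an interleaved outputBuffer holding numberOfFramesToWrite frames (the names extFile, clientFormat and bufferList are just this sketch's locals). Because the client data format is declared separately from the file's on-disk format, the built-in converter performs the big-endian conversion, so the samples can stay in host byte order.

#import <AudioToolbox/AudioToolbox.h> // ExtAudioFile API

// Create the file with the same big-endian 16-bit AIFF format as before
ExtAudioFileRef extFile = NULL;
OSStatus err = ExtAudioFileCreateWithURL((__bridge CFURLRef)fileURL,
                                         kAudioFileAIFFType,
                                         &asbd,
                                         NULL,
                                         kAudioFileFlags_EraseFile,
                                         &extFile);
assert(err == noErr);

// Client format: how the samples are laid out in memory when handed over.
// Native-endian interleaved 16-bit; the converter swaps to big endian on write.
AudioStreamBasicDescription clientFormat = asbd;
clientFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
err = ExtAudioFileSetProperty(extFile,
                              kExtAudioFileProperty_ClientDataFormat,
                              sizeof(clientFormat),
                              &clientFormat);
assert(err == noErr);

// One interleaved chunk (L R L R ...) per call
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mDataByteSize   = (UInt32)numberOfFramesToWrite * 4;
bufferList.mBuffers[0].mData           = outputBuffer;

err = ExtAudioFileWrite(extFile, (UInt32)numberOfFramesToWrite, &bufferList);
assert(err == noErr);

// ...repeat the write for each chunk, then close the file
ExtAudioFileDispose(extFile);

AVAudioFile works the same way one level higher: create an AVAudioFormat, fill an AVAudioPCMBuffer, and call writeFromBuffer:error: for each chunk.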