Tags: python, objective-c, core-audio, extaudiofile

Get correct FileLengthFrames with CoreAudio


I'm working on converting my Python code to Objective-C to run on iOS devices. The code reads an audio file. In Python I'm using AudioSegment to read the file, and the result is two separate channels, each in its own array.

For example:

Left channel  [-1,-2,-3,-4,-5,-6,-7,-8,-9,-10]  //length = 10
Right channel [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   //length = 10

So the total length from Python is 20.

Here is how I get the audio output in Objective-C:

float *audioTotal = malloc(fileLengthInFrames * sizeof(float));
SInt16 *inputFrames = (SInt16*)bufferList->mBuffers[0].mData;
for(int i = 0; i < fileLengthInFrames; ++i) {
    audioTotal[i] = (float)inputFrames[i];
    printf("%f ", audioTotal[i]);
}
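
For context, here is a rough sketch of the kind of ExtAudioFile setup that typically fills a buffer like this; the URL, sample rate, and exact format flags are placeholder assumptions for illustration, not necessarily my actual configuration:

    #import <AudioToolbox/AudioToolbox.h>   // ExtAudioFile API

    ExtAudioFileRef extFile = NULL;
    ExtAudioFileOpenURL((__bridge CFURLRef)fileURL, &extFile);   // fileURL: placeholder NSURL

    // Length of the file in sample frames (at the file's native sample rate).
    SInt64 fileLengthInFrames = 0;
    UInt32 propSize = sizeof(fileLengthInFrames);
    ExtAudioFileGetProperty(extFile, kExtAudioFileProperty_FileLengthFrames,
                            &propSize, &fileLengthInFrames);

    // Ask for interleaved 16-bit stereo PCM: each frame holds one left and one right sample.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;   // assumed to match the file's rate
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBitsPerChannel   = 16;
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerFrame    = clientFormat.mChannelsPerFrame * sizeof(SInt16);
    clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;
    ExtAudioFileSetProperty(extFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);

    // With an interleaved format a single buffer carries all channels,
    // so it needs room for frames * channels samples.
    // (The snippet above uses a bufferList pointer; a stack-allocated struct is enough here.)
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
    bufferList.mBuffers[0].mDataByteSize   = (UInt32)(fileLengthInFrames * clientFormat.mBytesPerFrame);
    bufferList.mBuffers[0].mData           = malloc(bufferList.mBuffers[0].mDataByteSize);

    UInt32 framesToRead = (UInt32)fileLengthInFrames;
    ExtAudioFileRead(extFile, &framesToRead, &bufferList);   // on return, framesToRead = frames actually read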

And the output is:

[-1, 1, -2, 2, -3, 3, -4, 4, -5, 5] // length = 10

So the output from Objective-C interleaves the left and right channels, and I have to separate them in code:

if (clientFormat.mChannelsPerFrame > 1) {
    int indexLeft = 0;
    int indexRight = 0;
    float *leftAudio = malloc(fileLengthInFrames * sizeof(float));
    float *rightAudio = malloc(fileLengthInFrames * sizeof(float));
    for(int i = 0; i < fileLengthInFrames; i++) {
        if (i % 2 == 0) {
            leftAudio[indexLeft] = audioTotal[i];
            printf("%f ", leftAudio[indexLeft]);
            indexLeft++;
        } else {
            rightAudio[indexRight] = audioTotal[i];
            printf("%f ", rightAudio[indexRight]);
            indexRight++;
        }
    }
}

And now I have two separated channels from Objective-C:

Left channel  [-1,-2,-3,-4,-5]  //length = 5
Right channel [ 1, 2, 3, 4, 5]   //length = 5

So the total length I get from Objective-C is 10, compared with 20 in Python. Where is the rest of my data? Did I miss a step, or is something misconfigured? Thanks for the help.


Solution

  • When you have interleaved samples and you "separate them by code", you're forgetting to multiply by channelsPerBuffer: the interleaved buffer holds frames * channels samples, so for stereo you're reading only half of them. Try changing the for loop to

    for(int i = 0; i < fileLengthInFrames*channelsPerBuffer; i++) {
        // display left and right samples here ...
    }
    

    The length of audioTotal should also be fileLengthInFrames*channelsPerBuffer.
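
    Putting it together, a rough sketch of the corrected read-out (assuming channelsPerBuffer is taken from clientFormat.mChannelsPerFrame and equals 2 here; the other names reuse those from the question):

    UInt32 channelsPerBuffer = clientFormat.mChannelsPerFrame;   // 2 for stereo

    // The interleaved buffer holds frames * channels samples, not just frames.
    float *audioTotal = malloc(fileLengthInFrames * channelsPerBuffer * sizeof(float));
    SInt16 *inputFrames = (SInt16 *)bufferList->mBuffers[0].mData;
    for (long i = 0; i < fileLengthInFrames * channelsPerBuffer; i++) {
        audioTotal[i] = (float)inputFrames[i];
    }

    // Each channel still has fileLengthInFrames samples after de-interleaving.
    float *leftAudio  = malloc(fileLengthInFrames * sizeof(float));
    float *rightAudio = malloc(fileLengthInFrames * sizeof(float));
    for (long frame = 0; frame < fileLengthInFrames; frame++) {
        leftAudio[frame]  = audioTotal[frame * channelsPerBuffer];      // L0, L1, L2, ...
        rightAudio[frame] = audioTotal[frame * channelsPerBuffer + 1];  // R0, R1, R2, ...
    }

    With that change leftAudio and rightAudio each hold fileLengthInFrames samples, i.e. twice fileLengthInFrames in total, which matches the 20 values AudioSegment reports in the example.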

    P.S. Why recalculate fileLengthInFrames if the client and file sample rates are the same?
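
    (If they did differ, the file's frame count would need scaling by the ratio of the two rates; a minimal sketch, assuming extFile is the open ExtAudioFileRef:)

    // Native format of the file, via kExtAudioFileProperty_FileDataFormat.
    AudioStreamBasicDescription fileFormat = {0};
    UInt32 size = sizeof(fileFormat);
    ExtAudioFileGetProperty(extFile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

    // Frame count at the client rate scales with the ratio of the sample rates;
    // when the two rates match, this is just fileLengthInFrames again.
    SInt64 clientLengthInFrames =
        (SInt64)((double)fileLengthInFrames * clientFormat.mSampleRate / fileFormat.mSampleRate + 0.5);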