I have an AudioInputIOProc that I'm getting an AudioBufferList from. I need to convert this AudioBufferList to a CMSampleBufferRef.

Here's the code I've written so far:
- (void)handleAudioSamples:(const AudioBufferList*)samples numSamples:(UInt32)numSamples hostTime:(UInt64)hostTime {
    // Create a CMSampleBufferRef from the list of samples, which we'll own
    AudioStreamBasicDescription monoStreamFormat;
    memset(&monoStreamFormat, 0, sizeof(monoStreamFormat));
    monoStreamFormat.mSampleRate = 44100;
    monoStreamFormat.mFormatID = kAudioFormatMPEG4AAC;
    monoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    monoStreamFormat.mBytesPerPacket = 4;
    monoStreamFormat.mFramesPerPacket = 1;
    monoStreamFormat.mBytesPerFrame = 4;
    monoStreamFormat.mChannelsPerFrame = 2;
    monoStreamFormat.mBitsPerChannel = 16;

    CMFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
    if (status != noErr) {
        // really shouldn't happen
        return;
    }

    mach_timebase_info_data_t tinfo;
    mach_timebase_info(&tinfo);

    double hostTimeToNSFactor = (double)tinfo.numer / (double)tinfo.denom;
    uint64_t timeNS = (uint64_t)(hostTime * hostTimeToNSFactor);
    CMTime presentationTime = CMTimeMake(timeNS, 1000000000);
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };

    CMSampleBufferRef sampleBuffer = NULL;
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
    if (status != noErr) {
        // couldn't create the sample buffer
        NSLog(@"Failed to create sample buffer");
        CFRelease(format);
        return;
    }

    // add the samples to the buffer
    status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0,
                                                            samples);
    if (status != noErr) {
        NSLog(@"Failed to add samples to sample buffer");
        NSLog(@"Error status code: %d", (int)status);
        CFRelease(sampleBuffer);
        CFRelease(format);
        return;
    }

    [self addAudioFrame:sampleBuffer];

    NSLog(@"Original sample buf size: %zu for %u samples from %u buffers, first buffer has size %u",
          CMSampleBufferGetTotalSampleSize(sampleBuffer), numSamples, samples->mNumberBuffers, samples->mBuffers[0].mDataByteSize);
    NSLog(@"Original sample buf has %ld samples", CMSampleBufferGetNumSamples(sampleBuffer));
}
Now, I'm unsure how to calculate numSamples given this function definition of an AudioInputIOProc:

OSStatus AudioTee::InputIOProc(AudioDeviceID inDevice,
                               const AudioTimeStamp *inNow,
                               const AudioBufferList *inInputData,
                               const AudioTimeStamp *inInputTime,
                               AudioBufferList *outOutputData,
                               const AudioTimeStamp *inOutputTime,
                               void *inClientData)

This definition exists in the AudioTee.cpp file in WavTap.
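My current thinking is that, for LPCM, the frame count should fall straight out of the buffer sizes; something like this (an untested sketch, assuming non-interleaved buffers as in my format flags above, where each AudioBuffer holds one channel):

// Hypothetical helper: derive the frame count from the first buffer.
// For non-interleaved LPCM, mDataByteSize / mBytesPerFrame (one channel's
// bytes per frame) gives the number of frames in this callback.
static UInt32 FrameCountForBufferList(const AudioBufferList *abl,
                                      const AudioStreamBasicDescription *asbd) {
    if (abl->mNumberBuffers == 0 || asbd->mBytesPerFrame == 0) return 0;
    return abl->mBuffers[0].mDataByteSize / asbd->mBytesPerFrame;
}

Is that the right way to derive it from inInputData?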
The error I'm getting is a CMSampleBufferError_RequiredParameterMissing error with the error code -12731 when I try to call CMSampleBufferSetDataBufferFromAudioBufferList.
Update:

To clarify the problem a bit, the following is the format of the audio data I'm getting from the AudioDeviceIOProc:

Channels: 2, Sample Rate: 44100, Precision: 32-bit, Sample Encoding: 32-bit Signed Integer PCM, Endian Type: little, Reverse Nibbles: no, Reverse Bits: no
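If I'm reading that right, the ASBD matching the device format would look roughly like this (my assumption; I haven't confirmed whether the buffers arrive interleaved or non-interleaved):

// Sketch of an ASBD matching the device format above (assumes interleaved).
AudioStreamBasicDescription deviceFormat = {0};
deviceFormat.mSampleRate       = 44100;
deviceFormat.mFormatID         = kAudioFormatLinearPCM;
deviceFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
deviceFormat.mBitsPerChannel   = 32;
deviceFormat.mChannelsPerFrame = 2;
deviceFormat.mBytesPerFrame    = 8;   // 2 channels x 4 bytes, interleaved
deviceFormat.mFramesPerPacket  = 1;
deviceFormat.mBytesPerPacket   = 8;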
I'm getting an AudioBufferList* that has all the audio data (covering the 30 seconds of video) that I need to convert to CMSampleBufferRefs and add to a video (that is 30 seconds long) being written to disk via an AVAssetWriterInput.
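For context, addAudioFrame: boils down to appending each buffer to the writer input, roughly like this (a simplified sketch; audioWriterInput is a placeholder name for my AVAssetWriterInput property):

- (void)addAudioFrame:(CMSampleBufferRef)sampleBuffer {
    // Simplified: append when the input can accept more data.
    // The caller still owns and releases sampleBuffer.
    if (self.audioWriterInput.isReadyForMoreMediaData) {
        [self.audioWriterInput appendSampleBuffer:sampleBuffer];
    }
}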
Three things look wrong (a consolidated sketch follows this list):

You declare that the format ID is kAudioFormatMPEG4AAC, but configure it as LPCM. So try:

monoStreamFormat.mFormatID = kAudioFormatLinearPCM;

You also call the format "mono" when it's configured as stereo.
Why use mach_timebase_info, which could leave gaps in your audio presentation timestamps? Use the sample count instead:

CMTime presentationTime = CMTimeMake(numSamplesProcessed, 44100);
Your CMSampleTimingInfo looks wrong, and you're not using presentationTime. You set the buffer's duration as 1 sample long when it can be numSamples, and its presentation time to zero, which can't be right. Something like this would make more sense:

CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };
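Pulling those three fixes together, the setup would look something like this (an untested sketch; mBitsPerChannel is 32 to match the device format from your update, and numSamplesProcessed is a running frame counter you'd maintain yourself):

// Fix 1: describe the data as LPCM, matching what the device delivers.
AudioStreamBasicDescription streamFormat = {0};
streamFormat.mSampleRate       = 44100;
streamFormat.mFormatID         = kAudioFormatLinearPCM;
streamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
streamFormat.mBitsPerChannel   = 32;
streamFormat.mChannelsPerFrame = 2;
streamFormat.mBytesPerFrame    = streamFormat.mChannelsPerFrame * (streamFormat.mBitsPerChannel / 8);
streamFormat.mFramesPerPacket  = 1;
streamFormat.mBytesPerPacket   = streamFormat.mBytesPerFrame;

// Fix 2: derive the PTS from a running sample count, not host time.
CMTime presentationTime = CMTimeMake(numSamplesProcessed, 44100);

// Fix 3: cover the whole buffer and actually use presentationTime.
CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };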
And some questions:

Does your AudioBufferList have the expected 2 AudioBuffers?

Do you have a runnable version of this?
p.s. I'm guilty of it myself, but allocating memory on the audio thread is considered harmful in audio dev.
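For instance, CMAudioFormatDescriptionCreate and mach_timebase_info both run on every callback in your handler; you could create the format description once, off the audio thread, and reuse it from the IOProc, along these lines (sketch only):

// Created once during setup, then reused by every callback.
static CMAudioFormatDescriptionRef sFormatDesc = NULL;

static void SetUpFormatDescription(const AudioStreamBasicDescription *asbd) {
    CMAudioFormatDescriptionCreate(kCFAllocatorDefault, asbd,
                                   0, NULL, 0, NULL, NULL, &sFormatDesc);
}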