I'm working on a project where I record video from the camera, but the audio comes from a stream, so the audio frames are obviously not synchronised with the video frames. If I use AVAssetWriter without video and just record the audio frames from the stream, it works fine. But as soon as I append both video and audio frames, I can't hear anything.
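For context, the writer has both a video and an audio input. A minimal sketch of that setup (outputURL, videoSettings, audioSettings and firstVideoPTS are placeholders for my real values) looks roughly like this:

NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];

AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                    outputSettings:videoSettings];
videoInput.expectsMediaDataInRealTime = YES;

AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                    outputSettings:audioSettings];
audioInput.expectsMediaDataInRealTime = YES;

[writer addInput:videoInput];
[writer addInput:audioInput];

[writer startWriting];
// The session source time is time zero for both tracks, so every appended
// buffer's presentation time is interpreted relative to it.
[writer startSessionAtSourceTime:firstVideoPTS];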
Here is the method that converts the audio data from the stream into a CMSampleBuffer:
AudioStreamBasicDescription monoStreamFormat = [self getAudioDescription];

// Build a format description from the ASBD of the streamed audio.
CMFormatDescriptionRef format = NULL;
OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
if (status != noErr) {
    // really shouldn't happen
    return nil;
}

// Each sample lasts 1/44100 s; the presentation time starts at zero.
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };

CMSampleBufferRef sampleBuffer = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
if (status != noErr) {
    // couldn't create the sample buffer
    NSLog(@"Failed to create sample buffer");
    CFRelease(format);
    return nil;
}

// add the samples (an AudioBufferList built from the streamed data) to the buffer
status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                        kCFAllocatorDefault,
                                                        kCFAllocatorDefault,
                                                        0,
                                                        samples);
if (status != noErr) {
    NSLog(@"Failed to add samples to sample buffer");
    CFRelease(sampleBuffer);
    CFRelease(format);
    return nil;
}
I don't know if this is related to the timing, but I would like to append the audio frames starting from the first second of the video. Is that possible?
Thanks
Finally, I did this:
mach_timebase_info_data_t info;
mach_timebase_info(&info);

// Factor that converts mach host-time ticks to nanoseconds.
double _hostTimeToNSFactor = (double)info.numer / (double)info.denom;
uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);

CMTime presentationTime = self.initialiseTime; // CMTimeMake(timeNS, 1000000000);
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), presentationTime, kCMTimeInvalid };
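For reference, self.initialiseTime is the presentation time of the first camera frame, captured roughly like this in the video capture callback (a sketch; writer, videoInput and sessionStarted are stand-ins for whatever your project uses):

// Sketch: capture the shared time origin from the first camera frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    if (!self.sessionStarted) {
        self.initialiseTime = pts;                    // time zero shared with the audio
        [self.writer startSessionAtSourceTime:pts];
        self.sessionStarted = YES;
    }

    if (self.videoInput.readyForMoreMediaData) {
        [self.videoInput appendSampleBuffer:sampleBuffer];
    }
}

With both tracks stamped relative to the same origin, the audio written into the file lines up with the video from the first frame onwards.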