I'm currently using the FFmpeg.AutoGen project (in C#, of course) to decode frames from an audio file and add them to a new stream being written to a video. This all works correctly, but I was wondering how one would go about mixing two AVFrame* objects together after they are decoded.
I've mixed PCM data before but was wondering if FFmpeg had a built in API to do the work more effectively.
This is how I'm currently doing it:
short* baseFrameBuffer1 = (short*)baseFrame->data_0;
short* baseFrameBuffer2 = (short*)baseFrame->data_1;
short* readFrameBuffer1 = (short*)readFrame->data_0;
short* readFrameBuffer2 = (short*)readFrame->data_1;
for (int frameIndex = 0; frameIndex < 1024; frameIndex++)
{
    int dataSample1 = GetInRangeSample(baseFrameBuffer1[frameIndex] + readFrameBuffer1[frameIndex]);
    int dataSample2 = GetInRangeSample(baseFrameBuffer2[frameIndex] + readFrameBuffer2[frameIndex]);
    // Write back through the short* views; writing through the raw byte*
    // data pointers would store only the low byte, at the wrong offset.
    baseFrameBuffer1[frameIndex] = (short)dataSample1;
    baseFrameBuffer2[frameIndex] = (short)dataSample2;
}
private static int GetInRangeSample(int sample)
{
    if (sample > short.MaxValue)
    {
        sample = short.MaxValue;
    }
    if (sample < short.MinValue)
    {
        sample = short.MinValue;
    }
    return sample;
}
You can use the amix filter in libavfilter: build a filter graph with two abuffer sources feeding amix, and pull the mixed frames from an abuffersink.
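A rough sketch of that graph with FFmpeg.AutoGen might look like the following. This is an illustration under assumptions, not a drop-in solution: the format string must match your decoded frames (s16p stereo at 44100 Hz is assumed here), `inFrame1`, `inFrame2`, and `mixedFrame` are placeholder AVFrame* variables, and all error-code checking is omitted.

```csharp
// Assumes an unsafe context and `using FFmpeg.AutoGen;`.
AVFilterGraph* graph = ffmpeg.avfilter_graph_alloc();
AVFilterContext* src1, src2, mix, sink;

// Two abuffer sources describing the decoded input frames
// (format parameters are assumptions; match them to your stream).
string args = "sample_rate=44100:sample_fmt=s16p:channel_layout=stereo:time_base=1/44100";
ffmpeg.avfilter_graph_create_filter(&src1, ffmpeg.avfilter_get_by_name("abuffer"), "in0", args, null, graph);
ffmpeg.avfilter_graph_create_filter(&src2, ffmpeg.avfilter_get_by_name("abuffer"), "in1", args, null, graph);

// amix sums its inputs; abuffersink hands the mixed frames back.
ffmpeg.avfilter_graph_create_filter(&mix, ffmpeg.avfilter_get_by_name("amix"), "mix", "inputs=2", null, graph);
ffmpeg.avfilter_graph_create_filter(&sink, ffmpeg.avfilter_get_by_name("abuffersink"), "out", null, null, graph);

ffmpeg.avfilter_link(src1, 0, mix, 0);
ffmpeg.avfilter_link(src2, 0, mix, 1);
ffmpeg.avfilter_link(mix, 0, sink, 0);
ffmpeg.avfilter_graph_config(graph, null);

// Per pair of decoded frames (inFrame1/inFrame2/mixedFrame are placeholders):
ffmpeg.av_buffersrc_add_frame(src1, inFrame1);
ffmpeg.av_buffersrc_add_frame(src2, inFrame2);
ffmpeg.av_buffersink_get_frame(sink, mixedFrame);
```

Compared to the manual clamped sum above, amix also handles resampling of mismatched durations and per-input weighting (see its `duration` and `weights` options), so it scales better than hand-written PCM mixing.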