Tags: android, audio, video, android-mediacodec, mediamuxer

Audio and video track synchronization issue when using MediaCodec and MediaMuxer for MP4 files


I would like to produce an MP4 file by multiplexing audio from the microphone (by overriding didGetAudioData) and video from the camera (by overriding onPreviewFrame). However, I ran into an audio/video synchronization problem: the video plays ahead of the audio. I suspect the problem is related to incompatible configurations or to presentationTimeUs. Could someone guide me on how to fix it? Below is my setup.

Video configuration

formatVideo = MediaFormat.createVideoFormat(MIME_TYPE_VIDEO, 640, 360);
formatVideo.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
formatVideo.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
formatVideo.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
formatVideo.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);

The video presentationTimeUs is generated as follows:

if (generateIndex == 0) {
    videoAbsolutePtsUs = 132;
    StartVideoAbsolutePtsUs = System.nanoTime() / 1000L;
} else {
    CurrentVideoAbsolutePtsUs = System.nanoTime() / 1000L;
    videoAbsolutePtsUs = 132 + CurrentVideoAbsolutePtsUs - StartVideoAbsolutePtsUs;
}
generateIndex++;

Audio configuration

format = MediaFormat.createAudioFormat(MIME_TYPE, 48000 /* sample rate */, AudioFormat.CHANNEL_IN_MONO /* channel config */);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 48000);
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);

The audio presentationTimeUs is generated as follows:

if (generateIndex == 0) {
    audioAbsolutePtsUs = 132;
    StartAudioAbsolutePtsUs = System.nanoTime() / 1000L;
} else {
    CurrentAudioAbsolutePtsUs = System.nanoTime() / 1000L;
    audioAbsolutePtsUs = CurrentAudioAbsolutePtsUs - StartAudioAbsolutePtsUs;
}

generateIndex++;
audioAbsolutePtsUs = getJitterFreePTS(audioAbsolutePtsUs, audioInputLength / 2);

long startPTS = 0;
long totalSamplesNum = 0;

private long getJitterFreePTS(long bufferPts, long bufferSamplesNum) {
    long correctedPts = 0;
    long bufferDuration = (1000000 * bufferSamplesNum) / 48000;
    bufferPts -= bufferDuration; // accounts for the delay of acquiring the audio buffer
    if (totalSamplesNum == 0) {
        // reset
        startPTS = bufferPts;
        totalSamplesNum = 0;
    }
    correctedPts = startPTS + (1000000 * totalSamplesNum) / 48000;
    if (bufferPts - correctedPts >= 2 * bufferDuration) {
        // reset
        startPTS = bufferPts;
        totalSamplesNum = 0;
        correctedPts = startPTS;
    }
    totalSamplesNum += bufferSamplesNum;
    return correctedPts;
}

Is my issue caused by applying the jitter-correction function to audio only? If so, how could I apply it to video as well? I also tried to derive correct audio and video presentation timestamps from https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java, but EncodeDecodeTest only provides a video PTS; that is why my implementation uses System.nanoTime() for both audio and video. If I want to use the EncodeDecodeTest-style video PTS, how do I construct a compatible audio PTS? Thanks for the help!
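(For reference, the frame-index-based video PTS in EncodeDecodeTest looks roughly like the first helper below, and the sample-count-based audio analogue I have in mind is the second one; the helper names and the 30 fps / 48 kHz mono 16-bit PCM assumptions are mine, not from that test.)

// Sketch only: computeVideoPtsUs mirrors computePresentationTime() in EncodeDecodeTest;
// computeAudioPtsUs is my own analogue and is not part of that test.
private static final int FRAME_RATE = 30;      // assumed video frame rate
private static final int SAMPLE_RATE = 48000;  // assumed audio sample rate

private long videoFrameIndex = 0;
private long audioSamplesQueued = 0;

private long computeVideoPtsUs() {
    // fixed 132 us offset as in EncodeDecodeTest, then one frame period per frame
    return 132 + videoFrameIndex++ * 1000000L / FRAME_RATE;
}

private long computeAudioPtsUs(int pcmByteCount) {
    // 16-bit mono PCM: 2 bytes per sample; the PTS advances by the samples already queued
    long ptsUs = 132 + audioSamplesQueued * 1000000L / SAMPLE_RATE;
    audioSamplesQueued += pcmByteCount / 2;
    return ptsUs;
}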

For reference, below is how I queue YUV frames into the video MediaCodec. The audio path is identical except for the presentationTimeUs computation.

int videoInputBufferIndex;
int videoInputLength;
long videoAbsolutePtsUs;
long StartVideoAbsolutePtsUs, CurrentVideoAbsolutePtsUs;

int put_v = 0;
int get_v = 0;
int generateIndex = 0;

public void setByteBufferVideo(byte[] buffer, boolean isUsingFrontCamera, boolean Input_endOfStream) {
    if (Build.VERSION.SDK_INT >= 18) {
        try {
            endOfStream = Input_endOfStream;
            if (!Input_endOfStream) {
                ByteBuffer[] inputBuffers = mVideoCodec.getInputBuffers();
                videoInputBufferIndex = mVideoCodec.dequeueInputBuffer(-1);

                if (VERBOSE) {
                    Log.w(TAG, "[put_v]:" + (put_v) + "; videoInputBufferIndex = " + videoInputBufferIndex + "; endOfStream = " + endOfStream);
                }

                if (videoInputBufferIndex >= 0) {
                    ByteBuffer inputBuffer = inputBuffers[videoInputBufferIndex];
                    inputBuffer.clear();

                    inputBuffer.put(mNV21Convertor.convert(buffer));
                    videoInputLength = buffer.length;

                    if (generateIndex == 0) {
                        videoAbsolutePtsUs = 132;
                        StartVideoAbsolutePtsUs = System.nanoTime() / 1000L;
                    } else {
                        CurrentVideoAbsolutePtsUs = System.nanoTime() / 1000L;
                        videoAbsolutePtsUs = 132 + CurrentVideoAbsolutePtsUs - StartVideoAbsolutePtsUs;
                    }

                    generateIndex++;

                    if (VERBOSE) {
                        Log.w(TAG, "[put_v]:" + (put_v) + "; videoAbsolutePtsUs = " + videoAbsolutePtsUs + "; CurrentVideoAbsolutePtsUs = " + CurrentVideoAbsolutePtsUs);
                    }

                    if (videoInputLength == AudioRecord.ERROR_INVALID_OPERATION) {
                        Log.w(TAG, "[put_v]ERROR_INVALID_OPERATION");
                    } else if (videoInputLength == AudioRecord.ERROR_BAD_VALUE) {
                        Log.w(TAG, "[put_v]ERROR_BAD_VALUE");
                    }
                    if (endOfStream) {
                        Log.w(TAG, "[put_v]:" + (put_v++) + "; [get] receive endOfStream");
                        mVideoCodec.queueInputBuffer(videoInputBufferIndex, 0, videoInputLength, videoAbsolutePtsUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    } else {
                        Log.w(TAG, "[put_v]:" + (put_v++) + "; receive videoInputLength :" + videoInputLength);
                        mVideoCodec.queueInputBuffer(videoInputBufferIndex, 0, videoInputLength, videoAbsolutePtsUs, 0);
                    }
                }
            }
        } catch (Exception x) {
            x.printStackTrace();
        }
    }
}

Solution

  • How I solved this in my application was by stamping the PTS of all video and audio frames against a shared "sync clock" (sync here also implies it is thread-safe) that starts when the first video frame (which gets a PTS of 0 on its own) becomes available. So if audio recording starts sooner than video, the audio data is discarded (never sent to the encoder) until video starts, and if it starts later, the first audio PTS is simply relative to the start of the whole video. A sketch of such a clock follows below.

    Of course you are free to allow audio to start first, but players will usually skip or wait for the first video frame anyway. Also be careful: encoded audio frames will arrive "out of order", and MediaMuxer will fail with an error sooner or later. My solution was to queue them: sort them by PTS as new ones come in, then write everything older than 500 ms (relative to the newest one) to MediaMuxer, but only frames whose PTS is higher than the last written one. Ideally this means data is written smoothly to MediaMuxer with a 500 ms delay; worst case, you lose a few audio frames. A rough sketch of this queue also follows below.
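A minimal sketch of such a shared sync clock, assuming one encoder thread per track; the class and method names here are illustrative, not from the Android SDK:

public final class SyncClock {
    private volatile long startTimeNs = -1;

    // Called when the first video frame is queued; that frame defines PTS 0.
    public synchronized void startIfNeeded() {
        if (startTimeNs < 0) {
            startTimeNs = System.nanoTime();
        }
    }

    public boolean isStarted() {
        return startTimeNs >= 0;
    }

    // PTS in microseconds relative to the first video frame.
    public long nowUs() {
        return (System.nanoTime() - startTimeNs) / 1000L;
    }
}

// Usage (illustrative):
// video path:  syncClock.startIfNeeded(); long ptsUs = syncClock.nowUs();
// audio path:  if (!syncClock.isStarted()) return;  // drop audio captured before video
//              long ptsUs = syncClock.nowUs();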
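And a rough sketch of the delayed, ordered write to MediaMuxer described above; PendingFrame, mMuxer, and the 500 ms window are illustrative, and each frame's data must be a copy of the encoded output, since the codec's output buffer is released long before the frame is written:

// Uses java.util.{ArrayList, Collections, Iterator, List} and android.media.{MediaCodec, MediaMuxer}.
private static final long WRITE_DELAY_US = 500_000; // 500 ms reordering window

private static class PendingFrame {
    int trackIndex;
    ByteBuffer data;               // copy of the encoded output buffer
    MediaCodec.BufferInfo info;
}

private final List<PendingFrame> pending = new ArrayList<>();
private long lastWrittenPtsUs = -1;

private synchronized void enqueue(PendingFrame frame) {
    pending.add(frame);
    // keep the queue sorted by presentation time
    Collections.sort(pending, (a, b) ->
            Long.compare(a.info.presentationTimeUs, b.info.presentationTimeUs));

    long newestPtsUs = pending.get(pending.size() - 1).info.presentationTimeUs;
    Iterator<PendingFrame> it = pending.iterator();
    while (it.hasNext()) {
        PendingFrame f = it.next();
        if (newestPtsUs - f.info.presentationTimeUs < WRITE_DELAY_US) {
            break; // everything within the 500 ms window stays queued
        }
        // only write frames that move the timeline forward; older stragglers are dropped
        if (f.info.presentationTimeUs > lastWrittenPtsUs) {
            mMuxer.writeSampleData(f.trackIndex, f.data, f.info);
            lastWrittenPtsUs = f.info.presentationTimeUs;
        }
        it.remove();
    }
}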