I'm trying to run some deep learning experiments on video samples on Android, and I've got stuck on remuxing videos. I have a couple of questions to arrange the information in my head :) I have read some pages: https://vec.io/posts/android-hardware-decoding-with-mediacodec and https://bigflake.com/mediacodec/#ExtractMpegFramesTest, but things are still a mess.
My questions:

1. Can I read data with `MediaExtractor` and then pass it straight to `MediaMuxer` to save the video in another file? Without using `MediaCodec` or a `Surface`? Just by modifying the `ByteBuffer`s? I assume that I need to decode the data coming from `MediaExtractor`, then modify the content, then encode it again before handing it to `MediaMuxer`.
2. Is a `sample` the same as a `frame` in the context of the method `MediaExtractor::readSampleData`?

---

This is a brief description of what each class does:

- `MediaExtractor` demuxes a container file (MP4, WebM, ...) into its tracks and hands you *encoded* samples together with their presentation timestamps and flags.
- `MediaCodec` is the codec interface: configured as a decoder it turns encoded samples into raw frames, configured as an encoder it does the reverse. It never touches container files.
- `MediaMuxer` takes already-encoded samples and writes them into a container file; it does not compress anything itself.
This is how your pipeline should generally look:
MediaExtractor -> MediaCodec(As Decoder) -> Your editing -> MediaCodec(As Encoder) -> MediaMuxer
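The pipeline above can be sketched with `MediaCodec`'s synchronous `ByteBuffer` API. This is only a skeleton, not a complete implementation: format-change and end-of-stream handling are trimmed, muxer writes are omitted, and `editFrame()` is a hypothetical hook standing in for your own processing of the raw frame.

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import java.nio.ByteBuffer;

// Skeleton of the decode -> edit -> encode loop, no Surface involved.
// Assumes decoder and encoder are already configured and started for the
// selected video track; editFrame() is a hypothetical callback where you
// would run your model on the raw YUV frame.
void transcodeLoop(MediaExtractor extractor, MediaCodec decoder,
                   MediaCodec encoder, long timeoutUs) {
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false;
    while (!inputDone /* plus: until the encoder signals EOS */) {
        // 1. Feed one encoded sample from the extractor into the decoder.
        int inIdx = decoder.dequeueInputBuffer(timeoutUs);
        if (inIdx >= 0) {
            ByteBuffer inBuf = decoder.getInputBuffer(inIdx);
            int size = extractor.readSampleData(inBuf, 0);
            if (size < 0) {
                decoder.queueInputBuffer(inIdx, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
            } else {
                decoder.queueInputBuffer(inIdx, 0, size,
                        extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
        // 2. Drain one raw frame from the decoder, edit it, feed the encoder.
        int outIdx = decoder.dequeueOutputBuffer(info, timeoutUs);
        if (outIdx >= 0) {
            ByteBuffer yuv = decoder.getOutputBuffer(outIdx);
            editFrame(yuv, info);                 // hypothetical: your edit
            int encIdx = encoder.dequeueInputBuffer(timeoutUs);
            if (encIdx >= 0) {
                encoder.getInputBuffer(encIdx).put(yuv);
                encoder.queueInputBuffer(encIdx, 0, info.size,
                        info.presentationTimeUs, info.flags);
            }
            decoder.releaseOutputBuffer(outIdx, false);
        }
        // 3. Drain the encoder the same way and hand its output to
        //    MediaMuxer.writeSampleData() (omitted here).
    }
}
```

In a real app you would rather use the asynchronous `MediaCodec.Callback` API, but the synchronous form makes the data flow easier to see.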
To answer your questions:

1. If you only want to *remux* (change the container, drop a track, trim by sample) you can indeed pipe `MediaExtractor` output straight into `MediaMuxer`, because both sides deal in encoded samples. But the samples are compressed bitstream data, so the moment you want to change the actual picture content you have to decode, edit, and re-encode, exactly as in the pipeline above.
2. For a video track, yes: one sample returned by `MediaExtractor::readSampleData` corresponds to one encoded frame (an access unit). Keep in mind it is the *encoded* frame, not decoded pixels.
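The remux-only path (no `MediaCodec` at all) can be sketched like this. It is a minimal sketch, assuming the input fits MP4 output and that a 1 MiB buffer is large enough for any single sample; production code should also copy `getSampleFlags()` more carefully and handle errors.

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

// Copies every track of inPath into outPath untouched: encoded samples go
// straight from the extractor to the muxer, so nothing is re-encoded.
public static void remux(String inPath, String outPath) throws Exception {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(inPath);
    MediaMuxer muxer =
            new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

    // Map extractor track index -> muxer track index.
    int trackCount = extractor.getTrackCount();
    int[] trackMap = new int[trackCount];
    for (int i = 0; i < trackCount; i++) {
        extractor.selectTrack(i);
        trackMap[i] = muxer.addTrack(extractor.getTrackFormat(i));
    }
    muxer.start();

    ByteBuffer buffer = ByteBuffer.allocate(1 << 20); // assumed max sample size
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        info.size = extractor.readSampleData(buffer, 0);
        if (info.size < 0) break;                     // no more samples
        info.offset = 0;
        info.presentationTimeUs = extractor.getSampleTime();
        info.flags = extractor.getSampleFlags();      // sync-sample flag
        muxer.writeSampleData(trackMap[extractor.getSampleTrackIndex()],
                buffer, info);
        extractor.advance();
    }
    muxer.stop();
    muxer.release();
    extractor.release();
}
```

This works precisely because no frame content is modified; as soon as `editFrame`-style processing is needed, the full decode/encode pipeline is unavoidable.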