Tags: image, ffmpeg, jpeg, sequence, video-processing

Why are image sequences larger (in size) than the source videos?


When I'm using a command like this in ffmpeg (or any other program):

ffmpeg -i input.mp4 image%d.jpg

The combined file size of all the images always ends up larger than the video itself. I've tried reducing the frame rate, lowering the JPEG quality, adding blur, and everything else I can find, but the JPEGs combined are always larger afterwards.
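
For example, I've tried lowering the frame rate and the JPEG quality along these lines (the exact values varied; these are just illustrative):

ffmpeg -i input.mp4 -vf fps=5 -q:v 10 image%d.jpg

Here -vf fps=5 keeps 5 frames per second and -q:v 10 lowers the JPEG quality (the scale runs 2-31, where higher means smaller, lower-quality files).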

I'm trying to understand why this happens, and what can be done to bring the images closer to the size of the video. Are there other compression formats I can use besides JPEG, or any settings or tricks I'm overlooking?

Is it even possible?


Solution

  • To simplify: when a video is encoded, only certain frames (keyframes) are stored as complete images, much like your JPEGs.

    The rest are stored as differences from nearby frames, which for most scenes takes far less space than a full image.
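
    You can see this for yourself with ffprobe (assuming a build that supports -show_entries), which labels each frame as I (a keyframe), P, or B:

    ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv input.mp4

    Because each JPEG must stand alone, it can't share data between frames the way the video does, so the sequence will almost always come out larger. The closest you can get is exporting only the keyframes; note that -skip_frame nokey is a decoder option, so it goes before -i:

    ffmpeg -skip_frame nokey -i input.mp4 -vsync vfr keyframe%d.jpg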