I'm going to make a video from a series of screenshots (.png files). Each screenshot has an associated timestamp recording when it was taken. The time intervals between screenshots vary, and it's highly desirable to preserve those differences in the video.
Is there a way to do this with a single ffmpeg command/API call, providing it a sequence of image + time (or frame) offsets and getting one video file as output?
For now I have to generate a short video of custom length for each image, and then merge them manually:
ffmpeg -y -loop 1 -i image1.png -c:v libx264 -t 1.52 video1.avi
ffmpeg -y -loop 1 -i image2.png -c:v libx264 -t 2.28 video2.avi
...
ffmpeg -y -loop 1 -i imageN.png -c:v libx264 -t 1.04 videoN.avi
ffmpeg -i "concat:video1.avi|video2.avi|...videoN.avi" -c copy output.avi
This works reasonably well while the intervals are large, but the whole approach seems a bit fragile to me.
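For reference, the durations fed to those per-image commands are just the differences between consecutive capture timestamps; only the last image needs a duration chosen by hand, since it has no successor. A minimal sketch (the timestamp values below are hypothetical):

```python
# Hypothetical capture timestamps in seconds, one per screenshot.
timestamps = [0.00, 1.52, 3.80, 4.84]

# Each image is displayed until the next screenshot was taken.
durations = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

# The last image has no successor; pick a duration for it (assumption).
durations.append(1.0)

print([round(d, 2) for d in durations])  # → [1.52, 2.28, 1.04, 1.0]
```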
Use the concat demuxer.
Example using durations of 1.52, 2.28, and 1.04 seconds.
Make a text file indicating the desired duration per file:
file 'image1.png'
duration 1.52
file 'image2.png'
duration 2.28
file 'image3.png'
duration 1.04
You may have to repeat the last file and duration lines to get it to display the last frame.
Run the ffmpeg command:
ffmpeg -f concat -i input.txt -c:v libx264 -pix_fmt yuv420p -movflags +faststart output.mp4