Tags: video, ffmpeg, command-line, webcam, ffplay

Problem with ffplay from webcam stream using complex filters


I'm trying to stream video from a webcam (at /dev/video2) through ffplay to scale and recolor it, add some text, and then reduce the number of colors with a palette. I don't get any errors, but when I run this ffplay command:

ffplay -i /dev/video2 -vf "hflip,\
  colorbalance=\
    rs=0.4:\
    bs=-0.4\
  ,\
  scale=\
    trunc(iw/8):\
    trunc(ih/8)\
  ,\
  drawtext=\
    text=\
      'efelbar':\
      fontcolor=white:\
      fontsize=10:\
      box=1:\
      boxcolor=black:\
      boxborderw=5:\
      x=(w-text_w)/2:\
      y=(h-text_h)/2\
  ,\
  split[s0][s1];\
  [s0]palettegen=\
    max_colors=16\
  [p];\
  [s1][p]paletteuse"

it seems to stall and never produces any video output.

Running the simpler command ffplay -i /dev/video2 -vf "split[s0][s1];[s0]palettegen=max_colors=16[p];[s1][p]paletteuse", which takes the webcam stream and should only reduce the number of colors, behaves the same way: it just sits there without ever showing the output stream. This might simply be a performance issue, since I'm on older hardware, but the output doesn't reflect that.

The output of that command is as follows:

ffplay version n5.0 Copyright (c) 2003-2022 the FFmpeg developers
  built with gcc 11.2.0 (GCC)
  configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librav1e --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-nvdec --enable-nvenc --enable-shared --enable-version3
  libavutil      57. 17.100 / 57. 17.100
  libavcodec     59. 18.100 / 59. 18.100
  libavformat    59. 16.100 / 59. 16.100
  libavdevice    59.  4.100 / 59.  4.100
  libavfilter     8. 24.100 /  8. 24.100
  libswscale      6.  4.100 /  6.  4.100
  libswresample   4.  3.100 /  4.  3.100
  libpostproc    56.  3.100 / 56.  3.100
Input #0, video4linux2,v4l2, from '/dev/video2':
  Duration: N/A, start: 254970.739108, bitrate: 147456 kb/s
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn

I'm running this on a ThinkPad T420s, so I wouldn't be surprised if my laptop simply can't process the video quickly enough. If that is the case, suggestions for optimizations would be great!


Solution

  • palettegen, by default, computes a global palette for the entire stream. For a live input, that can only happen at the end of the input. So paletteuse won't be fed any frames till then.

    You have to tell palettegen to generate a palette per frame.

    palettegen=max_colors=16:stats_mode=single
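
    As a minimal sketch, applied to the simpler test command from the question (assuming the same webcam at /dev/video2), the per-frame variant would look something like:

    ffplay -i /dev/video2 -vf "split[s0][s1];[s0]palettegen=max_colors=16:stats_mode=single[p];[s1][p]paletteuse=new=1"

    paletteuse=new=1 tells paletteuse to pick up each freshly generated palette instead of sticking with the first one it receives; it is the usual companion to stats_mode=single. The same two changes should slot into the longer filtergraph unchanged.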