Tags: python, opencv, gstreamer, nvidia-jetson-nano

How to use NVIDIA components to accelerate a USB camera (v4l2src or nvv4l2camerasrc) on Jetson Nano with Python3 OpenCV


My goal is to accelerate USB camera capture on the Jetson Nano with the Python cv2 package, using GStreamer plugins such as nvv4l2decoder or nvjpegdec together with (memory:NVMM) buffers.

Python Code:

import cv2

width = 1920
height = 1080

gs_pipeline = f"v4l2src device=/dev/video0 io-mode=2 " \
              f"! image/jpeg, width={width}, height={height}" \
              f"! nvv4l2decoder mjpeg=1 " \
              f"! nvvidconv " \
              f"! video/x-raw(memory:NVMM) format=BGR" \
              f"! videoconvert " \
              f"! video/x-raw, format=BGR " \
              f"! appsink"

v_cap = cv2.VideoCapture(gs_pipeline, cv2.CAP_GSTREAMER)
if not v_cap.isOpened():
    print("failed to open video capture")
    exit(-1)

while v_cap.isOpened():
    ret_val, frame = v_cap.read()
    if not ret_val:
        break

    cv2.imshow('', frame)

    input_key = cv2.waitKey(1)
    if input_key != -1:
        print(f"input key = {input_key}")

    if input_key == ord('q'):
        break

# release the capture and close the preview window
v_cap.release()
cv2.destroyAllWindows()

Error:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: could not parse caps "video/x-raw(memory:NVMM) format=BGR"
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

Another point to mention: some gst-launch-1.0 commands do work, but that does not mean they will work from Python as well. The ultimate goal is to control the camera from Python with cv2.

The only command-line example that worked:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1920, height=1080, framerate=30/1, format=MJPG ! nvjpegdec ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1080,framerate=30/1' ! nvegltransform ! nveglglessink
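
A quick way to check whether the installed cv2 build has GStreamer enabled at all (without it, no CAP_GSTREAMER pipeline will open, no matter how the string is written). This is only a sketch; the exact wording of the build summary line can vary between OpenCV versions:

import cv2
import re

# Print the GStreamer line from OpenCV's build summary; it should say "YES".
build_info = cv2.getBuildInformation()
match = re.search(r"GStreamer:\s*(.+)", build_info)
print("GStreamer support:", match.group(1).strip() if match else "not listed")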

Solution

  • nvvidconv doesn't support BGR, only BGRx (hence the videoconvert for the BGRx->BGR step). The caps string is also missing a comma between (memory:NVMM) and format=BGR. Finally, videoconvert only handles system memory, so have nvvidconv output into system memory rather than NVMM memory. So change the pipeline to:

    gs_pipeline = f"v4l2src device=/dev/video0 io-mode=2 " \
                  f"! image/jpeg, width={width}, height={height}, framerate=30/1, format=MJPG " \
                  f"! nvv4l2decoder mjpeg=1 " \
                  f"! nvvidconv " \
                  f"! video/x-raw, format=BGRx " \
                  f"! videoconvert " \
                  f"! video/x-raw, format=BGR " \
                  f"! appsink drop=1"