I'm using the OpenCV library in Python to read live video frames so that I can track multiple objects in each frame.
I do this with a VideoCapture object, and the code looks something like this:
import cv2

vid = cv2.VideoCapture(0)  # camera index (or another video source)

# Loop over all frames
while True:
    ok, frame = vid.read()
    if not ok:
        break
    # Quite heavy computations on the frame here
So I get that on every iteration of the while loop, the VideoCapture's read() method processes one frame. However, I was wondering what happens during the processing of this frame. My guess is that a number of frames are skipped while it runs. Is this true, or are frames added to a buffer so that eventually all of them get processed sequentially?
Even though VideoCapture has a buffer to store images, with a heavy processing step your loop will skip some frames. The VideoCaptureProperties enum defines CAP_PROP_BUFFERSIZE (its enum value happens to be 38); on backends that support it, this property controls how many frames the internal buffer holds, and the default is a small number of frames, not 38. The read() method combines grab(), which pulls the next frame from the buffer, and retrieve(), which decodes it.
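As a side note, you can query or shrink that buffer on backends that support it. This is just a minimal sketch (the property is silently ignored where it is unsupported):

import cv2

cap = cv2.VideoCapture(0)

# Ask the backend how many frames it currently buffers
# (may return 0 or -1 if the backend does not support the property)
print(cap.get(cv2.CAP_PROP_BUFFERSIZE))

# Request a 1-frame buffer so read() always returns the most recent frame
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

cap.release()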
You can test this yourself; below is a simple example that uses a time delay to simulate a heavy process.
import numpy as np
import cv2
import time

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        break

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)

    # Introduce a delay to simulate a heavy process
    time.sleep(1)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
You will see that the displayed image skips frames (rather than producing the "slow-motion" effect you would expect from a sequence played back frame by frame). Therefore, if your processing is fast enough, you can keep up with the camera's FPS.
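If the processing cannot be made fast enough, a common workaround is to run the capture in a background thread that keeps only the latest frame, so the processing loop always works on fresh data instead of stale buffered frames. The sketch below is just an illustration of that pattern (the LatestFrameReader class is hypothetical, not part of OpenCV):

import threading
import cv2

class LatestFrameReader:
    """Hypothetical helper: grabs frames in a background thread and keeps only the newest one."""

    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        while self.running:
            ok, frame = self.cap.read()
            if not ok:
                break
            with self.lock:
                self.frame = frame  # overwrite: older frames are simply dropped

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def release(self):
        self.running = False
        self.thread.join()
        self.cap.release()

# Usage sketch: the processing loop always sees the most recent camera frame
reader = LatestFrameReader(0)
try:
    while True:
        frame = reader.read()
        if frame is None:
            continue
        # ... heavy computations on `frame` would go here ...
        cv2.imshow('latest', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    reader.release()
    cv2.destroyAllWindows()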