I'm implementing an object detection algorithm, using OpenCV to read a live stream from my webcam, and I have a generic question about how the frames are read/stored in memory during the process.
The overall structure of the code is something like:
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Load the current frame from the camera
    ret, frame = cap.read()

    # Run the detector on that frame
    class_IDs, scores, bounding_boxes = neural_network(frame)

    # Display the frame
    cv2.imshow('window', frame)
    [...]
So, basically, the code is continuously performing this loop: read the current frame from the camera, run the detector on it, and display the result.
The "next frame" analyzed, however, is not the frame sequentially following the one that has just been processed, but it's the frame currently read from the camera live stream.
On the other hand, when reading from a video file, ALL the frames are read sequentially, so there is an increasing lag between the output of my program and the "normal" flow of the video.
How could I reproduce the camera behaviour when reading a file? In other words, when reading a file I want to analyze a frame, then skip the frames that were "played" while the analysis was running and continue from the most recent one, instead of processing every frame in order.
I'm asking because I'll have to run the object detector on a virtual machine that reads the video stream coming from a remote webcam, and I'm afraid that if I just send the stream to the virtual machine it will be treated as a video file and analyzed sequentially, whereas I want to keep the lag between the object detection and the live stream as small as possible.
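For context, on the virtual machine I would open the remote stream with something like the snippet below (the URL is only a placeholder for whatever endpoint the camera actually exposes), and I don't know whether frames obtained this way get buffered and handed out sequentially, like a file, or not:

import cv2

# Hypothetical stream address; the real camera exposes some RTSP/HTTP endpoint
cap = cv2.VideoCapture("rtsp://<camera-address>/stream")
ret, frame = cap.read()   # same read() call as with the local webcam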
Great question. As far as I know, when you run cap.read(), OpenCV captures whatever frame is in the buffer at that instant and then carries on executing the program, so a new frame is only pulled from the camera each time cap.read() is actually executed, i.e. once per pass through your pipeline. If you want the frames to be processed in step with real time when reading a file (i.e. to reproduce the webcam behaviour), you should try skipping the frames that accumulate while the detector is busy, instead of reading them one by one.
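Below is a minimal sketch of that idea (assuming neural_network is the detector from your question and the file path is just a placeholder): it times each detection and uses the file's reported FPS to grab() past the frames that would have "played" while the detector was busy, so the next frame you actually process is roughly the current one.

import time
import cv2

cap = cv2.VideoCapture("video.mp4")        # placeholder path to the video file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if the file reports 0

while True:
    ret, frame = cap.read()
    if not ret:
        break

    start = time.time()
    class_IDs, scores, bounding_boxes = neural_network(frame)  # your detector
    cv2.imshow('window', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    elapsed = time.time() - start

    # Skip the frames that "played" while the detector was busy, so the
    # next frame we read is roughly the one a live camera would give us.
    for _ in range(int(elapsed * fps)):
        cap.grab()   # grab() just advances the stream without decoding a frame

cap.release()
cv2.destroyAllWindows()

If the file (or network stream) reports an unreliable FPS, another common option is a small reader thread that keeps overwriting a single "latest frame" variable while the main loop only ever processes that latest frame; that reproduces the webcam behaviour for any source.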