I've looked at lots of previous questions related to this, and none have helped.
My setup: a Raspberry Pi with two USB cameras, which show up as /dev/video0 and /dev/video1.
For either one of the cameras I can capture images and display them at a pretty decent rate with minimal latency (and occasional artifacts).
When I try to use both, however, I get maybe a tenth of the frame rate (and the delay between frames varies wildly from frame to frame), with all sorts of nasty image artifacts (see below for an example) and an intolerable amount of lag.
The problem does not seem to be the cameras themselves or USB bandwidth on the device: when I connect the cameras to my Windows PC, I can capture and display at 30 FPS with no visual artifacts and very little lag.
As best I can tell, the problem must be the Pi hardware, the drivers, or OpenCV, and I don't think it's the Pi hardware. I would be happy to get, with two cameras, half the frame rate I get with one camera (I don't see why that shouldn't be possible) and no ugly artifacts.
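If it does come down to per-camera USB bandwidth, I assume the way to reduce it would be to request a smaller frame size (and MJPG compression) before capturing; this is untested on my setup, and as far as I know the CAP_PROP_* calls are only requests the driver is free to ignore:

import cv2

cap0 = cv2.VideoCapture(0)
# ask the driver for a smaller frame and a compressed transfer format;
# it may silently keep its defaults
cap0.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap0.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cap0.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))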
Does anyone have any suggestions? I'm ultimately just trying to stream the video from the two cameras on my Pi to my desktop. If there are suggestions that don't involve OpenCV, I'm all ears; I'm not trying to do any rendering or manipulation of the images on the Pi, but OpenCV is the only thing I've found that captures images even reasonably quickly (with one camera, of course).
Just for reference, the simple Python script I'm using is this:
import cv2
import numpy as np
import socket
import ctypes
import struct

cap = []
cap.append(cv2.VideoCapture(0))
cap.append(cv2.VideoCapture(1))

# grab a single frame from one camera
def grab(num):
    res, im = cap[num].read()
    return (res, im)

# grab a frame from each camera and stitch them
# side by side
def grabSBS():
    res, imLeft = grab(1)
    # next line is for pretending I have 2 cameras
    # imRight = imLeft.copy()
    res, imRight = grab(0)
    imSBS = np.concatenate((imLeft, imRight), axis=1)
    return res, imSBS

### For displaying locally instead of streaming
# while(False):
#     res, imLeft = grab(0)
#     imRight = imLeft.copy()
#     imSBS = np.concatenate((imLeft, imRight), axis=1)
#     cv2.imshow("win", imSBS)
#     cv2.waitKey(20)

header_data = ctypes.create_string_buffer(12)
while(True):
    sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sck.bind(("10.0.0.XXX", 12321))
    sck.listen(1)
    while(True):
        (client, address) = sck.accept()
        print "Client connected:", address
        try:
            while(True):
                res, im = grabSBS()
                if(res):
                    success, coded = cv2.imencode('.jpg', im)
                    if(success):
                        height, width, channels = im.shape
                        size = len(coded)
                        struct.pack_into(">i", header_data, 0, width)
                        struct.pack_into(">i", header_data, 4, height)
                        struct.pack_into(">i", header_data, 8, size)
                        client.sendall(header_data.raw)
                        client.sendall(coded.tobytes())
        except Exception as ex:
            print "ERROR:", ex
            client.close()
sck.close()
exit()
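The desktop side isn't shown above, but given the 12-byte header (three big-endian ints: width, height, JPEG size) followed by the JPEG bytes, a minimal receiver would look something like the sketch below; the placeholder address matches the server's, and none of this is tested on my setup:

import socket
import struct
import numpy as np
import cv2

def recv_exact(sock, n):
    # recv() can return fewer bytes than requested, so loop until full
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise RuntimeError("socket closed")
        buf += chunk
    return buf

sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect(("10.0.0.XXX", 12321))  # same placeholder address as the server
while True:
    width, height, size = struct.unpack(">iii", recv_exact(sck, 12))
    jpg = np.frombuffer(recv_exact(sck, size), dtype=np.uint8)
    im = cv2.imdecode(jpg, cv2.IMREAD_COLOR)
    cv2.imshow("stream", im)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break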
UPDATE: I got it working much, much better by adding the following lines of code after initializing the VideoCapture objects:
cap[0].set(cv2.CAP_PROP_FPS, 15)
cap[1].set(cv2.CAP_PROP_FPS, 15)
This lowers both the required bandwidth and the OpenCV workload. I still get those horrible artifacts every few frames, so if anyone has advice on that, I'm happy to hear it.
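As far as I know, CAP_PROP_FPS is only a request that the driver is free to ignore, so a quick sanity check is to read the value back and time actual reads (a sketch, nothing Pi-specific assumed):

import time

# what the driver claims after the set(); some backends report 0 here
print(cap[0].get(cv2.CAP_PROP_FPS))

# timing real reads is more trustworthy than get()
t0 = time.time()
for _ in range(30):
    cap[0].read()
print(30.0 / (time.time() - t0))  # measured frames per second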
Well, after spending about 5 hours fighting with it, I seem to have found solutions.
First, apparently OpenCV was trying to capture at 30 FPS even though I wasn't able to pull frames at 30 FPS. I changed the VideoCapture frame rate to 15 FPS and the video became much, much smoother and faster.
cap[0].set(cv2.CAP_PROP_FPS, 15.0)
cap[1].set(cv2.CAP_PROP_FPS, 15.0)
That didn't get rid of the artifacts, though. I eventually found that calling del(im) after sending the image over the network makes the artifacts go away completely.
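So the inner send loop ended up looking like this (abbreviated; my guess, and it is only a guess, is that the buffer backing im was being reused by the capture before the encode and send finished, and dropping the reference forces a fresh one):

while(True):
    res, im = grabSBS()
    if(res):
        success, coded = cv2.imencode('.jpg', im)
        if(success):
            # ...pack and send the 12-byte header as before...
            client.sendall(header_data.raw)
            client.sendall(coded.tobytes())
    del(im)  # explicitly dropping the frame reference is what cured the artifacts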