I've got a simple webcam which I read out using OpenCV and I'm now trying to send this video footage to a different (Python) program using ZeroMQ. So I've got the following simple script to read out the webcam and send it using a ZeroMQ socket:
import cv2
import os
import zmq
import base64

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')

# init the camera
camera = cv2.VideoCapture(0)

while True:
    try:
        (grabbed, frame) = camera.read()        # grab the current frame
        frame = cv2.resize(frame, (640, 480))   # resize the frame
        footage_socket.send_string(base64.b64encode(frame))

        # Show the video in a window
        cv2.imshow("Frame", frame)              # show the frame to our screen
        cv2.waitKey(1)                          # display it at least one ms
                                                #  before going to the next frame
    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        print "\n\nBye bye\n"
        break
This works well in that it shows the video and doesn't give any errors. I then commented out the two lines which show the image (cv2.imshow() and cv2.waitKey(1)) and started the script below in parallel. This second script should receive the video footage and show it.
import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, unicode(''))

# camera = cv2.VideoCapture("output.avi")

while True:
    try:
        frame = footage_socket.recv_string()
        frame = np.fromstring(base64.b64decode(frame), dtype=np.uint8)

        cv2.imshow("Frame", frame)              # show the frame to our screen
        cv2.waitKey(1)                          # display it at least one ms
                                                #  before going to the next frame
    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        print "\n\nBye bye\n"
        break
Unfortunately, this freezes on cv2.waitKey(1).
Does anybody know what I'm doing wrong here? Do I need to decode the footage differently? All tips are welcome!
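My best guess at the cause, in hindsight: base64-encoding the raw NumPy array only preserves its bytes, not its shape or dtype, so the receiver ends up with a flat one-dimensional array that cv2.imshow() can't display as a proper frame. A tiny demonstration of the lost shape (the zero-filled (480, 640, 3) array just stands in for a resized webcam frame):

import base64
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a resized webcam frame
payload = base64.b64encode(frame)                 # only the raw bytes survive; shape/dtype are gone
restored = np.fromstring(base64.b64decode(payload), dtype=np.uint8)
print(restored.shape)                             # (921600,) instead of (480, 640, 3)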
In the end I solved the problem by taking intermediate steps: I first wrote individual images to disk and then read those images back in. That made me realize I needed to encode the frame as an image (I opted for jpg), and with cv2.imencode('.jpg', frame) and cv2.imdecode(npimg, 1) I could make it work. I pasted the full working code below.
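For reference, the heart of the fix is just this encode/decode round trip, with the ZeroMQ plumbing stripped away (a minimal sketch; the synthetic array again stands in for a webcam frame). The JPEG container carries the image dimensions, which is exactly the information that was missing before:

import cv2
import base64
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)     # stand-in for a webcam frame
ok, buffer = cv2.imencode('.jpg', frame)            # compress to an in-memory JPEG
payload = base64.b64encode(buffer)                  # text-safe payload for send_string()
npimg = np.fromstring(base64.b64decode(payload), dtype=np.uint8)
restored = cv2.imdecode(npimg, 1)                   # JPEG bytes back to a BGR image
print(restored.shape)                               # (480, 640, 3) again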
This first script reads out the webcam and sends the footage over a ZeroMQ socket:
import cv2
import zmq
import base64

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')

camera = cv2.VideoCapture(0)                    # init the camera

while True:
    try:
        (grabbed, frame) = camera.read()        # grab the current frame
        frame = cv2.resize(frame, (640, 480))   # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        footage_socket.send_string(base64.b64encode(buffer))
    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        print "\n\nBye bye\n"
        break
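Note that this is Python 2 code (the bare print statements give that away). Under Python 3, base64.b64encode() returns bytes, which send_string() won't accept, so the sender would need a small tweak; roughly like this (a sketch I haven't run):

import cv2
import zmq
import base64

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')

camera = cv2.VideoCapture(0)                    # init the camera

while True:
    try:
        grabbed, frame = camera.read()          # grab the current frame
        frame = cv2.resize(frame, (640, 480))   # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        # b64encode() returns bytes under Python 3, so decode to str for send_string()
        footage_socket.send_string(base64.b64encode(buffer).decode('ascii'))
    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        print("\n\nBye bye\n")
        break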
and this second script receives the frame images and displays them:
import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, unicode(''))

while True:
    try:
        frame = footage_socket.recv_string()
        img = base64.b64decode(frame)
        npimg = np.fromstring(img, dtype=np.uint8)
        source = cv2.imdecode(npimg, 1)

        cv2.imshow("image", source)
        cv2.waitKey(1)
    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        print "\n\nBye bye\n"
        break
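The matching Python 3 receiver would then look roughly like this (again a sketch; np.frombuffer replaces the now-deprecated np.fromstring, and a plain '' replaces unicode('') in the subscribe option):

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, '')   # subscribe to all messages

while True:
    try:
        frame = footage_socket.recv_string()
        img = base64.b64decode(frame)
        npimg = np.frombuffer(img, dtype=np.uint8)    # JPEG bytes as a flat uint8 array
        source = cv2.imdecode(npimg, 1)               # decode back into a BGR image
        cv2.imshow("image", source)
        cv2.waitKey(1)
    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        print("\n\nBye bye\n")
        break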