I have a use case in which I am simulating an IP camera using Python and OpenCV.
I am playing a video with OpenCV and sending the bytes of each frame to an application that streams it on port 8080.
The problem is that as soon as the video finishes, I have nothing left to send to the application that is streaming this fake simulated camera on port 8080, so the application treats it as a timeout and stops working.
My question is: how can I send some fake bytes, let's say a black screen or some noise, just to keep alive the application that is listening to my fake simulated camera on port 8080?
Edit 1: Adding code
app.py
from camera import VideoCamera
from flask import Flask, render_template, Response
import time

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        try:
            frame = camera.get_frame()
        except Exception:
            print("Video is finished or empty")
            # return None
            frame = camera.get_heartbeat()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(debug=True)
camera.py
import cv2

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0. If you have trouble capturing
        # from a webcam, comment the line below out and use a video file
        # instead.
        # self.video = cv2.VideoCapture(0)
        # If you decide to use video.mp4, you must have this file in the same
        # folder as main.py.
        # self.video = cv2.VideoCapture('suits_hd.mp4')
        self.video = cv2.VideoCapture('nature.mp4')

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capturing raw images,
        # so we must encode them into JPEG in order to correctly display the
        # video stream.
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()

    def get_heartbeat(self):
        # jpeg = cv2.imread('noise-black.jpg')
        image = cv2.imread('noise-green.jpg')
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
So, when you init the VideoCamera, get the width and height of the video frames in the file and remember them. Then, if self.video.read() fails, just use numpy to create a random array the same size as a video frame, imencode() it, and send that.
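For example, remembering the frame size in __init__ might look like this (a minimal sketch; self.width and self.height are names added here for illustration, read via OpenCV's CAP_PROP_FRAME_WIDTH/CAP_PROP_FRAME_HEIGHT properties):

import cv2

class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture('nature.mp4')
        # Remember the frame size so a fallback frame can be made to match it
        self.width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH))
        self.height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT))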
Make a random green-ish frame with:
import numpy as np

# Make the Green channel out of intensities in the range 200-255
G = np.random.randint(200, 256, (320, 240, 1), dtype=np.uint8)
# Make the Red and Blue channels out of intensities in the range 0-49
X = np.random.randint(0, 50, (320, 240, 1), dtype=np.uint8)
# Merge into a 3-channel image, using BGR order with OpenCV - although it
# won't matter here because G is in the middle either way!
image = np.concatenate((X, G, X), axis=2)
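Putting the two together, get_frame() could fall back to such a noise frame whenever the read fails, so the generator in app.py never runs out of frames (a sketch, assuming self.width and self.height were stored in __init__ as above):

import cv2
import numpy as np

class VideoCamera(object):
    # __init__ as above, storing self.video, self.width and self.height

    def get_frame(self):
        success, image = self.video.read()
        if not success:
            # Video finished: substitute a green-ish noise frame of the same size
            G = np.random.randint(200, 256, (self.height, self.width, 1), dtype=np.uint8)
            X = np.random.randint(0, 50, (self.height, self.width, 1), dtype=np.uint8)
            image = np.concatenate((X, G, X), axis=2)
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()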