I am trying to view the output of an Omnivision OV7251 camera in OpenCV 4.2.0 with Python 3.5.6. The camera output is 10-bit raw greyscale data, which I believe is right-aligned in 16-bit words.
When I use this OpenCV code:
import cv2

cam2 = cv2.VideoCapture(0)
cam2.set(3, 640)  # horizontal pixels
cam2.set(4, 480)  # vertical pixels

while True:
    b, frame = cam2.read()
    if b:
        cv2.imshow("Video", frame)
        k = cv2.waitKey(5)
        if k & 0xFF == 27:
            cam2.release()
            cv2.destroyAllWindows()
            break
This is the image I get:
Presumably what's happening is that OpenCV is using the wrong process to convert from 10-bit raw to RGB, treating the data as if it were in some YUV-like format.
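A rough way to check what format OpenCV thinks it is receiving is to query the capture's FOURCC code. This is only a sketch: CAP_PROP_FOURCC is a standard OpenCV property, but what it reports (if anything) depends on the capture backend.

import cv2

cam2 = cv2.VideoCapture(0)

# CAP_PROP_FOURCC returns the pixel format code as a float;
# decode it into its four-character string form.
fourcc = int(cam2.get(cv2.CAP_PROP_FOURCC))
if fourcc > 0:
    print("FOURCC:", fourcc.to_bytes(4, "little").decode("ascii", errors="replace"))
else:
    print("Backend did not report a FOURCC code")

cam2.release()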
Is there some way I can either tell OpenCV the correct conversion to use for this data, or get access to the raw camera data so that I can convert it myself?
One way to do this is to grab the raw camera data, then use numpy to correct it:
import cv2
import numpy as np

cam2 = cv2.VideoCapture(0)
cam2.set(3, 640)  # horizontal pixels
cam2.set(4, 480)  # vertical pixels
cam2.set(cv2.CAP_PROP_CONVERT_RGB, False)  # request raw camera data

while True:
    b, frame = cam2.read()
    if b:
        frame_16 = frame.view(dtype=np.uint16)  # reinterpret data as 16-bit pixels
        frame_sh = np.right_shift(frame_16, 2)  # shift away the bottom 2 bits
        frame_8 = frame_sh.astype(np.uint8)     # keep the top 8 bits
        img = frame_8.reshape(480, 640)         # arrange them into a rectangle
        cv2.imshow("Video", img)
        k = cv2.waitKey(5)
        if k & 0xFF == 27:
            cam2.release()
            cv2.destroyAllWindows()
            break
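If you would rather not discard the bottom 2 bits, a variation along the same lines (a sketch, not part of the conversion above) is to shift the 10-bit values up into the full 16-bit range and display them as uint16; cv2.imshow divides 16-bit unsigned pixels by 256 for display, so no further conversion is needed.

import cv2
import numpy as np

cam2 = cv2.VideoCapture(0)
cam2.set(3, 640)  # horizontal pixels
cam2.set(4, 480)  # vertical pixels
cam2.set(cv2.CAP_PROP_CONVERT_RGB, False)  # request raw camera data

while True:
    b, frame = cam2.read()
    if b:
        frame_16 = frame.view(dtype=np.uint16)   # reinterpret data as 16-bit pixels
        frame_full = np.left_shift(frame_16, 6)  # scale 10-bit values up to the 16-bit range
        img = frame_full.reshape(480, 640)       # arrange them into a rectangle
        cv2.imshow("Video", img)                 # imshow maps uint16 to the display by dividing by 256
        k = cv2.waitKey(5)
        if k & 0xFF == 27:
            cam2.release()
            cv2.destroyAllWindows()
            break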