I am working on a project that requires transferring OpenCV frames as JSON, where the frames are then used in a CascadeClassifier on the receiving end. The CascadeClassifier raises the following error:
Kyles-MBP:facial_detection kyle$ python3 badcv.py
Traceback (most recent call last):
File "badcv.py", line 30, in <module>
doesnt()
File "badcv.py", line 26, in doesnt
faces = cascade.detectMultiScale(imgnew)
cv2.error: OpenCV(3.4.2) /Users/travis/build/skvark/opencv-python/opencv/modules/objdetect/src/cascadedetect.cpp:1376: error: (-215:Assertion failed) scaleFactor > 1 && _image.depth() == 0 in function 'detectMultiScale'
Kyles-MBP:facial_detection kyle$
I have distilled my code to isolate the error below. Clearly I am doing something wrong, but I do not have enough experience with OpenCV to know what. I did some searching, and an image depth of 0 corresponds to CV_8U; however, I have no idea how to set such a depth (from my searching I concluded that it shouldn't matter, since cv2 natively represents images as ndarray, but this could be a false assumption). Furthermore, I cannot identify any difference between the pre- and post-jsonified ndarray; by my estimate, on all measures save for physical location in memory, the two arrays are identical. I have included interpreter outputs below from investigating the data structures.
What am I doing wrong, and how do I avoid encountering this particular error? Thanks!
Code:
# badcv.py
import cv2
import json
import numpy as np
import os
import sys

cascade_file_path = os.path.dirname(
    os.path.realpath(__file__)) + '/default.xml'

def works():
    # Detection on the array exactly as cv2.imread returns it
    img = cv2.imread(sys.argv[1])
    imgnew = img
    rows, cols = imgnew.shape[:2]
    cascade = cv2.CascadeClassifier(cascade_file_path)
    faces = cascade.detectMultiScale(imgnew)

def doesnt():
    # Detection on the same image after a JSON round-trip
    img = cv2.imread(sys.argv[1])
    data = {'file': json.dumps(img.tolist())}
    imgnew = np.array(json.loads(data['file']))
    if not (img is imgnew):
        print("Not the same object")
    rows, cols = imgnew.shape[:2]
    cascade = cv2.CascadeClassifier(cascade_file_path)
    faces = cascade.detectMultiScale(imgnew)

if __name__ == "__main__":
    works()
    doesnt()
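The script is invoked with an image path as its first argument, for example (the file name is a stand-in):

Kyles-MBP:facial_detection kyle$ python3 badcv.py test.png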
As an addendum: the default.xml file is the Haar XML classifier that ships with OpenCV, and I am using a simple 10px × 20px test image; the script fails on images of all sizes, across both jpg and png.
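For reference, the stock classifiers also ship inside the opencv-python wheel itself; a sketch of loading one that way (cv2.data.haarcascades is provided by the wheel, and the frontal-face file name here is an assumption about which XML is in use):

import cv2

# The wheel exposes the directory holding the stock Haar cascades;
# haarcascade_frontalface_default.xml is the usual frontal-face classifier.
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
cascade = cv2.CascadeClassifier(cascade_path)
assert not cascade.empty(), "classifier failed to load"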
Equality:
I also checked equality between the arrays. Element-wise equality holds:
>>> if (img == imgnew).all(): print("element-wise equality")
element-wise equality
However, they are not the same object (which makes sense, because json.loads builds a new object rather than returning the one already in memory):
>>> if not (img is imgnew): print("not the same object")
not the same object
The types of both img and imgnew are ndarray, with the same shape:
>>> if type(img) is type(imgnew): print("same type")
same type
>>> type(img)
<class 'numpy.ndarray'>
>>> if img.shape == imgnew.shape: print("same shape")
same shape
I did some searching, and an image depth 0 corresponds to CV_8U, however I have no idea how to set such a depth
You were on the right track there. This is the bit depth of the image: the data type of each pixel. img will be loaded with a dtype of np.uint8, an unsigned 8-bit integer, which is the same as CV_8U.
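You can confirm this in the interpreter (test.png stands in for any readable image file):

>>> img = cv2.imread('test.png')
>>> img.dtype
dtype('uint8')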
When you pass the array through JSON, the pixel values become Python integers, and the numpy array created from them gets an np.int64 dtype.
Thus the issue:
>>> img.dtype == imgnew.dtype
False
Can be rectified with:
# Create an array of 8-bit unsigned integers (CV_8U)
imgnew_u8 = imgnew.astype(np.uint8)
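Putting it together, a minimal sketch of the corrected round-trip (test.png and the classifier path stand in for your actual files; passing dtype directly to np.array is an equivalent one-step alternative):

import cv2
import json
import numpy as np

img = cv2.imread('test.png')

# JSON round-trip as in the question
payload = json.dumps(img.tolist())

# Restore the original 8-bit depth; either form works:
imgnew = np.array(json.loads(payload)).astype(np.uint8)
# or, in one step:
imgnew = np.array(json.loads(payload), dtype=np.uint8)

cascade = cv2.CascadeClassifier('default.xml')
faces = cascade.detectMultiScale(imgnew)  # no longer trips the assertion

The conversion is lossless here: the values were 8-bit to begin with, so astype(np.uint8) only changes how numpy labels them. If the payload's provenance is less certain, sending the dtype and shape alongside the pixel data avoids relying on inference.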