I am trying to draw bounding boxes on an image using onnxruntime and OpenCV to detect objects with the YOLOv2 neural network. Instead, I get an error at runtime.
I converted the input image into a compatible tensor / numpy array to feed into the model.
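Roughly, the preprocessing looks like this (the file names and the input name "image" are placeholders for my actual ones; I am using the Tiny YOLOv2 model, which expects a 416x416, channels-first input):

import cv2
import numpy as np
import onnxruntime as ort

# Load the image and resize it to the network's input size (416x416 for Tiny YOLOv2).
img = cv2.imread("test.jpg")
resized = cv2.resize(img, (416, 416))

# HWC uint8 -> CHW float32, plus a leading batch dimension.
blob = resized.transpose(2, 0, 1).astype(np.float32)
blob = np.expand_dims(blob, axis=0)  # shape (1, 3, 416, 416)

# Run the model; sess.run returns a list, so take the first output.
sess = ort.InferenceSession("tinyyolov2.onnx")
pred_onnx = sess.run(None, {"image": blob})[0]

Once I knew that part ran without errors, I added the following code to draw the bounding boxes: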
while True:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for x, y, w, h in pred_onnx:
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
        roiGray = gray[y:y+h, x:x+w]
        roiColor = img[y:y+h, x:x+w]
    cv2.imshow("Detect", cv2.resize(img, (500, 500)))
    cv2.waitKey(0)
I was expecting the image to show (green) bounding boxes. Instead, I get this error:
File "C:\Users\MyName\Desktop\OnnxCV\onnxcv\object_detector.py", line 27, in <module>
for x, y, w, h in pred_onnx:
ValueError: not enough values to unpack (expected 4, got 1)
The full code is here if it helps.
The pred_onnx array is not in the shape your loop expects; there is some more postprocessing to do. See here for details about the output format.
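Assuming you are using the Tiny YOLOv2 model from the ONNX model zoo, the raw output is a single (1, 125, 13, 13) array: a 13x13 grid of cells, each holding 5 anchor boxes of 25 values (4 box coordinates, an objectness score, and 20 class scores). You can confirm the shape with:

print(pred_onnx.shape)  # e.g. (1, 125, 13, 13) for Tiny YOLOv2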
For example, using the 30% threshold suggested by the linked post, you would loop through and filter the bounding boxes like so:
for r in range(13):
    for c in range(13):
        # Objectness (confidence) value for the first anchor box at this grid cell.
        confidence = pred_onnx[0, 4, r, c]
        if confidence < 0.3:
            continue
        x = pred_onnx[0, 0, r, c]
        y = pred_onnx[0, 1, r, c]
        w = pred_onnx[0, 2, r, c]
        h = pred_onnx[0, 3, r, c]
        ...
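Beyond that, the raw values still have to be decoded before they are pixel coordinates. Here is a rough sketch of how that could look for all 5 anchor boxes, assuming the Tiny YOLOv2 (VOC) model and its standard anchor sizes; treat it as a starting point rather than a drop-in solution:

import cv2
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Standard Tiny YOLOv2 (VOC) anchor box sizes, in grid-cell units.
anchors = [(1.08, 1.19), (3.42, 4.41), (6.63, 11.38), (9.42, 5.11), (16.62, 10.52)]

grid = pred_onnx[0]   # shape (125, 13, 13)
cell = 416 / 13       # one grid cell is 32 pixels in the 416x416 input

boxes = []
for r in range(13):
    for c in range(13):
        for b, (aw, ah) in enumerate(anchors):
            k = b * 25  # channel offset for this anchor box
            confidence = sigmoid(grid[k + 4, r, c])
            if confidence < 0.3:
                continue
            # Decode to pixel coordinates in the 416x416 network input.
            cx = (c + sigmoid(grid[k + 0, r, c])) * cell
            cy = (r + sigmoid(grid[k + 1, r, c])) * cell
            w = np.exp(grid[k + 2, r, c]) * aw * cell
            h = np.exp(grid[k + 3, r, c]) * ah * cell
            boxes.append((int(cx - w / 2), int(cy - h / 2), int(w), int(h)))

# Draw on a copy of the image resized to the network input size
# (rescale the coordinates instead if you want to draw on the original).
disp = cv2.resize(img, (416, 416))
for x, y, w, h in boxes:
    cv2.rectangle(disp, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("Detect", disp)
cv2.waitKey(0)

You would normally also run non-maximum suppression and pick the best class per box, but the above should be enough to get green rectangles on the image.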