Here is the code generating the error.
This is the structure of my model's base. I am importing MobileNetV2 but leaving out the top layers.
baseModel = MobileNetV2(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)))
Then I create a face detector with OpenCV. The face detector captures the face, and the model predicts whether the face is wearing a face mask or not.
The loop below generates the error because I'm not resizing the captured frame to the model's expected input size.
for (x, y, w, h) in faces:
    face_img = grayscale_img[y:y+w, x:x+w]
    resized_img = cv2.resize(face_img, (56, 56))
    normalized_img = resized_img / 255.0
    reshaped_img = np.reshape(normalized_img, (224, 224, 3))
    result = model.predict(reshaped_img)
The error generated is below
ValueError: cannot reshape array of size 3136 into shape (224,224,3)
What's the right way to reshape this image? Thank you.
That error suggests the array is smaller than you expect: 3136 = 56 × 56, so it's a single-channel 56×56 image, and regardless of its shape there aren't enough pixels to make up a 224x224x3 image (which needs 150,528 values).
If the face image is grayscale then you won't be able to reshape it to 3 channels. You'll have to either copy it across the 3 channels:
grayscale_img = grayscale_img[y:y+w, x:x+w]
face_img = np.stack([grayscale_img, grayscale_img, grayscale_img], axis=-1)  # three channels
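To see what the stacking does, here's a minimal NumPy-only sketch (the 56×56 array stands in for the grayscale face crop):

```python
import numpy as np

# hypothetical grayscale face crop; any HxW works
gray = np.arange(56 * 56, dtype=np.uint8).reshape(56, 56)

# replicate the single channel three times along a new last axis
face_img = np.stack([gray, gray, gray], axis=-1)
print(face_img.shape)  # (56, 56, 3)
```

Each channel is an identical copy of the original grayscale data, so the image still looks gray but now has the 3-channel layout the model expects.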
or change the input of your model to a single channel, possibly (I am not certain) by using the following when you create the model:
input_tensor=Input(shape=(224, 224, 1))
You'd then resize the image width and height using either OpenCV or NumPy:
resized_img = cv2.resize(face_img, (224, 224))  # opencv
MobileNetV2 expects inputs normalised to (-1, 1), therefore:
normalized_img = resized_img / 127.5 - 1  # same scaling as keras' mobilenet_v2.preprocess_input
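Putting the pieces together, a NumPy-only sketch (`face_rgb` stands in for a crop already resized to 224×224 with 3 channels). Note that `model.predict` also wants a leading batch dimension, which the original reshape didn't add:

```python
import numpy as np

# stands in for a face crop already resized to 224x224 with 3 channels
face_rgb = np.zeros((224, 224, 3), dtype=np.uint8)

# scale [0, 255] -> [-1, 1], the range MobileNetV2 expects
normalized = face_rgb / 127.5 - 1.0

# predict() expects a batch axis: (1, 224, 224, 3), not (224, 224, 3)
batch = np.expand_dims(normalized, axis=0)
print(batch.shape)  # (1, 224, 224, 3)
# result = model.predict(batch)
```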
One more thing you might want to look into: if your grayscale image is very small, you may get better results by adjusting the model to accept smaller images rather than enlarging them.
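On that note, Keras' MobileNetV2 accepts smaller input shapes when `include_top=False` (down to 32×32), so you could match the model to the crop size instead. A sketch, using `weights=None` purely to keep it self-contained (I believe the imagenet weights also load for `include_top=False` at these sizes, but check for your setup):

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Input

# smaller 96x96 RGB input; weights=None avoids the imagenet download in this sketch
baseModel = MobileNetV2(weights=None, include_top=False,
                        input_tensor=Input(shape=(96, 96, 3)))
print(baseModel.output_shape)  # (None, 3, 3, 1280)
```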