I am creating a car-interior classifier and am using Teachable Machine to generate a Keras .h5 model, which I can use with Python. The Teachable Machine website has a preview window that uses your webcam to classify images, and I held up another laptop in front of my webcam showing images of cars for testing. It got them all right. I also tested with the upload feature, which did even better. Then I downloaded the model and tested it in Python, and with a bunch of images that WORKED on the Teachable Machine website, it got most of them wrong, even though a lot of them were included in the training data. Does anyone know where this inaccuracy comes from? I thought it might be because I am resizing the images whereas the website crops them, in which case, how would I crop them? However, I am not certain this is the cause. Does anyone know the reason for this sudden drop in accuracy?
Python code for classifier:
from keras.models import load_model
import numpy as np
import os, cv2, time
import urllib.request
from PIL import Image
np.set_printoptions(suppress=True)
model = load_model("350EpochCarInteriorModel.h5", compile=False)
class_names = open("labels.txt", "r").readlines()
image_path = "\\testing\\"
for imageName in os.listdir(image_path):
    # Load the image and resize it to the 224x224 input the model expects
    image = cv2.imread(image_path + imageName)
    image = cv2.resize(image, (224, 224), interpolation=cv2.INTER_AREA)
    # Add a batch dimension and scale pixel values to the range [-1, 1]
    image = np.asarray(image, dtype=np.float32).reshape(1, 224, 224, 3)
    image = (image / 127.5) - 1
    prediction = model.predict(image)
    index = np.argmax(prediction)
    # labels.txt lines start with an index ("0 Brand^Model"), so drop the first
    # two characters and the trailing newline
    class_name = class_names[index][2:].strip()
    confidence_score = prediction[0][index]
    car_brand = class_name.split("^")[0]
    car_model = class_name.split("^")[1]
    print("Car: %s, Confidence: %s, Name: %s" % (class_name, confidence_score, car_brand + " " + car_model))
    cv2.waitKey(100)
    time.sleep(20)
cv2.destroyAllWindows()
I was facing the same problem. I fixed it by converting the image from BGR (OpenCV's default channel order) to RGB, and that worked for me. Try adding:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
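In the loop above, that conversion would go right after cv2.imread, before resizing and normalising, since OpenCV loads images in BGR order while the Teachable Machine model appears to expect RGB input. A minimal sketch of the adjusted preprocessing (the preprocess helper name is just for illustration; it assumes the same 224x224 input size and [-1, 1] scaling as in the question):

import cv2
import numpy as np

def preprocess(path):
    # OpenCV reads images in BGR order; convert to RGB before feeding the model
    image = cv2.imread(path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # Resize to the model's 224x224 input, add a batch dimension, scale to [-1, 1]
    image = cv2.resize(image, (224, 224), interpolation=cv2.INTER_AREA)
    image = np.asarray(image, dtype=np.float32).reshape(1, 224, 224, 3)
    return (image / 127.5) - 1

# Usage inside the existing loop:
# prediction = model.predict(preprocess(image_path + imageName))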