I trained AutoML Vision with 80x80x3 image samples. Training finished successfully and I downloaded the edge TFLite model. When implementing the TFLite model in Python, following this tutorial by TensorFlow, I realised that the TFLite model's input size is 224x224x3.
My question is:
For better prediction performance I would like to preprocess new images exactly the way AutoML Vision processed the images during training.
When I feed 80x80 images with input shape (1, 80, 80, 3) to the model, I get the exception "Cannot set tensor: Dimension mismatch"; see the code below.
Feeding 224x224 images works without exceptions. However, I would like to either use 80x80x3 images, as I did for training, or preprocess the 80x80x3 images the way AutoML Vision did during training, for example by resizing them to 224x224x3 or however AutoML Vision handled it.
test_sample.shape
Out: (80, 80, 3)
test_sample = test_sample.reshape(1, 80, 80, 3)
test_sample.shape
Out: (1, 80, 80, 3)
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
print(interpreter.get_input_details())
Out: [{'name': 'image', 'index': 0, 'shape': array([ 1, 224, 224, 3], dtype=int32), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.007874015718698502, 128)}]
output_details = interpreter.get_output_details()
# Test model on input data.
input_data = np.array(test_sample, dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
ValueError Traceback (most recent call last)
in engine
----> 1 interpreter.set_tensor(input_details[0]['index'], input_data)
/home/cdsw/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py in set_tensor(self, tensor_index, value)
173 ValueError: If the interpreter could not set the tensor.
174 """
--> 175 self._interpreter.SetTensor(tensor_index, value)
176
177 def resize_tensor_input(self, input_index, tensor_size):
/home/cdsw/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py in SetTensor(self, i, value)
134
135 def SetTensor(self, i, value):
--> 136 return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_SetTensor(self, i, value)
137
138 def GetTensor(self, i):
ValueError: Cannot set tensor: Dimension mismatch
AutoML Vision resizes the image for you. I observed this by inspecting the TFLite and TensorFlow models with netron: locate the input node and follow its outputs past the decoder to the resize operation.
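Given that built-in resize step, matching it on the client side means resizing the 80x80 sample to 224x224 before calling `set_tensor`. Below is a minimal, dependency-free sketch; the nearest-neighbour helper is an illustration only, and bilinear resampling (e.g. PIL's `Image.resize` or `tf.image.resize`) will likely match AutoML's own resize op more closely.

```python
import numpy as np

def resize_nearest(img, target_h, target_w):
    # Nearest-neighbour resize using plain NumPy index arithmetic:
    # map each target row/column back to its source row/column.
    h, w = img.shape[:2]
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return img[rows][:, cols]

# Stand-in for the 80x80x3 uint8 test_sample from the question
test_sample = np.zeros((80, 80, 3), dtype=np.uint8)

# Resize to the shape reported by input_details, then add the batch axis
input_data = resize_nearest(test_sample, 224, 224)[np.newaxis, ...]
print(input_data.shape)  # (1, 224, 224, 3)
```

With `input_data` in this shape (and still `uint8`, as `input_details` requires), `interpreter.set_tensor(input_details[0]['index'], input_data)` no longer raises the dimension mismatch.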