I want to create an object-detection app based on an ssd_mobilenet model that I retrained following a YouTube tutorial. I chose the model ssd_mobilenet_v2_coco
from the TensorFlow Model Zoo. After the retraining process, I got a model with the following structure:
- saved_model
  - variables (empty folder)
  - saved_model.pb
- checkpoint
- frozen_inference_graph.pb
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
- pipeline.config
In the same folder, I have a Python script with the following code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
After running this code, I got the following error:
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'.
It seems that the image width and height are missing from the model. When I use the model as in the YouTube video, it works.
After lots of research and attempts, I tried other approaches, such as running bazel/toco, but nothing helped me create a .tflite file.
As described in the documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model. For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described here. You need to provide the input tensor name and its shape, as well as the output tensor name and its shape. For ssd_mobilenet_v2_coco, you need to define the input shape on which you want to use the network, like this:
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model", input_shapes={"image_tensor": [1, 300, 300, 3]})
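Putting it together, here is a minimal sketch of the full conversion script. It assumes the TF 1.x converter API (where from_saved_model accepts input_shapes; this parameter does not exist in TF 2.x), and it pins the dynamic dimensions of image_tensor to one 300x300 RGB image, the resolution ssd_mobilenet_v2_coco was trained on. The helper concretize_shape and the default shape values are my own illustration, not part of the original answer:

```python
def concretize_shape(shape, defaults):
    """Replace each None (dynamic) dimension with a concrete default.

    The TFLite converter rejects None anywhere except the first dimension,
    so [None, None, None, 3] must become something like [1, 300, 300, 3].
    """
    return [dim if dim is not None else default
            for dim, default in zip(shape, defaults)]


def convert(saved_model_dir="saved_model",
            out_path="converted_model.tflite"):
    # Imported here so the helper above stays usable without TensorFlow.
    import tensorflow as tf  # assumes TF 1.x

    # Pin the SSD input to a batch of one 300x300 RGB image.
    input_shapes = {
        "image_tensor": concretize_shape([None, None, None, 3],
                                         [1, 300, 300, 3]),
    }

    converter = tf.lite.TFLiteConverter.from_saved_model(
        saved_model_dir, input_shapes=input_shapes)
    tflite_model = converter.convert()

    with open(out_path, "wb") as f:
        f.write(tflite_model)
```

Run convert() from the folder that contains the saved_model directory; adjust the tensor name and shape if your exported model differs.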