I fine-tuned an SSD model to recognize a custom object. I followed the tutorials, ran the training process, and exported the model; I tested it for inference and everything works great. So now I have a structure like:
exported-models/
└── SSD_custom_model/
    ├── checkpoint/
    ├── saved_model/
    └── pipeline.config
which I assume is what is referred to as a "SavedModel" in the TensorFlow documentation. So I wanted to convert this model to TensorFlow Lite to test it on an Android device. I checked the tutorials and I'm trying:
import tensorflow as tf

saved_model_dir = 'exported-models/SSD_custom_model/'

# Convert the model.
# I tried either just:
# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# or, with more options:
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops where needed
]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
And I'm getting the error
File "/home/lews/anaconda3/envs/tf/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1@__inference_call_func_11694" at "StatefulPartitionedCall@__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): requires element_shape to be 1D tensor during TF Lite transformation pass
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1@__inference_call_func_11694" at "StatefulPartitionedCall@__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
It seems to be complaining about the input shape ('requires element_shape to be 1D tensor during TF Lite transformation pass'). Should I have modified something about the model before the fine-tuning process, or after it?
Hi, I'm doing the same work and ran into the same error, but I solved it.
The model I converted is SSD MobileNet v2, and I'm using TensorFlow 2.4, so I believe this will work for you.
All you need to do is create a new conda environment (Python 3.8 is fine) and then install tf-nightly:
pip install tf-nightly
It's important to note that the tf-nightly version must be >= 2.5.
At first I used tf-nightly 2.3 and ran into another error; after upgrading to 2.5, the converter finally worked.
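With tf-nightly (>= 2.5) installed in the new environment, re-running the exact conversion code from your question should get past the TensorListReserve error. Before moving the file to Android, you can also sanity-check it with the Python TFLite interpreter; the sketch below is just an illustration (the model path is the one from your question, and the dummy-input smoke test assumes the interpreter's reported input shape can be used as-is):

import numpy as np
import tensorflow as tf

# Confirm the nightly build is the one being imported (should report >= 2.5).
print(tf.__version__)

# Smoke test: load the converted model and run one dummy inference.
# 'tflite/custom_model.tflite' is the output path from the question.
interpreter = tf.lite.Interpreter(model_path='tflite/custom_model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print('input:', input_details[0]['shape'], input_details[0]['dtype'])

# If the reported shape has dynamic dimensions, resize it with
# interpreter.resize_tensor_input(input_details[0]['index'], desired_shape)
# and call allocate_tensors() again before running.
dummy = np.random.random_sample(input_details[0]['shape']).astype(
    input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

for out in output_details:
    print(out['name'], interpreter.get_tensor(out['index']).shape)

Note that this uses the interpreter bundled with the full TensorFlow package; since the conversion enables SELECT_TF_OPS, the standalone tflite_runtime package does not ship the select-TF-ops (Flex) kernels, so test with the full package.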