I used the TensorFlow Lite converter in the terminal to convert my model from the .pb (frozen graph) format to the .tflite format, but it didn't work. However, the .tflite model provided by the speech commands Android demo works fine, so I would like to know how that model was converted.
https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/sequences/audio_recognition.md
Following the tutorial at the link above, I trained the model with this command:
(base) unizen@admin:~/tensorflow/tensorflow/examples/speech_commands$ python train.py
After the model was saved at the end of training, I created a frozen graph with:
(base) unizen@admin:~/tensorflow/tensorflow/examples/speech_commands$ python freeze.py \
--start_checkpoint=/tmp/speech_commands_train/conv.ckpt-18000 \
--output_file=/tmp/my_frozen_graph.pb
But when I tried converting the .pb model to the .tflite format:
(base) unizen@admin:~/tensorflow/tensorflow/examples/speech_commands$ tflite_convert \
--saved_model_dir /home/unizen/Downloads/my_frozen_graph.pb \
--input_format TENSORFLOW_GRAPHDEF \
--input_arrays decoded_sample_data \
--input_shapes 16000,1 \
--output_arrays labels_softmax \
--output_format TFLITE \
--output_file /tmp/sprc.tflite \
--allow_custom_ops
I got this error:
usage: tflite_convert [-h] --output_file OUTPUT_FILE
                      (--saved_model_dir SAVED_MODEL_DIR | --keras_model_file KERAS_MODEL_FILE)
tflite_convert: error: one of the arguments --saved_model_dir --keras_model_file is required
Could you kindly suggest how to convert the frozen model to a .tflite model?
This error

tflite_convert: error: one of the arguments --saved_model_dir --keras_model_file is required

indicates that you are using TensorFlow >= 2.0. Frozen graphs (.pb) have been deprecated since 2.0; developers are expected to save their models as SavedModels or Keras models instead, so the tflite_convert command no longer accepts them.
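To illustrate that version split, here is a minimal stdlib-only sketch (the helper name frozen_graph_supported is mine, not a TensorFlow API) that gates on the major component of a TensorFlow version string:

```python
def frozen_graph_supported(tf_version: str) -> bool:
    # The tflite_convert CLI accepts frozen GraphDefs (--graph_def_file)
    # only in the TensorFlow 1.x line; in 2.x it requires a SavedModel
    # or a Keras model.
    return int(tf_version.split(".")[0]) < 2

print(frozen_graph_supported("1.15.0"))  # True
print(frozen_graph_supported("2.1.0"))   # False
```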
But if you install, for example, TensorFlow 1.15, you should be able to convert it like this:
tflite_convert \
--output_file=/output.tflite \
--graph_def_file /path/to/my_frozen_graph.pb \
--input_arrays decoded_sample_data,decoded_sample_data:1 \
--output_arrays labels_softmax \
--allow_custom_ops
Or, if you don't want to install TensorFlow 1.15, you can do the conversion with the Python API via tf.compat.v1:
import tensorflow as tf

# Build a converter directly from the frozen graph. The DecodeWav op in the
# speech commands graph has two outputs: the audio samples
# (decoded_sample_data) and the sample rate (decoded_sample_data:1).
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "./conv_actions_frozen.pb",
    input_arrays=["decoded_sample_data", "decoded_sample_data:1"],
    output_arrays=["labels_softmax"],
)
converter.allow_custom_ops = True  # the graph contains custom audio ops

tflite_model = converter.convert()
with open("output.tflite", "wb") as f:
    f.write(tflite_model)
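Either way, you can sanity-check the result without loading it into an interpreter: a .tflite file is a FlatBuffer whose 4-byte file identifier "TFL3" sits at byte offset 4. A small stdlib-only check (the function name is mine, not a TensorFlow API):

```python
def looks_like_tflite(data: bytes) -> bool:
    # TFLite models are FlatBuffers: 4 bytes of root-table offset,
    # then the file identifier "TFL3".
    return len(data) >= 8 and data[4:8] == b"TFL3"

# Usage:
# with open("output.tflite", "rb") as f:
#     print(looks_like_tflite(f.read()))
```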