
ValueError: No 'serving_default' in the SavedModel's SignatureDefs. Possible values are 'name_of_my_model'


I am trying to quantize a TensorFlow SavedModel using the following command line:

tflite_convert \
  --output_file=/tmp/foo.tflite \
  --saved_model_dir=/tmp/saved_model

But I get the following error:

ValueError: No 'serving_default' in the SavedModel's SignatureDefs. Possible values are 'name_of_my_model'

I have already checked: a signature_def_map was defined when exporting the model.
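The error occurs because the converter looks up the literal key 'serving_default' in the SignatureDef map, and the model was exported under a custom key instead. A minimal pure-Python sketch of that lookup (the function name and map are hypothetical, not the converter's actual internals):

```python
# Hypothetical sketch of the signature lookup tflite_convert performs.
SERVING_DEFAULT = "serving_default"  # the key the converter expects by default

def pick_signature(signature_defs, key=SERVING_DEFAULT):
    """Mimic the converter's lookup: raise ValueError if the key is absent."""
    if key not in signature_defs:
        raise ValueError(
            f"No '{key}' in the SavedModel's SignatureDefs. "
            f"Possible values are '{', '.join(signature_defs)}'"
        )
    return signature_defs[key]

# The exported model only defines a custom key:
sigs = {"name_of_my_model": object()}
# pick_signature(sigs) raises ValueError;
# pick_signature(sigs, "name_of_my_model") succeeds.
```

This is why the conversion fails even though a signature_def_map was provided at export time: the map simply does not contain the default key.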

The command:

saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve

returns

The given SavedModel MetaGraphDef contains SignatureDefs with the following keys:
SignatureDef key: 'name_of_my_model'

and:

The given SavedModel SignatureDef contains the following input(s):
  inputs['is_training'] tensor_info:
      dtype: DT_BOOL
      shape: ()
      name: is_training:0
  inputs['question1_embedding'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 35, 300)
      name: question1_embedding:0
  inputs['question2_embedding'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 35, 300)
      name: question2_embedding:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['prediction'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: prediction:0
Method name is: tensorflow/serving/predict

Solution

  • You should be able to use the --saved_model_signature_key flag to specify the signature name when converting:

    tflite_convert \
      --output_file=/tmp/foo.tflite \
      --saved_model_dir=/tmp/saved_model \
      --saved_model_signature_key='name_of_my_model'
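Alternatively, if you control the export code, you can re-export the model under the default key so no extra flag is needed. A minimal sketch of the idea, shown as a plain dict rename (the helper name is hypothetical; in the real export code you would apply the same rename to the signature_def_map passed to the SavedModel builder):

```python
# Hypothetical helper: rename the signature key at export time so the
# converter's default lookup ('serving_default') succeeds.
def rename_to_serving_default(signature_def_map, old_key):
    """Return a copy of the map with old_key renamed to 'serving_default'."""
    new_map = dict(signature_def_map)
    new_map["serving_default"] = new_map.pop(old_key)
    return new_map

sigs = rename_to_serving_default({"name_of_my_model": object()},
                                 "name_of_my_model")
# sigs now contains the key tflite_convert expects.
```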