tensorflow, google-coral

Convert frozen graph to TFLite for Coral using tflite_convert


I'm using MobileNetV2 and trying to get it working on Google Coral. Everything seems to work except that the Coral Web Compiler throws a seemingly random error, Uncaught application failure. So I think the problem is in the intermediary steps required. For example, I'm converting the frozen graph with tflite_convert:

tflite_convert \
  --graph_def_file=optimized_graph.pb \
  --output_format=TFLITE \
  --output_file=mobilenet_v2_new.tflite \
  --inference_type=FLOAT \
  --inference_input_type=FLOAT \
  --input_arrays=input \
  --output_arrays=final_result \
  --input_shapes=1,224,224,3

What am I getting wrong?


Solution

  • This is most likely because your model is not quantized. Edge TPU devices do not currently support float-based model inference. For the best results, you should enable quantization during training (described in the link; a minimal training sketch also appears at the end of this answer). However, you can also apply quantization during TensorFlow Lite conversion.

    With post-training quantization, you sacrifice some accuracy but can test something out more quickly. When you convert your graph to TensorFlow Lite format, set inference_type to QUANTIZED_UINT8 and pass the quantization parameters (mean, std_dev, and default min/max ranges) on the command line. These map the uint8 input back to the float range the network expects (real_value = (quantized_value - mean) / std_dev), so mean=128 and std_dev=127 maps [0, 255] to roughly [-1, 1].

    tflite_convert \
      --graph_def_file=optimized_graph.pb \
      --output_format=TFLITE \
      --output_file=mobilenet_v2_new.tflite \
      --inference_type=QUANTIZED_UINT8 \
      --input_arrays=input \
      --output_arrays=final_result \
      --input_shapes=1,224,224,3 \
      --mean_values=128 --std_dev_values=127 \
      --default_ranges_min=0 --default_ranges_max=255
    

    You can then pass the quantized .tflite file to the model compiler.
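    If you prefer the Python API over the tflite_convert CLI, the following is a rough equivalent sketch (assuming TensorFlow 1.x and that your input/output tensors really are named input and final_result, as in your command):

    import tensorflow as tf  # TensorFlow 1.x

    # Build a converter directly from the frozen graph.
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file="optimized_graph.pb",
        input_arrays=["input"],
        output_arrays=["final_result"],
        input_shapes={"input": [1, 224, 224, 3]})

    # Fully quantized (uint8) model, as the Edge TPU requires.
    converter.inference_type = tf.uint8  # same as QUANTIZED_UINT8
    # (mean, std_dev) per input tensor, matching --mean_values/--std_dev_values.
    converter.quantized_input_stats = {"input": (128.0, 127.0)}
    # Fallback ranges for tensors without recorded min/max,
    # matching --default_ranges_min/--default_ranges_max.
    converter.default_ranges_stats = (0, 255)

    tflite_model = converter.convert()
    with open("mobilenet_v2_new.tflite", "wb") as f:
        f.write(tflite_model)

    The resulting file should be equivalent to the one produced by the command above.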

    For more details on the Edge TPU model requirements, check out TensorFlow models on the Edge TPU.
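    As for enabling quantization during training: in TensorFlow 1.x the tf.contrib.quantize rewriter inserts fake-quantization nodes so that real min/max ranges are learned, which means the --default_ranges_* flags are no longer needed. A minimal sketch (with a stand-in model rather than your MobileNetV2 graph):

    import tensorflow as tf  # TensorFlow 1.x; tf.contrib is not available in TF 2.x

    train_graph = tf.Graph()
    with train_graph.as_default():
        images = tf.placeholder(tf.float32, [None, 224, 224, 3], name="input")
        labels = tf.placeholder(tf.int64, [None])

        # Stand-in model; in practice this is where MobileNetV2 would be built.
        net = tf.layers.conv2d(images, 32, 3, activation=tf.nn.relu)
        net = tf.reduce_mean(net, axis=[1, 2])
        logits = tf.layers.dense(net, 5, name="final_result")
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

        # Insert fake-quantization ops BEFORE creating the optimizer.
        tf.contrib.quantize.create_training_graph(input_graph=train_graph, quant_delay=0)
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    # For export, rebuild the model in a fresh graph, call
    # tf.contrib.quantize.create_eval_graph() on it, restore the trained
    # checkpoint, freeze the graph, and then run tflite_convert with
    # --inference_type=QUANTIZED_UINT8 (no --default_ranges_* needed).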