Tags: tensorflow, raspberry-pi, tensorflow2.0, tensorflow-lite

How to convert a TF2 model so it will run on the TFLite interpreter


Background: I am trying to convert the TF2 model for SSD MobileNet V2 FPNLite 320x320 (for example) from the official TF model zoo. The model should eventually run on a Raspberry Pi, so I would like it to run on the TFLite interpreter (without the full TF runtime). The docs imply that SSD model conversion is supported.

What's happening: the conversion process is detailed in this colab notebook. It fails with the error:

ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___23519" at "StatefulPartitionedCall@__inference_signature_wrapper_25508") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    tf.Size {device = ""}

If I add the flag tf.lite.OpsSet.SELECT_TF_OPS (see the sketch below), the conversion succeeds, but the resulting model won't run on the Raspberry Pi, since the interpreter there does not have those ops.
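
A minimal sketch of the converter setup described above (the SavedModel path and output filename are placeholders, not the exact ones from the notebook):

    import tensorflow as tf

    # Placeholder path to the exported SSD MobileNet V2 FPNLite SavedModel
    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")

    # Built-in TFLite ops only -- this configuration raises the
    # "'tf.Size' op is neither a custom op nor a flex op" error above.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]

    # Adding SELECT_TF_OPS makes conversion succeed, at the cost of needing
    # an interpreter with the TF Select (Flex) ops linked in at inference time:
    # converter.target_spec.supported_ops = [
    #     tf.lite.OpsSet.TFLITE_BUILTINS,
    #     tf.lite.OpsSet.SELECT_TF_OPS,
    # ]

    tflite_model = converter.convert()
    with open("ssd_mobilenet_v2_fpnlite.tflite", "wb") as f:
        f.write(tflite_model)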

Can this be done? Has anyone succeeded?


Solution

  • Since tf.Size is not natively supported by TFLite, you can use TF Select mode, which falls back to TensorFlow for the missing op. During conversion it is enabled with the SELECT_TF_OPS flag you already tried. At inference time you then need to use an interpreter that has the Select (Flex) ops linked in; see the guide on running inference and the sketch below.
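
A minimal inference sketch, assuming the model was converted with SELECT_TF_OPS and that an interpreter with the Flex delegate is available on the Pi (the full tensorflow package includes one; as far as I know the plain tflite_runtime wheel does not). The model filename is a placeholder:

    import numpy as np
    import tensorflow as tf  # full TF package: its tf.lite.Interpreter has the Flex (Select) ops linked

    # Placeholder filename for the model converted with SELECT_TF_OPS
    interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet_v2_fpnlite_select.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy image matching the model's expected input shape and dtype
    dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()

    # SSD detection outputs (boxes, classes, scores, num_detections; order may vary by export)
    for out in output_details:
        print(out["name"], interpreter.get_tensor(out["index"]).shape)

If installing full TensorFlow on the Pi is not acceptable, the alternative is to build a TFLite interpreter with the Flex delegate included, as described in the TF Select guide.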