I am trying to retrain a custom object detection model for the Coral USB Accelerator, following the Coral AI tutorial at this link: https://coral.ai/docs/edgetpu/retrain-detection/#requirements
After retraining the ssd_mobilenet_v2 model, I converted it to an Edge TPU model with the Edge TPU Compiler. The compiler output is:
| Operator          | Count | Status                                            |
|-------------------|-------|---------------------------------------------------|
| CUSTOM            | 1     | Operation is working on an unsupported data type  |
| ADD               | 10    | Mapped to Edge TPU                                |
| LOGISTIC          | 1     | Mapped to Edge TPU                                |
| CONCATENATION     | 2     | Mapped to Edge TPU                                |
| RESHAPE           | 13    | Mapped to Edge TPU                                |
| CONV_2D           | 55    | Mapped to Edge TPU                                |
| DEPTHWISE_CONV_2D | 17    | Mapped to Edge TPU                                |
And visualized in Netron:
The "CUSTOM" operator is not mapped. All other operations are mapped and run on the TPU, but "CUSTOM" runs on the CPU. I saw the same operator in ssd_mobilenet_v1.
How can I map all operators to the Edge TPU? What is this custom operator? (The supported operations are listed here: https://coral.ai/docs/edgetpu/models-intro/#supported-operations)
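For reference, the compile step was invoked roughly like this (a minimal sketch; it assumes edgetpu_compiler is on PATH, and the .tflite file name is a placeholder for the quantized model exported by the tutorial):

```python
# Minimal sketch: invoking the Edge TPU Compiler from Python.
# Assumes edgetpu_compiler is installed and on PATH; the file name below
# is a placeholder for the quantized model exported by the tutorial.
import subprocess

subprocess.run(["edgetpu_compiler", "output_tflite_graph.tflite"], check=True)
# Prints the per-operator mapping table shown above and writes
# output_tflite_graph_edgetpu.tflite plus a compilation .log file.
```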
This is the correct output for an SSD model. TFLite_Detection_PostProcess is the custom op that does not run on the Edge TPU. If you open one of our default SSD models from https://coral.ai/models/ in Netron, you'll see the PostProcess op runs on the CPU there as well.
In the case of your model, every part of the model has been successfully converted. The last stage (which takes the raw model output and converts it into usable detections such as boxes, classes, and scores) is a custom implementation in TFLite that is already optimized for speed, but it is generic compute rather than TFLite ops that the Edge TPU accelerates.
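For completeness, here is a minimal inference sketch (assuming pycoral is installed and a Coral device is attached; the model and image file names are placeholders). The CPU-resident PostProcess is transparent at runtime: you simply read the detection outputs as usual.

```python
# Minimal inference sketch with pycoral (file names are placeholders).
# The convolution layers execute on the Edge TPU; TFLite_Detection_PostProcess
# runs on the CPU, but its outputs are read like any other TFLite output.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("output_tflite_graph_edgetpu.tflite")
interpreter.allocate_tensors()

image = Image.open("test.jpg")
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))

interpreter.invoke()

# Boxes, classes, and scores come straight from the PostProcess op.
for obj in detect.get_objects(interpreter, score_threshold=0.4, image_scale=scale):
    print(obj.id, obj.score, obj.bbox)
```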