I'm using the MobileNet SSD v2 (COCO) object detection model to detect trains from a live camera feed.
My goal is to reduce the inference time.
The COCO dataset is available here - there are 3,745 annotated images of trains.
I'm using this tutorial by Coral to retrain the mobilenet model.
So if I create a dataset using the train images from COCO and retrain this specific model, will the inference time be reduced? Or is creating a new model from scratch the only way?
Seconding @Shubham's answer: this won't make any difference in inference time, as long as you keep the input/output size the same. Inference time depends on the model architecture and input resolution, not on how many classes or training images you used. After retraining the model, you'll have a fully quantized tflite model; as long as you follow the tutorial and compile it for the edgetpu, you can expect similar results to this benchmark.
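If you want to verify this empirically, you can time the interpreter before and after retraining. Here's a minimal timing sketch; the model path and delegate name in the comments are illustrative, and the timing helper itself is generic:

```python
import time

def mean_latency_ms(infer, n_warmup=5, n_runs=50):
    """Average wall-clock latency of a zero-arg callable, in milliseconds."""
    # Warm-up runs let caches and the Edge TPU pipeline settle first.
    for _ in range(n_warmup):
        infer()
    t0 = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - t0) * 1000.0 / n_runs

# Usage with a compiled Edge TPU model (assumes tflite_runtime is installed;
# file and delegate names below are examples, adjust for your setup):
#
# from tflite_runtime.interpreter import Interpreter, load_delegate
# interpreter = Interpreter(
#     model_path="model_edgetpu.tflite",
#     experimental_delegates=[load_delegate("libedgetpu.so.1")])
# interpreter.allocate_tensors()
# # ...set the input tensor here...
# print(mean_latency_ms(interpreter.invoke))
```

Run this on both the stock and the retrained model; if the input size is unchanged, the numbers should be essentially identical.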