Tags: android, ios, tensorflow, object-detection, tensorflow-lite

Performance difference of TensorFlow Lite on Android and iOS


I've trained a model to detect custom objects on mobile devices (Android and iOS); my code is based on TensorFlow's example apps for iOS and Android. During my tests I've noticed a difference in performance between the Android app and the iOS app.

Some examples of performance (number of objects detected):

IMG  - iOS - Android
img1 -  57 - 74
img2 -   9 - 33
img3 -  43 - 78
img4 -  17 - 25

I'm using a confidence threshold of 70% on both platforms. The real number of objects is slightly higher than the Android result.
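For reference, counts like these can be reproduced off-device with the Python TFLite interpreter, which gives a platform-independent baseline to compare both apps against. Below is a minimal sketch; the file names and the output-tensor ordering of the SSD postprocess op are assumptions, not something taken from the question.

```python
# Minimal sketch: run the same detect.tflite off-device and count
# detections above the 70% confidence threshold.
import numpy as np
from PIL import Image
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the test image to the model's input size (300x300 for SSD MobileNet v2).
_, height, width, _ = input_details[0]["shape"]
image = Image.open("img1.jpg").convert("RGB").resize((width, height))
input_data = np.expand_dims(np.array(image, dtype=input_details[0]["dtype"]), axis=0)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# TFLite_Detection_PostProcess typically emits boxes, classes, scores, count;
# verify the ordering via output_details if the indices differ.
scores = interpreter.get_tensor(output_details[2]["index"])[0]
print("detections above 0.7:", int(np.sum(scores > 0.7)))
```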

I did transfer learning using ssd_mobilenet_v2_quantized_coco from the TensorFlow model zoo and samples annotated with labelImg. I ran the training on Google Cloud following this tutorial.
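For context, the quantized TFLite conversion step looks roughly like the sketch below. It assumes the TF 1.x converter (in TF 2.x it lives under tf.compat.v1.lite) and the graph exported by export_tflite_ssd_graph.py; the tensor names, input shape, and input stats are typical for this pipeline and should be treated as assumptions.

```python
# Sketch: convert the exported tflite_graph.pb into a fully quantized detect.tflite.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
converter.inference_type = tf.uint8  # quantized weights and uint8 input
converter.quantized_input_stats = {"normalized_input_image_tensor": (128, 128)}  # (mean, std)
converter.allow_custom_ops = True    # needed for the detection postprocess op

with open("detect.tflite", "wb") as f:
    f.write(converter.convert())
```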

My question is: what should I investigate to find the reason for the performance difference and fix it? My model should give the customer the same result on both mobile platforms.

If something is unclear, please let me know; any help would be great. Thanks!


Solution

  • As far as I could research, the problem is with the TensorFlow example apps. The Android version works fine, but the iOS version has a bug in its preprocessing logic. For floating-point models the problem was solved in this GitHub issue a few days ago, but for quantized models (my case) it is still open. If anyone is interested in contributing or wants more details, check out the issue I've opened on GitHub. The preprocessing mismatch is illustrated in the sketch below.
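For anyone hitting a similar mismatch, the core point is that the input preprocessing has to match the model type: a quantized model expects raw uint8 pixels, while a float model expects normalized values. Below is a minimal sketch of that distinction; the 127.5 normalization constants are what MobileNet-style float models typically use and are an assumption here.

```python
# Sketch: pick the input preprocessing based on the model's input dtype,
# which is what both example apps need to agree on.
import numpy as np
from PIL import Image
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]

_, height, width, _ = input_details["shape"]
pixels = np.array(Image.open("img1.jpg").convert("RGB").resize((width, height)))

if input_details["dtype"] == np.uint8:
    # Quantized model: feed raw 0-255 pixel values, no normalization.
    input_data = np.expand_dims(pixels.astype(np.uint8), axis=0)
else:
    # Float model: normalize to roughly [-1, 1] as the example apps do.
    input_data = np.expand_dims((pixels.astype(np.float32) - 127.5) / 127.5, axis=0)

interpreter.set_tensor(input_details["index"], input_data)
interpreter.invoke()
```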