I use the following code to change the delegate on my phone (G3226):
try {
    if (delegateNum == 1) {
        // Run on the GPU delegate
        GpuDelegate delegate = new GpuDelegate();
        Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
        d.tfLite = new Interpreter(loadModelFile(assetManager, modelFilename), options);
    } else if (delegateNum == 2) {
        // Run on the NNAPI delegate
        NnApiDelegate delegate = new NnApiDelegate();
        Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
        d.tfLite = new Interpreter(loadModelFile(assetManager, modelFilename), options);
    } else {
        // Default: plain CPU interpreter
        d.tfLite = new Interpreter(loadModelFile(assetManager, modelFilename));
    }
} catch (Exception e) {
    throw new RuntimeException(e);
}
But the performance is almost the same in all three cases, and I'm not sure why.
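One thing worth ruling out is the measurement itself: the first inference includes delegate initialization, so single-shot timings can make all three paths look alike. A minimal, plain-Java timing sketch (the `Runnable` body stands in for the real `d.tfLite.run(input, output)` call; `DelegateBenchmark` and `medianNanos` are names I made up for illustration):

```java
import java.util.Arrays;

// Rough latency harness: time a workload several times and report the median,
// so delegate comparisons aren't skewed by warm-up or one-off pauses.
public class DelegateBenchmark {
    static long medianNanos(Runnable workload, int warmup, int runs) {
        // Warm-up runs absorb delegate initialization and JIT effects.
        for (int i = 0; i < warmup; i++) workload.run();
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            workload.run();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples[runs / 2];
    }

    public static void main(String[] args) {
        // Placeholder workload standing in for interpreter.run(...).
        long median = medianNanos(() -> {
            double acc = 0;
            for (int i = 0; i < 100_000; i++) acc += Math.sqrt(i);
        }, 3, 11);
        System.out.println("median latency (us): " + median / 1_000);
    }
}
```

Comparing medians per delegate, rather than a single timed call, gives a fairer picture of whether the delegate is doing anything.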
TFLite versions
implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-support:0.0.0-nightly'
Possible reasons I can guess:
If it is reason 3, how can I check whether my phone supports the GPU or NNAPI? Thanks.
A few things:
The quantized SSD model is probably out of date; please look at these ones for better accuracy. You will have to convert them with these instructions to get the .tflite versions, though.
SSD models have a large post-processing step (non-max suppression, NMS) that doesn't get accelerated by delegates. So the speedup for SSD models is usually smaller than for simpler models like the MobileNets used for classification.
NNAPI only works on Android 8.1 (API level 27) or later; is that true for your phone? Also, NNAPI may not actually accelerate models on every chipset, so there's that case too.
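The API-level check can be done in plain Java; on a device you would pass in `Build.VERSION.SDK_INT` (the class and method names here are hypothetical, kept Android-free so the logic is testable off-device):

```java
// NNAPI requires Android 8.1, i.e. API level 27 (Build.VERSION_CODES.O_MR1).
public class NnapiCheck {
    static final int MIN_NNAPI_API_LEVEL = 27; // Build.VERSION_CODES.O_MR1

    // On-device you'd call: nnapiEligible(Build.VERSION.SDK_INT)
    static boolean nnapiEligible(int sdkInt) {
        return sdkInt >= MIN_NNAPI_API_LEVEL;
    }

    public static void main(String[] args) {
        System.out.println(nnapiEligible(26)); // Android 8.0 -> false
        System.out.println(nnapiEligible(28)); // Android 9   -> true
    }
}
```

Even when the API level is high enough, the vendor NNAPI driver decides which ops actually run accelerated; unsupported ops silently fall back to the CPU, which can also explain identical timings.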
The GPU delegate doesn't support quantized models (yet), so a quantized SSD model would silently fall back to the CPU.
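The two constraints above can be folded into one selection rule. A sketch of that logic (the class and method names are made up; on a device, `gpuSupported` would come from the GPU delegate's compatibility check in `tensorflow-lite-gpu` and `sdkInt` from `Build.VERSION.SDK_INT`):

```java
// Hypothetical delegate chooser encoding the constraints from the answer:
// - GPU delegate: skip for quantized models (unsupported at the time of writing)
// - NNAPI: requires API level >= 27 (Android 8.1)
// - otherwise fall back to the CPU interpreter
public class DelegateChooser {
    static String chooseDelegate(boolean gpuSupported, boolean modelIsQuantized, int sdkInt) {
        if (gpuSupported && !modelIsQuantized) return "GPU";
        if (sdkInt >= 27) return "NNAPI";
        return "CPU";
    }

    public static void main(String[] args) {
        System.out.println(chooseDelegate(true, true, 28));   // quantized model -> NNAPI
        System.out.println(chooseDelegate(false, false, 26)); // old device, no GPU -> CPU
    }
}
```

Whatever rule you pick, it's worth logging which branch was taken at runtime, since silent CPU fallback is exactly what makes "all delegates perform the same" hard to diagnose.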