I am using a trained model (frozen graph) in my Android app that is based on TensorFlow's premade-estimator iris example, as shown in this link:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/iris.py
I have modified iris.py to suit my needs and added some statements to freeze the graph, so that I have a .pb file to put into my Android app's assets folder.
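For reference, the freezing statements were along these lines. This is a minimal sketch of freezing with TF 1.x's `graph_util.convert_variables_to_constants`, not my exact code: the stand-in graph, variable names, and output node name here are illustrative placeholders for the real estimator graph.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Tiny stand-in graph with one variable (placeholder for the real model).
x = tf.placeholder(tf.float32, shape=[None, 5], name="input")
w = tf.Variable(tf.ones([5, 3]), name="weights")
logits = tf.matmul(x, w, name="logits")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Replace all Variable nodes with Const nodes holding their current values.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["logits"])

# Serialize the frozen GraphDef to a .pb file for the app's assets folder.
with tf.gfile.GFile("estimator_frozen_graph.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```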
To use TensorFlow in my Android app, I have added the following line to my build.gradle (Module: app) file (the last statement in the dependencies block):
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'com.android.support:appcompat-v7:27.1.1'
    implementation 'com.android.support.constraint:constraint-layout:1.1.2'
    implementation 'no.nordicsemi.android.support.v18:scanner:1.0.0'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
    implementation 'org.tensorflow:tensorflow-android:+'
}
With my frozen graph in place, I tested whether TensorFlow was working in my app by executing these statements:
// testing TensorFlow feature
TensorFlowInferenceInterface tfInterface = new TensorFlowInferenceInterface(
        getAssets(), "estimator_frozen_graph.pb");
Graph graph = tfInterface.graph();
Toast.makeText(ScanActivity.this, "Tensorflow Graph Init Success",
        Toast.LENGTH_SHORT).show();

int[] inputValues = {1, 1, 121, 800, 300};
long rowDim = 1;
long columnDim = 5;

tfInterface.feed("dnn/input_from_feature_columns/input_layer/concat:0",
        inputValues, rowDim, columnDim);

String[] outputNames = {"dnn/logits/BiasAdd:0"};
boolean logstats = false;
tfInterface.run(outputNames, logstats);

float[] outputs = new float[6];
tfInterface.fetch("dnn/logits/BiasAdd:0", outputs);

// loop condition fixed from i <= outputs.length, which would throw
// an ArrayIndexOutOfBoundsException on the final iteration
for (int i = 0; i < outputs.length; i++) {
    System.out.println(outputs[i]);
}
When the program reaches the line:
tfInterface.run(outputNames, logstats);
the following error appears in Android Studio's logcat:
Caused by: java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Iterator' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
[[Node: Iterator = Iterator[container="", output_shapes=[[?], [?], [?], [?], [?], [?]], output_types=[DT_INT64, DT_INT64, DT_INT64, DT_INT64, DT_INT64, DT_INT64], shared_name=""]()]]
at org.tensorflow.Session.run(Native Method)
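While debugging this, it can help to list which op types actually ended up in the frozen .pb: the `Iterator` node in the message above comes from the tf.data input pipeline, which the mobile runtime has no kernels for. A hedged sketch (assuming the .pb is a binary GraphDef; the file path is a placeholder):

```python
import tensorflow.compat.v1 as tf


def list_ops(pb_path):
    """Return the set of op types in a frozen (binary) GraphDef."""
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return {node.op for node in graph_def.node}


if __name__ == "__main__":
    import os
    path = "estimator_frozen_graph.pb"  # adjust to your .pb file
    if os.path.exists(path):
        # Ops such as 'Iterator' or 'IteratorGetNext' indicate the tf.data
        # input pipeline was frozen into the graph.
        print(sorted(list_ops(path)))
```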
I have searched for similar questions and problems, but I cannot seem to find a viable solution.
Please tell me if I need to add any information to ease the process of getting assistance here. Thanks in advance.
I have solved my own problem. It turns out I had not properly understood the concept of freezing a model in TensorFlow. The short answer to this question is: