I'm trying to run a TensorFlow Lite model on a microcontroller, namely a SparkFun Edge board, but I'm having trouble loading the model onto the device. Here are the steps I went through:
I built and quantized the model using the tf.keras API. The model's input is float32, which becomes int8 after flowing through the Quantize node. Similarly, there is a Dequantize node towards the output of the graph that does the exact opposite, namely it takes int8 values and converts them back to float32, as shown below:

(a screenshot of the quantized model graph was included here)
Though this is not how it was supposed to be (i.e. int8 inputs and outputs, without Quantize and Dequantize nodes), it's fine as long as I can get it to work. To register the extra operators on the device, I replaced this:

static tflite::MicroOpResolver<3> micro_op_resolver;
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
                             tflite::ops::micro::Register_DEPTHWISE_CONV_2D());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D,
                             tflite::ops::micro::Register_CONV_2D());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_AVERAGE_POOL_2D,
                             tflite::ops::micro::Register_AVERAGE_POOL_2D());
with this:
static tflite::MicroOpResolver<10> micro_op_resolver;
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
                             tflite::ops::micro::Register_DEPTHWISE_CONV_2D(),
                             1, 3);
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D,
                             tflite::ops::micro::Register_CONV_2D(), 1, 3);
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_AVERAGE_POOL_2D,
                             tflite::ops::micro::Register_AVERAGE_POOL_2D());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_ADD,
                             tflite::ops::micro::Register_ADD());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_SOFTMAX,
                             tflite::ops::micro::Register_SOFTMAX());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_FULLY_CONNECTED,
                             tflite::ops::micro::Register_FULLY_CONNECTED());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_QUANTIZE,
                             tflite::ops::micro::Register_QUANTIZE());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_DEQUANTIZE,
                             tflite::ops::micro::Register_DEQUANTIZE(), 1, 2);
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_RELU,
                             tflite::ops::micro::Register_RELU());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_RELU6,
                             tflite::ops::micro::Register_RELU6());
which registers all the operators that appear in my custom model. Following the instructions provided for the SparkFun Edge board, I managed to flash the application to the board, but when I run it, it outputs the following error:
Didn't find op for builtin opcode 'QUANTIZE' version '1'
Failed to get registration from op code d
AllocateTensors() failed
I don't understand what I'm doing wrong, since the QUANTIZE operation does get registered with micro_op_resolver.AddBuiltin(...) (see the last code snippet).
I get the same error with the latest version of TensorFlow (and with tf-nightly). Another user and I have already opened issues on TensorFlow's GitHub, here and here.
I also got the same error as you for QUANTIZE when using MicroOpResolver. You can try using AllOpsResolver instead, but you're likely to run into the same problem mentioned in the issues:
static tflite::ops::micro::AllOpsResolver resolver;
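For completeness, this is roughly how the resolver plugs into the interpreter in the TFLM examples of that era; the surrounding variable names (model, tensor_arena, kTensorArenaSize, error_reporter) follow the micro_speech example and are assumptions here, not code from your project:

```cpp
static tflite::ops::micro::AllOpsResolver resolver;
static tflite::MicroInterpreter static_interpreter(
    model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
interpreter = &static_interpreter;

// This is the call that fails when an op the model needs is not registered.
TfLiteStatus allocate_status = interpreter->AllocateTensors();
```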