Tags: c++, microcontroller, tensorflow-lite

Cannot load TensorFlow Lite model on microcontroller


I'm trying to run a TensorFlow Lite model on a microcontroller, namely a SparkFun Edge board, but I'm having some trouble loading the model onto the device. Here are the steps I went through:

  1. Trained my own model in TensorFlow 2.1 using the tf.keras API
  2. Performed full integer quantization of the weights and activations following the instructions provided on the TensorFlow website. I'm not sure why, but I wasn't able to get a model with int8 inputs and outputs even though I followed the instructions. After quantization, my model's input type is still float32, and the values only become int8 after flowing through a Quantize node. Similarly, there is a Dequantize node towards the output of the graph that does the exact opposite: it takes int8 values and converts them back to float32. Though this is not how it was supposed to be (i.e. int8 inputs and outputs without Quantize and Dequantize nodes), it's fine as long as I can get it to work
  3. Edited this file (as well as some other files, but this one is the most important), which is part of an example image-classification application for the SparkFun Edge board hosted in TensorFlow's GitHub repository (the application uses TensorFlow Lite for Microcontrollers' C++ API). More specifically, I replaced the following code:
static tflite::MicroOpResolver<3> micro_op_resolver;
micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
    tflite::ops::micro::Register_DEPTHWISE_CONV_2D());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D,
                             tflite::ops::micro::Register_CONV_2D());
micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_AVERAGE_POOL_2D,
                             tflite::ops::micro::Register_AVERAGE_POOL_2D());

with this:

static tflite::MicroOpResolver<10> micro_op_resolver;

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
    tflite::ops::micro::Register_DEPTHWISE_CONV_2D(),
    1,
    3
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_CONV_2D, 
    tflite::ops::micro::Register_CONV_2D(),
    1,
    3
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_AVERAGE_POOL_2D, 
    tflite::ops::micro::Register_AVERAGE_POOL_2D()
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_ADD, 
    tflite::ops::micro::Register_ADD()
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_SOFTMAX, 
    tflite::ops::micro::Register_SOFTMAX()
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_FULLY_CONNECTED, 
    tflite::ops::micro::Register_FULLY_CONNECTED()
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_QUANTIZE, 
    tflite::ops::micro::Register_QUANTIZE()
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_DEQUANTIZE, 
    tflite::ops::micro::Register_DEQUANTIZE(),
    1,
    2
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_RELU, 
    tflite::ops::micro::Register_RELU()
);

micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_RELU6, 
    tflite::ops::micro::Register_RELU6()
);

which essentially registers all the operators used in my custom model. Following the instructions provided for the SparkFun Edge board, I managed to flash the application to the board, but when I run it, it outputs the following error:

Didn't find op for builtin opcode 'QUANTIZE' version '1'

Failed to get registration from op code  d

AllocateTensors() failed

I don't understand what I'm doing wrong, since the QUANTIZE operation does get registered with micro_op_resolver.AddBuiltin(...) (see the last code snippet).
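For context, the Quantize and Dequantize nodes described in step 2 implement a simple affine mapping between float32 and int8. A minimal sketch of that round trip in plain Python (the `scale` and `zero_point` values below are made up for illustration; real models store per-tensor or per-channel values chosen by the converter):

```python
# Affine quantization as used by TFLite's Quantize/Dequantize nodes:
#   q = round(x / scale) + zero_point   (clamped to the int8 range)
#   x ≈ (q - zero_point) * scale

def quantize(x, scale, zero_point):
    """Map a float value to an int8 code."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Map an int8 code back to an approximate float value."""
    return (q - zero_point) * scale

# Example parameters (hypothetical):
scale, zero_point = 0.05, -10

x = 1.0
q = quantize(x, scale, zero_point)
x_back = dequantize(q, scale, zero_point)
print(q, x_back)  # → 10 1.0
```

The round trip loses at most about `scale / 2` of precision per value, which is why a well-chosen representative dataset (used by the converter to pick scales) matters for accuracy.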


Solution

  • I have the same error with the latest version of TensorFlow (and tf-nightly). Another user and I have already opened issues on TensorFlow's GitHub here and here.

    I also got the same error as you for 'QUANTIZE' when using 'MicroOpResolver'. You can try using 'AllOpsResolver' instead, but you're likely to hit the same problem, as mentioned in the issues.

    static tflite::ops::micro::AllOpsResolver resolver;
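    As an aside (not from the original answer): in TensorFlow 2.3 and later, the converter can be asked for true int8 inputs and outputs, which avoids the float32 Quantize/Dequantize boundary nodes the question describes. A configuration sketch, assuming a Keras `model` and a `representative_dataset` generator already exist (both are placeholders here, not runnable as-is):

    ```python
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Require full integer quantization of all ops:
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    # Request int8 at the model boundary (supported from TF 2.3 onwards):
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    tflite_model = converter.convert()
    ```

    With these settings the exported model accepts and returns int8 tensors directly, so QUANTIZE/DEQUANTIZE no longer need to be registered on the microcontroller side.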