Good afternoon,
I am trying to implement a transformer network on a DE10-nano board (2x Cortex-A9, armv7-a), using TensorFlow Lite for Microcontrollers (TFLM).
I trained the network in Python and converted it to the .tflite format. When doing so, I get the following warning:
"TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexEinsum"
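For reference, the conversion step looks roughly like this (a minimal sketch, not my exact script; the SELECT_TF_OPS line is what lets the converter keep Einsum as a Flex op instead of rejecting the model, and it is also what triggers the warning above):

```python
import tensorflow as tf

# Sketch of the conversion, assuming a Keras model object named `model`.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops -> FlexEinsum + Flex warning
]
tflite_model = converter.convert()

with open("transformer.tflite", "wb") as f:  # hypothetical file name
    f.write(tflite_model)
```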
And when I deploy the model on the board using an AllOpsResolver, I get the error:
Failed to get registration from op code CUSTOM
When I inspect the operations that my network uses, FlexEinsum is indeed part of the list:
=== TFLite ModelAnalyzer ===
Subgraph#0 main(T#0, T#1) -> [T#79]
Op#0 CAST(T#1) -> [T#21]
Op#1 GATHER(T#9, T#21) -> [T#22]
Op#2 MUL(T#22, T#18) -> [T#23]
Op#3 FlexEinsum(T#23, T#5) -> [T#24]
Op#4 ADD(T#24, T#3) -> [T#25]
Op#5 FlexEinsum(T#23, T#4) -> [T#26]
Op#6 ADD(T#26, T#3) -> [T#27]
Op#7 MUL(T#27, T#11) -> [T#28]
Op#8 FlexEinsum(T#25, T#28) -> [T#29]
Op#9 SOFTMAX(T#29) -> [T#30]
Op#10 FlexEinsum(T#23, T#2) -> [T#31]
Op#11 ADD(T#31, T#3) -> [T#32]
Op#12 FlexEinsum(T#30, T#32) -> [T#33]
Op#13 FlexEinsum(T#33, T#6) -> [T#34]
Op#14 ADD(T#34, T#7) -> [T#35]
...
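(The listing above is the output of the TFLite model analyzer; to reproduce it, something like this is enough, with the file name being just an example:)

```python
import tensorflow as tf

# Prints the "=== TFLite ModelAnalyzer ===" report shown above,
# including the FlexEinsum ops.
tf.lite.experimental.Analyzer.analyze(model_path="transformer.tflite")
```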
**From my understanding, some operations are not yet supported by TFLM, and I would need to directly use the einsum op as implemented in TF. My question is: how do I do that?** From the warning TensorFlow prints when converting the model, it seems I would need to 'link the Flex delegate', but I don't understand what this means...
To give more context, I am using the Altera bare-metal GCC toolchain on DS-5 to compile and deploy on the board. To include TFLM in my project, I generated the 'hello world' project and then used the generated 'tensorflow' and 'third_party' folders as a library in my own project. This works very well until Flex ops show up...
Does anybody have solutions or ideas about this problem?
Have a great day!
The Flex delegate is not available in TFLM: https://groups.google.com/a/tensorflow.org/g/micro/c/b4v-84f8J5Q. Flex ops are serialized as custom ops in the .tflite file, which is why the AllOpsResolver (which only registers TFLM's built-in kernels) fails with the CUSTOM op code error. The thing to do is to modify the network so that it avoids Flex ops entirely (here, avoid tf.einsum).
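To make that concrete, the einsum-based attention in a transformer can usually be rewritten with tf.matmul (and Dense layers for the Q/K/V projections), which the converter lowers to the BATCH_MATMUL, FULLY_CONNECTED and SOFTMAX builtins instead of FlexEinsum. A rough sketch of the idea, not of your exact model:

```python
import tensorflow as tf

def attention_without_einsum(q, k, v):
    """Scaled dot-product attention written without tf.einsum.

    q, k, v: tensors of shape [batch, heads, seq_len, depth].
    Converts to BATCH_MATMUL + SOFTMAX instead of FlexEinsum.
    """
    depth = tf.cast(tf.shape(q)[-1], q.dtype)
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(depth)  # [b, h, seq, seq]
    weights = tf.nn.softmax(scores, axis=-1)
    return tf.matmul(weights, v)                                 # [b, h, seq, depth]
```

Note that keras.layers.MultiHeadAttention (and EinsumDense) use einsum internally, so you may need to build the attention block yourself from Dense layers and tf.matmul, and then check that the resulting ops are in the TFLM kernel list for your version.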
Also, some ops that exist in TFLite are not (or not fully) supported in TFLM; if you run into that case, follow this guide: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/docs/porting_reference_ops.md