Tags: tensorflow, conv-neural-network, tensorflow-lite, nvidia-jetson, google-coral

Using my own built convolutional neural network classifier in Google Coral Devboard and Jetson Nano


I've been reading a lot about the Jetson Nano and the Google Coral Dev Board, and in most of the documentation and papers I've read, inference and deployment are done using prebuilt convolutional neural networks such as AlexNet, Inception, MobileNet, and other networks used for image classification. From what I understand, these boards require that the neural network be converted to a TensorFlow model, or to whatever format they accept, before running inference.

What I would like to know is: for both the Jetson Nano and the Google Coral Dev Board, can I build my own convolutional neural network, one that has nothing to do with the networks exemplified in the documentation, and deploy it to those boards?


Solution

  • Yes. You can train your own convolutional neural network elsewhere (for example, on a desktop machine, outside the Jetson Nano) and copy the saved weights (matrices of floats) onto the Jetson Nano for inference. On the device, classification then amounts to running the forward pass, which is mostly matrix multiplication. Of course, you'll have to recreate the exact same model architecture on the device so that the saved weights can be loaded into it for inference.
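A minimal sketch of this workflow in Keras, assuming a small example architecture and the file name `my_cnn.weights.h5` (both chosen here for illustration). The key point is that the same `build_model()` definition must exist on both the training machine and the device, so the saved weights slot into an identical architecture:

```python
import numpy as np
from tensorflow import keras

def build_model():
    # The identical architecture must be defined on both the training
    # machine and the device (Jetson Nano / Coral side).
    return keras.Sequential([
        keras.layers.Input(shape=(32, 32, 3)),
        keras.layers.Conv2D(8, 3, activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(10, activation="softmax"),
    ])

# On the training machine: train (omitted here), then save only the weights.
model = build_model()
model.save_weights("my_cnn.weights.h5")

# On the device: rebuild the same model and load the saved weights.
deployed = build_model()
deployed.load_weights("my_cnn.weights.h5")

# Inference is now just the forward pass of the duplicated model.
x = np.zeros((1, 32, 32, 3), dtype="float32")
probs = deployed.predict(x, verbose=0)
print(probs.shape)  # one softmax vector of 10 class probabilities
```

For the Coral Dev Board specifically, you would additionally convert the trained model to a quantized TensorFlow Lite model and compile it for the Edge TPU; the save-weights/rebuild pattern above covers the plain Jetson Nano case.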