Tags: python, tensorflow, python-imaging-library, google-coral

How do I use a custom TF Lite model with 2 classes on a Raspberry Pi with a Coral?


Two days ago, I created a custom TFLite model from an image dataset. Its accuracy is 97.4% and it has only 2 classes (person, flower).

I converted the model so I can use it on my Raspberry Pi with a Google Coral Edge TPU.

At the moment I'm stuck on some problems, and the Google Coral documentation isn't really helping me.

Language: Python3

Libraries

  • Keras
  • Tensorflow
  • Pillow
  • Picamera
  • Numpy
  • EdgeTPU-Engine

Project tree:

    model (sub-folder)
        model.tflite
        labels.txt
    video_detection.py

This is the Python code (it's actually taken from the documentation):

import argparse
import io
import time
import numpy as np
import picamera
import edgetpu.classification.engine
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
      '--model', help='File path of Tflite model.', required=True)
    parser.add_argument(
      '--label', help='File path of label file.', required=True)
    args = parser.parse_args()
    with open(args.label, 'r', encoding="utf-8") as f:
        pairs = (l.strip().split(maxsplit=2) for l in f.readlines())
        labels = dict((int(k), v) for k, v in pairs)
    engine = edgetpu.classification.engine.ClassificationEngine(args.model)
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.framerate = 30
        _, width, height, channels = engine.get_input_tensor_shape()
        camera.start_preview()
        try:
            stream = io.BytesIO()
            for foo in camera.capture_continuous(stream,
                                                 format='rgb',
                                                 use_video_port=True,
                                                 resize=(width, height)):
                stream.truncate()
                stream.seek(0)
                input = np.frombuffer(stream.getvalue(), dtype=np.uint8)
                start_ms = time.time()
                results = engine.ClassifyWithInputTensor(input, top_k=1)
                elapsed_ms = time.time() - start_ms
                if results:
                    camera.annotate_text = "%s %.2f\n%.2fms" % (
                        labels[results[0][0]], results[0][1], elapsed_ms*1000.0)
        finally:
            camera.stop_preview()
if __name__ == '__main__':
    main()

How to run the script

python3 video_detection.py --model model/model.tflite --label model/labels.txt

Error

Traceback (most recent call last):
  File "video_detection.py", line 41, in <module>
    main()
  File "video_detection.py", line 16, in main
    labels = dict((int(k), v) for k, v in pairs)
  File "video_detection.py", line 16, in <genexpr>
    labels = dict((int(k), v) for k, v in pairs)
ValueError: not enough values to unpack (expected 2, got 1)
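The failure can be reproduced in isolation: when a line of the label file splits into a single token, it cannot be unpacked into `k, v`. In this sketch, `"flower"` stands in for a hypothetical malformed line in labels.txt:

```python
# Minimal reproduction of the error: "flower" stands in for a malformed
# line in labels.txt that contains only one token instead of "index label".
lines = ["0 person", "flower"]
pairs = (l.strip().split(maxsplit=2) for l in lines)
try:
    labels = dict((int(k), v) for k, v in pairs)
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```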

Right now it's very hard for me to integrate a custom model and use it with the Coral.

Documentation:

Thanks for reading, best regards

E.


Solution

  • The error is in the labels.txt file:

        labels = dict((int(k), v) for k, v in pairs)
        ValueError: not enough values to unpack (expected 2, got 1)

    It looks like you have at least one line that contains only one value instead of two. Each line of labels.txt must hold an index and a label separated by whitespace, e.g. `0 person` — a blank line or a line with a single token makes the `k, v` unpack fail.
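A more defensive way to read the label file is to skip blank or malformed lines instead of crashing. This is a sketch of a hypothetical `load_labels` helper (not part of the Coral API) that you could drop in place of the inline parsing:

```python
def load_labels(path):
    """Parse a label file, skipping blank or malformed lines.

    Expected format per line: "<index> <label>", e.g. "0 person".
    """
    labels = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split(maxsplit=1)
            if len(parts) != 2:
                continue  # skip blank lines or lines with a single token
            index, label = parts
            labels[int(index)] = label
    return labels
```

Note it uses `maxsplit=1`, so a label containing spaces (e.g. `1 red flower`) stays in one piece rather than producing a third value.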