Tags: python, google-coral

How to run tflite on CPU only


I have a tflite model that runs on a Coral USB accelerator, but I want it to run on the CPU as well (as a fallback for passing some tests when the Coral USB device is not physically available).

I found this very similar question but the answers given are not useful.

My code looks like this:

import tflite_runtime.interpreter as tflite  # imports were missing in the original snippet
from typing import Dict


class CoralObjectDetector(object):

    def __init__(self, model_path: str, label_path: str):
        """
        CoralObjectDetector: pre-processes images and performs object detection.
        :param model_path: path to the .tflite file with the model
        :param label_path: path to the file with the labels
        """

        self.label_path = label_path
        self.model_path = model_path

        self.labels = dict()  # type: Dict[int, str]

        self.load_labels()

        # Load the Edge TPU delegate so the TPU-compiled model can run.
        self.interpreter = tflite.Interpreter(
            model_path,
            experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])

        # more code and operations

The model and labels are downloaded from here.

I would like to load an alternative version of the same model that lets me run it without the Coral USB accelerator (i.e. on the CPU only). My goal is something like the following:

class CoralObjectDetector(object):

    def __init__(self, model_path: str, label_path: str, run_in_coral: bool):
        """
        CoralObjectDetector: pre-processes images and performs object detection.
        :param model_path: path to the .tflite file with the model
        :param label_path: path to the file with labels
        :param run_in_coral: whether or not to run it on coral (use CPU otherwise)
        """

        self.label_path = label_path
        self.model_path = model_path

        self.labels = dict()  # type: Dict[int, str]

        self.load_labels()

        if run_in_coral:

            # TPU-compiled model: load the Edge TPU delegate.
            self.interpreter = tflite.Interpreter(
                model_path,
                experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])

        else:
            # I expect something like this (CPU-only, no delegate):
            self.interpreter = tflite.CPUInterpreter(model_path)
        # more code and operations

I'm not sure whether this is all I need, or whether the inference/prediction methods also require changes.


Solution

  • When you compile a model for the Coral Edge TPU, the compiler maps all the operations it can to a single TPU custom op (edgetpu-custom-op) - for example: Coral Model.

    This means the compiled model will only run on the TPU. That said, your TFLite interpreter can run CPU models too (all we did was add the experimental delegate to handle that edgetpu-custom-op). To run on the CPU, simply pass the CPU version of the model (the one from before it was compiled for the Edge TPU).
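    For instance, a helper along these lines (a minimal sketch, assuming the tflite_runtime package; make_interpreter is a hypothetical name) builds the interpreter with the Edge TPU delegate for the compiled model and a plain interpreter for the pre-compilation CPU model:

        import tflite_runtime.interpreter as tflite

        def make_interpreter(model_path: str, use_coral: bool) -> tflite.Interpreter:
            if use_coral:
                # Edge TPU-compiled model: needs the libedgetpu delegate.
                return tflite.Interpreter(
                    model_path,
                    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
            # Pre-compilation CPU model: runs on the stock interpreter.
            return tflite.Interpreter(model_path)

    The rest of the inference code (allocate_tensors, tensor set/get, invoke) is identical in both cases; only the model file and the delegate differ.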

    For your object detection, if you use one of the models we provide in test_data, you'll see that we ship both the CPU and the TPU version (for example, for MNv1 SSD there are CPU and TPU versions). If you plug either into any of our example code, you'll see that both work.

    I'd simply check whether a Coral TPU is attached when picking which model to use, as in the sketch below.
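    A minimal sketch of that check, assuming the tflite_runtime package and hypothetical model file names: load_delegate raises ValueError when the Edge TPU runtime or device is unavailable, so a try/except serves as the detection mechanism:

        import tflite_runtime.interpreter as tflite

        def edge_tpu_available() -> bool:
            # load_delegate raises ValueError if libedgetpu.so.1 cannot be
            # loaded or no Edge TPU device is attached.
            try:
                tflite.load_delegate('libedgetpu.so.1')
                return True
            except ValueError:
                return False

        # Hypothetical file names: use the compiled model only when a TPU is present.
        model_path = ('model_edgetpu.tflite' if edge_tpu_available()
                      else 'model.tflite')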