Tags: python, tensorflow, tensorflow-lite, google-coral, edge-tpu

Converting an SSD object detection model to TFLite and quantizing it from float to uint8 for the EdgeTPU


I am having problems converting an SSD object detection model into a uint8 TFLite model for the EdgeTPU.

I have been searching different forums, Stack Overflow threads and GitHub issues, and I think I am following the right steps. Something must be wrong in my Jupyter notebook, since I can't achieve my goal.

I am sharing my steps below, as explained in a Jupyter notebook. I think it will be clearer.

#!/usr/bin/env python
# coding: utf-8

Set-up

This step clones the repository. If you have already done it before, you can skip it.

import os
import pathlib

# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
  while "models" in pathlib.Path.cwd().parts:
    os.chdir('..')
elif not pathlib.Path('models').exists():
  !git clone --depth 1 https://github.com/tensorflow/models

Imports

Required step: these are just the imports.

import matplotlib
import matplotlib.pyplot as plt
import pathlib
import os
import random
import io
import imageio
import glob
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage

import tensorflow as tf
import tensorflow_datasets as tfds


from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
#from object_detection.utils import colab_utils
from object_detection.builders import model_builder

%matplotlib inline

Downloading a friendly model

For TFLite it is recommended to use SSD networks. I have downloaded the following object detection model, which works with 320x320 images.
# Download the checkpoint and put it into models/research/object_detection/test_data/

!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
!tar -xf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
!if [ -d "models/research/object_detection/test_data/checkpoint" ]; then rm -Rf models/research/object_detection/test_data/checkpoint; fi
!mkdir models/research/object_detection/test_data/checkpoint
!mv ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/checkpoint models/research/object_detection/test_data/

List of the strings that are used to add the correct label to each box.

PATH_TO_LABELS = '/home/jose/codeWorkspace-2.4.1/tf_2.4.1/models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

Export and run with TFLite

Model conversion

In this step I convert the saved model (.pb) to .tflite:

!tflite_convert --saved_model_dir=/home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/saved_model --output_file=/home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model.tflite

Model Quantization (From float to uint8)

Once the model is converted, I need to quantize it. The original model takes a float tensor as input. Since I want to run it on an Edge TPU, I need the input and output tensors to be uint8.

Generating a calibration data set.

def representative_dataset_gen():
    folder = "/home/jose/codeWorkspace-2.4.1/tf_2.4.1/images_ssd_mb2_2"
    image_size = 320
    raw_test_data = []

    files = glob.glob(folder+'/*.jpeg')
    for file in files:
        image = Image.open(file)
        image = image.convert("RGB")
        image = image.resize((image_size, image_size))
        # Normalize the image to [-1, 1], the range the float model expects
        image = (2.0 / 255.0) * np.float32(image) - 1.0
        #image = np.asarray(image).astype(np.float32)
        image = image[np.newaxis,:,:,:]
        raw_test_data.append(image)

    for data in raw_test_data:
        yield [data]

(DO NOT RUN THIS ONE). This is the same as the step above, but with random values.

If you don't have a dataset, you can also use randomly generated values, as if they were images. This is the code I used to do so:
####THIS IS A RANDOM-GENERATED DATASET#### 
def representative_dataset_gen():
    for _ in range(320):
      data = np.random.rand(1, 320, 320, 3)
      yield [data.astype(np.float32)]

Calling the converter

converter = tf.lite.TFLiteConverter.from_saved_model('/home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.allow_custom_ops = True
converter.representative_dataset = representative_dataset_gen
tflite_model = converter.convert()

WARNINGS:

The conversion step returns the following warnings.

WARNING:absl:For model inputs containing unsupported operations which cannot be quantized, the inference_input_type attribute will default to the original type.
WARNING:absl:For model outputs containing unsupported operations which cannot be quantized, the inference_output_type attribute will default to the original type.

This makes me think the conversion is not correct.
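To see which tensors actually stayed in float after this conversion, one option (a minimal sketch, not part of the original notebook) is to load the converted bytes into a TFLite interpreter and check the tensor dtypes:

import numpy as np
import tensorflow as tf

# Minimal sketch: list the tensors that remained float32 after quantization.
# `tflite_model` is the bytes object returned by converter.convert() above.
check_interpreter = tf.lite.Interpreter(model_content=tflite_model)
check_interpreter.allocate_tensors()

for detail in check_interpreter.get_tensor_details():
    if detail['dtype'] == np.float32:
        print(detail['index'], detail['name'], detail['dtype'])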

Saving the model

with open('/home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model_full_integer_quant.tflite', 'wb') as w:
    w.write(tflite_model)
print("tflite convert complete! - /home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model_full_integer_quant.tflite")

Tests

Test 1: Get TensorFlow version

I read that it is recommended to use the nightly build for this. In my case, the version is 2.6.0.

print(tf.version.VERSION)

Test 2: Get input/output tensor details

interpreter = tf.lite.Interpreter(model_path="/home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model_full_integer_quant.tflite")
interpreter.allocate_tensors()

print(interpreter.get_input_details())
print("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@")
print(interpreter.get_output_details())

Test 2 Results:

I get the following info:

[{'name': 'serving_default_input:0', 'index': 0, 'shape': array([ 1, 320, 320, 3], dtype=int32), 'shape_signature': array([ 1, 320, 320, 3], dtype=int32), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.007843137718737125, 127), 'quantization_parameters': {'scales': array([0.00784314], dtype=float32), 'zero_points': array([127], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

[{'name': 'StatefulPartitionedCall:31', 'index': 377, 'shape': array([ 1, 10, 4], dtype=int32), 'shape_signature': array([ 1, 10, 4], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:32', 'index': 378, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:33', 'index': 379, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:34', 'index': 380, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

So I think it is not quantizing it correctly.
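Before compiling for the Edge TPU, it can also help to sanity-check the quantized model on the CPU. The following is a minimal sketch (not part of the original notebook; the test image path is hypothetical) that feeds one uint8 image through the TFLite interpreter and prints the raw outputs:

import numpy as np
import tensorflow as tf
from PIL import Image

# Minimal sketch: run the quantized model on the CPU with one test image.
# "test.jpeg" is a hypothetical path; the model expects 320x320 RGB uint8 input.
interpreter = tf.lite.Interpreter(
    model_path="/home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model_full_integer_quant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

image = Image.open("test.jpeg").convert("RGB").resize((320, 320))
input_data = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# For this SSD model the outputs correspond to the detection boxes, classes,
# scores and the number of detections produced by the post-processing op.
for detail in output_details:
    print(detail['name'], interpreter.get_tensor(detail['index']))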

Converting the generated model to EdgeTPU

!edgetpu_compiler -s /home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model_full_integer_quant.tflite

jose@jose-VirtualBox:~/python-envs$ edgetpu_compiler -s /home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model_full_integer_quant.tflite
Edge TPU Compiler version 15.0.340273435

Model compiled successfully in 1136 ms.

Input model: /home/jose/codeWorkspace-2.4.1/tf_2.4.1/tflite/model_full_integer_quant.tflite
Input size: 3.70MiB
Output model: model_full_integer_quant_edgetpu.tflite
Output size: 4.21MiB
On-chip memory used for caching model parameters: 3.42MiB
On-chip memory remaining for caching model parameters: 4.31MiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 162
Operation log: model_full_integer_quant_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 112
Number of operations that will run on CPU: 50

Operator           Count  Status
LOGISTIC           1      Operation is otherwise supported, but not mapped due to some unspecified limitation
DEPTHWISE_CONV_2D  14     More than one subgraph is not supported
DEPTHWISE_CONV_2D  37     Mapped to Edge TPU
QUANTIZE           1      Mapped to Edge TPU
QUANTIZE           4      Operation is otherwise supported, but not mapped due to some unspecified limitation
CONV_2D            58     Mapped to Edge TPU
CONV_2D            14     More than one subgraph is not supported
DEQUANTIZE         1      Operation is working on an unsupported data type
DEQUANTIZE         1      Operation is otherwise supported, but not mapped due to some unspecified limitation
CUSTOM             1      Operation is working on an unsupported data type
ADD                2      More than one subgraph is not supported
ADD                10     Mapped to Edge TPU
CONCATENATION      1      Operation is otherwise supported, but not mapped due to some unspecified limitation
CONCATENATION      1      More than one subgraph is not supported
RESHAPE            2      Operation is otherwise supported, but not mapped due to some unspecified limitation
RESHAPE            6      Mapped to Edge TPU
RESHAPE            4      More than one subgraph is not supported
PACK               4      Tensor has unsupported rank (up to 3 innermost dimensions mapped)

The Jupyter notebook I prepared can be found at the following link: https://github.com/jagumiel/Artificial-Intelligence/blob/main/tensorflow-scripts/Step-by-step-explaining-problems.ipynb

Is there any step I am missing? Why is my conversion not producing the expected result?

Thank you very much in advance.


Solution

  • The process, as @JaesungChung answered, is done correctly.

    My problem was in the application that was running the .tflite model. I quantized my model output to uint8, so I had to rescale the obtained values to get the right results.

    E.g. I had 10 objects because I was requesting all detected objects with a score above 0.5. My results were not scaled, so a detected object's score could perfectly well be 104. I had to rescale that number by dividing it by 255.

    The same happened when drawing my results: the box coordinates came back in the same 0-255 scale, so I had to divide them by 255 and multiply by the image height and width. A minimal sketch of this rescaling follows below.
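As an illustration, here is a minimal post-processing sketch along these lines (not from the original post; the variable names, image size and the 0.5 threshold are hypothetical, and the division by 255 assumes the uint8 outputs are scaled as described above):

import numpy as np

# Size of the original input image (hypothetical values for the example).
IMAGE_HEIGHT, IMAGE_WIDTH = 480, 640

def rescale_detections(boxes_u8, scores_u8, score_threshold=0.5):
    """boxes_u8: uint8 array of shape (1, N, 4) as [ymin, xmin, ymax, xmax].
    scores_u8: uint8 array of shape (1, N)."""
    # Scores come out in the 0..255 range, so divide by 255 to get 0..1.
    scores = scores_u8.astype(np.float32) / 255.0
    # Box coordinates are also in the 0..255 range; divide by 255 to get
    # normalized coordinates, then scale to pixel coordinates.
    boxes = boxes_u8.astype(np.float32) / 255.0
    boxes[..., [0, 2]] *= IMAGE_HEIGHT   # ymin, ymax
    boxes[..., [1, 3]] *= IMAGE_WIDTH    # xmin, xmax
    keep = scores[0] > score_threshold
    return boxes[0][keep], scores[0][keep]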