tensorflow, google-coral

Is it possible to quantize a Tensorflow Lite model to 8-bit weights without the original HDF5 file?


I'm trying to compile a tflite model with the Edge TPU compiler to make it compatible with Google's Coral USB Accelerator, but when I run edgetpu_compiler the_model.tflite I get a "Model not quantized" error.

I then wanted to quantize the tflite model to an 8-bit integer format, but I don't have the model's original .h5 file.

Is it possible to quantize a tflite-converted model to an 8-bit format?


Solution

  • @garys unfortunately, TensorFlow doesn't have an API to quantize an already-converted float .tflite model. For post-training quantization, the only APIs it provides take a full TensorFlow model (.pb, HDF5/.h5, SavedModel, ...) and convert it to .tflite. Quantization happens during that conversion, so to my knowledge there isn't a way to quantize a .tflite file after the fact.
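
For reference, here's a minimal sketch of what post-training full-integer quantization looks like when you do have the original full model (SavedModel or .h5). The model path, input shape, and representative data below are placeholders, not values from the question:

```python
import numpy as np
import tensorflow as tf

# Load the original full model -- quantization happens during conversion,
# so the SavedModel (or Keras .h5) is required.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")

def representative_dataset():
    # Calibration samples; replace with real inputs matching the model's
    # expected shape and dtype (1x224x224x3 float32 is just an example).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full 8-bit integer ops, which the Edge TPU compiler requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_quant_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
```

The resulting model_quant.tflite can then be passed to edgetpu_compiler. Without the original model, though, this conversion step isn't available.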