I want to quantize a DenseNet model. I am using TensorFlow 2.4.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.applications.DenseNet121(include_top=True, weights=None, input_tensor=None,
                                          input_shape=None, pooling=None, classes=1000)

quantize_model = tfmot.quantization.keras.quantize_model
model = quantize_model(model)
But I got the following message:
RuntimeError: Layer conv2_block1_0_bn:<class 'tensorflow.python.keras.layers.normalization_v2.BatchNormalization'> is not supported. You can quantize this layer by passing a tfmot.quantization.keras.QuantizeConfig instance to the quantize_annotate_layer API.
Is there a way I can do this? I cannot change the Keras code.
In your case you need to quantize the BatchNormalization layers separately.
The example code snippet below, from this Quantization TF Guide, uses DefaultDenseQuantizeConfig to handle this problem. Hope this guide helps you solve it.
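Note that DefaultDenseQuantizeConfig is not a built-in class; the guide defines it as a tfmot.quantization.keras.QuantizeConfig subclass. A sketch along the lines of the guide's definition (the 8-bit LastValueQuantizer/MovingAverageQuantizer settings mirror that example and may need adjusting for your case):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer

class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # Quantize the kernel weights to 8 bits, per-tensor and symmetric.
    def get_weights_and_quantizers(self, layer):
        return [(layer.kernel, LastValueQuantizer(
            num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]

    # Quantize the layer's activation output to 8 bits using a moving-average range.
    def get_activations_and_quantizers(self, layer):
        return [(layer.activation, MovingAverageQuantizer(
            num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]

    def set_quantize_weights(self, layer, quantize_weights):
        # One assignment per item returned by `get_weights_and_quantizers`, in order.
        layer.kernel = quantize_weights[0]

    def set_quantize_activations(self, layer, quantize_activations):
        # One assignment per item returned by `get_activations_and_quantizers`, in order.
        layer.activation = quantize_activations[0]

    def get_output_quantizers(self, layer):
        # No extra output quantization beyond the activation quantizer above.
        return []

    def get_config(self):
        return {}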
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class CustomLayer(tf.keras.layers.Dense):
    pass

model = quantize_annotate_model(tf.keras.Sequential([
    quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
    tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
        {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
         'CustomLayer': CustomLayer}):
    # Use `quantize_apply` to actually make the model quantization aware.
    quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
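Adapting this to your DenseNet121: you cannot edit the Keras application code, but you can rewrite the model graph with tf.keras.models.clone_model and wrap only the unsupported BatchNormalization layers in quantize_annotate_layer. The sketch below assumes it is acceptable to leave BatchNormalization unquantized; NoOpQuantizeConfig is a hypothetical pass-through config written for this answer, not something shipped with tfmot.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

# Hypothetical pass-through config: registers no weight or activation quantizers,
# so quantize_apply accepts BatchNormalization instead of raising RuntimeError.
class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    def get_weights_and_quantizers(self, layer):
        return []

    def get_activations_and_quantizers(self, layer):
        return []

    def set_quantize_weights(self, layer, quantize_weights):
        pass

    def set_quantize_activations(self, layer, quantize_activations):
        pass

    def get_output_quantizers(self, layer):
        return []

    def get_config(self):
        return {}

def annotate_bn(layer):
    # Give only the BatchNormalization layers an explicit QuantizeConfig.
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        return quantize_annotate_layer(layer, quantize_config=NoOpQuantizeConfig())
    return layer

base_model = tf.keras.applications.DenseNet121(include_top=True, weights=None, classes=1000)

# clone_model lets us annotate layers without touching the Keras application code;
# quantize_annotate_model then marks the remaining (supported) layers for quantization.
annotated_model = quantize_annotate_model(
    tf.keras.models.clone_model(base_model, clone_function=annotate_bn))

with quantize_scope({'NoOpQuantizeConfig': NoOpQuantizeConfig}):
    quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)

quant_aware_model.summary()

If you do want the BatchNormalization parameters quantized rather than skipped, return the layer's gamma and beta with suitable quantizers from get_weights_and_quantizers, in the same way DefaultDenseQuantizeConfig handles kernel.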