I would like to add activation regularization in tensorflow.keras to a pretrained network, using a loop over its layers.
If I want to regularize weights or biases, I can do:
from tensorflow.keras import layers, regularizers
from tensorflow.keras.layers import DepthwiseConv2D

l1 = 0.001
l2 = 0.001
for layer in model.layers:
    # DepthwiseConv2D subclasses Conv2D, so it must be checked first
    if isinstance(layer, DepthwiseConv2D):
        layer.add_loss(regularizers.l1_l2(l1, l2)(layer.depthwise_kernel))
    elif isinstance(layer, (layers.Conv2D, layers.Dense)):
        layer.add_loss(regularizers.l1_l2(l1, l2)(layer.kernel))
    if hasattr(layer, 'bias_regularizer') and layer.use_bias:
        layer.add_loss(regularizers.l1_l2(l1, l2)(layer.bias))
As far as I understand and have tested, this works.
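As a quick sanity check, the penalties registered by add_loss should appear in model.losses, one tensor per call:

print(len(model.losses))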
However, it is not clear to me how to do the same for activation regularization. Specifically, I want to add the OUTPUT of each Activation layer to the loss.
I guess I should do something like:
for layer in model.layers:
    if isinstance(layer, Activation):
        layer.add_loss(regularizers.l1_l2(l1, l2)(layer.XXX))
But it is not clear to me what should replace XXX in the snippet above.
Thanks in advance for your help!
Replace XXX with output: layer.output gives the symbolic output tensor of the Activation layer.
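A minimal sketch of the full loop, assuming model is a functional (non-subclassed) model so that layer.output is defined, and that you compile the model afterwards so the new penalties enter the training objective; the optimizer and loss in the compile call are placeholders, and depending on your TF version you may need model.add_loss instead of layer.add_loss for input-dependent losses:

from tensorflow.keras import regularizers
from tensorflow.keras.layers import Activation

l1 = 0.001
l2 = 0.001
for layer in model.layers:
    if isinstance(layer, Activation):
        # layer.output is the symbolic tensor this layer produces in the
        # functional graph, so the penalty depends on the model's inputs.
        layer.add_loss(regularizers.l1_l2(l1, l2)(layer.output))

# The penalties are collected in model.losses; (re)compile so they are
# added to the training loss.
model.compile(optimizer='adam', loss='categorical_crossentropy')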