I am working on neural networks with Keras and the TensorFlow backend. Usually they are built from convolutional and max-pooling layers, as in VGG16 for example. For my network I would like to replace the max-pooling layers with min-pooling layers, but the layer should ignore zeros when pooling.
For example: for [[0, 16], [72, 0]], the 2x2 pooling layer should return 16, not 72 (max pooling) and not 0 (plain min pooling).
Is there an easy way in Keras to write such a custom layer?
I guess that min pooling is possible via
min_x = -K.pool2d(-x, pool_size=(2, 2), strides=(2, 2))
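Wrapped into a model, that would look roughly like this (just a sketch, assuming the usual backend import as K and a Lambda layer; the function and variable names are only illustrative):

from keras import backend as K
from keras.layers import Lambda

def min_pool2d_plain(x):
    # min over each 2x2 window expressed through max pooling: min(x) = -max(-x)
    return -K.pool2d(-x, pool_size=(2, 2), strides=(2, 2), pool_mode='max')

# e.g. in a functional model: pooled = Lambda(min_pool2d_plain)(conv_output)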
Now it should additionally ignore the zeros when taking the minimum. Thanks for any help!
One possible solution I found is the following. It's kind of a work-around for min pooling: before min pooling, a high value is added to all the zeros, and after min pooling that high value is subtracted again. I am still looking for a better solution, as I don't think this is the best way, especially regarding performance.
from keras import backend as K

def min_pool2d(x):
    # value used to replace all zeros; guaranteed to be larger than any entry in x
    max_val = K.max(x) + 1
    # replace all 0s with that very high number
    is_zero = max_val * K.cast(K.equal(x, 0), dtype=K.floatx())
    x = is_zero + x
    # min pooling via negated max pooling, with the 0s now out of the way
    min_x = -K.pool2d(-x, pool_size=(2, 2), strides=(2, 2))
    # windows that contained only zeros now hold max_val; map them back to 0
    is_result_zero = max_val * K.cast(K.equal(min_x, max_val), dtype=K.floatx())
    min_x = min_x - is_result_zero
    return min_x
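For completeness, this is roughly how I use and sanity-check it (a sketch assuming the min_pool2d function above; the 2x2 input shape and names are only for the toy example from the question):

import numpy as np
from keras.layers import Input, Lambda
from keras.models import Model

# wrap the custom pooling function in a Lambda layer
inp = Input(shape=(2, 2, 1))
out = Lambda(min_pool2d)(inp)
model = Model(inp, out)

# the [[0, 16], [72, 0]] patch from above: expected result is 16, not 0 and not 72
patch = np.array([[0., 16.], [72., 0.]], dtype='float32').reshape(1, 2, 2, 1)
print(model.predict(patch))  # -> [[[[16.]]]]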