I created a custom activation function in Keras that reduces the channel size by half (Max-Feature-Map activation).
Here's what part of the code looks like:
import tensorflow as tf
import keras
from keras.utils.generic_utils import get_custom_objects
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, Activation

def MyMFM(x):
    # Split the channel dimension in half and take the elementwise maximum
    length = int(x.shape[-1])
    half = length // 2
    x1 = x[:, :, :, 0:half]
    x2 = x[:, :, :, half:length]
    return tf.maximum(x1, x2)

get_custom_objects().update({'MyMFM': Activation(MyMFM)})

model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), padding='same', input_shape=(513, 211, 1)))
model.add(Activation(MyMFM))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(48, kernel_size=(1, 1), strides=(1, 1), padding='same'))
When I compile this code, I get the following error:
number of input channels does not match corresponding dimension of filter, 16 != 32
The error comes from the last line. After the activation, the number of channels is reduced from 32 to 16, but the next layer still assumes 32 channels (the number of filters in the first Conv2D layer), not 16. I tried adding an input_shape argument to the second convolution layer to declare the input shape as (513, 211, 16), but that gave me the same error. What should I do to pass the shape of the tensor to the next layer after the activation?
Thank you
Based on this documentation, you can see that the Keras engine automatically sets the output shape of an Activation layer to be the same as its input shape, so Keras never registers the halved channel dimension. Use a Lambda layer instead: it infers its output shape from the function it wraps.
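A minimal sketch of the Lambda-based fix, written here against tf.keras (the standalone keras imports from the question work the same way; the function name my_mfm is just illustrative):

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Lambda

def my_mfm(x):
    # Max-Feature-Map: split the channel axis in half, take elementwise max
    half = int(x.shape[-1]) // 2
    return tf.maximum(x[..., :half], x[..., half:])

model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), padding='same',
                 input_shape=(513, 211, 1)))
# Lambda (unlike Activation) infers its output shape from the wrapped
# function, so the next layer correctly sees 16 channels instead of 32
model.add(Lambda(my_mfm))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(48, kernel_size=(1, 1), strides=(1, 1), padding='same'))
```

With this change the model builds without the channel mismatch: the Lambda layer outputs (None, 513, 211, 16), and the final Conv2D receives the correct 16-channel input.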