
How to increase the number of Lambda layer in CNN autoencoder?


I am trying to customize a CNN autoencoder like the one below, but I do not understand the meaning of the Lambda layers. What does Lambda(lambda x: x[:,0:1]) mean, and how can I add one more Lambda layer (i.e., val3) in this case?

input_img = Input(shape=(384, 192, 2))
## Encoder
x = Conv2D(16, (3, 3), activation='tanh', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Reshape([6*3*4])(x) ## Flatten()
encoded = Dense(2,activation='tanh')(x)
## Two variables
val1= Lambda(lambda x: x[:,0:1])(encoded)
val2= Lambda(lambda x: x[:,1:2])(encoded)
## Decoder 1
.....

Solution

  • From this blog:

    Let's say that after the dense layer named dense_layer_3 we'd like to do some sort of operation on the tensor, such as adding the value 2 to each element. How can we do that? None of the existing layers does this, so we'll have to build a new layer ourselves.

    So a Lambda layer is used to perform an operation on a tensor while still being recognized as a Layer in the model. For example, say I have the model:

    layer1 = Dense(...)(x)
    layer2 = Dense(...)(layer1)
    
    model.summary() # will have layer1 and layer2
    

    Now I want to add 2 to the output of layer1 before feeding it into layer2. Normally I would do:

    layer1 = Dense(...)(x)
    x = layer1 + 2
    layer2 = Dense(...)(x)
    
    model.summary() # will miss the x = layer1 + 2 operation
    

    But x = layer1 + 2 will not be recognized as a Layer in the model. We know it exists because we wrote it, but anyone else inspecting the model has no way to know, which makes debugging hard if something goes wrong. So we use Lambda:

    layer1 = Dense(...)(x)
    lamb = Lambda(lambda x: x + 2)(layer1)
    layer2 = Dense(...)(lamb)
    
    model.summary() # will show the Lambda layer
    

    Regarding Lambda(lambda x: x[:,0:1]), it is a Lambda layer that slices the tensor. x[:, 0:1] means "take all rows (samples in the batch), but only the column at index 0" — the end index of a slice is exclusive, and writing 0:1 instead of plain 0 keeps the second dimension, so the output has shape (batch, 1). Likewise, x[:, 1:2] selects the column at index 1.
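
    You can check the slicing semantics outside Keras with plain NumPy. A minimal sketch (the `encoded` values below are made-up for illustration, not taken from the model); note that to get a val3 at all, the bottleneck must have at least 3 columns, i.e. the encoder's Dense(2) would need to become Dense(3):

    ```python
    import numpy as np

    # Pretend this is the output of a Dense(3) bottleneck for a batch of 2 samples.
    encoded = np.array([[0.1, 0.2, 0.3],
                        [0.4, 0.5, 0.6]])

    val1 = encoded[:, 0:1]  # column 0 only, shape (2, 1)
    val2 = encoded[:, 1:2]  # column 1 only, shape (2, 1)
    val3 = encoded[:, 2:3]  # column 2 only, shape (2, 1)

    print(val1.shape)  # (2, 1) -- the slice keeps the second dimension
    print(val3)        # [[0.3]
                       #  [0.6]]
    ```

    In the Keras model itself, the analogous change would be `encoded = Dense(3, activation='tanh')(x)` followed by `val3 = Lambda(lambda x: x[:,2:3])(encoded)`, mirroring val1 and val2 (each decoder branch then reconstructs from its own slice).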