The output of my previous layer has the shape (None, 30, 600). I want to multiply each row of this matrix by a different (600, 600) matrix, or equivalently multiply it by a (30, 600, 600) 3D weight tensor. This can be achieved by applying a different dense layer to each row. I tried the TimeDistributed wrapper, but that applies the same dense layer to every row. I have also tried a Lambda layer like so:
Lambda(lambda x: tf.stack(x, axis=1))(
Lambda(lambda x: [Dense(600)(each) for each in tf.unstack(x, axis=1)])(prev_layer_output)
)
This seemed to solve the problem and I was able to train the model. But I noticed that model.summary() doesn't recognize these dense layers, nor are they reflected in the total count of Trainable params. I am also unable to restore their weights when I load the model, so the whole training is wasted. How do I fix this? How do I apply a different dense layer to each row of a matrix?
You can use several layers instead of wrapping everything into a single Lambda layer:
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

x = Input((30, 600))
unstacked = Lambda(lambda t: tf.unstack(t, axis=1))(x)       # 30 tensors of shape (None, 600)
dense_outputs = [Dense(600)(t) for t in unstacked]           # a different kernel per row
merged = Lambda(lambda t: K.stack(t, axis=1))(dense_outputs) # back to (None, 30, 600)
model = Model(x, merged)
Because each Dense layer is now a real node in the Keras graph (it is called on Keras tensors rather than on raw TensorFlow tensors inside a Lambda), its weights are tracked, counted, and saved with the model. You can now see all 30 Dense(600) layers in model.summary():
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 30, 600) 0
__________________________________________________________________________________________________
lambda_1 (Lambda) [(None, 600), (None, 0 input_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 600) 360600 lambda_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 600) 360600 lambda_1[0][1]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 600) 360600 lambda_1[0][2]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 600) 360600 lambda_1[0][3]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 600) 360600 lambda_1[0][4]
__________________________________________________________________________________________________
dense_6 (Dense) (None, 600) 360600 lambda_1[0][5]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 600) 360600 lambda_1[0][6]
__________________________________________________________________________________________________
dense_8 (Dense) (None, 600) 360600 lambda_1[0][7]
__________________________________________________________________________________________________
dense_9 (Dense) (None, 600) 360600 lambda_1[0][8]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 600) 360600 lambda_1[0][9]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 600) 360600 lambda_1[0][10]
__________________________________________________________________________________________________
dense_12 (Dense) (None, 600) 360600 lambda_1[0][11]
__________________________________________________________________________________________________
dense_13 (Dense) (None, 600) 360600 lambda_1[0][12]
__________________________________________________________________________________________________
dense_14 (Dense) (None, 600) 360600 lambda_1[0][13]
__________________________________________________________________________________________________
dense_15 (Dense) (None, 600) 360600 lambda_1[0][14]
__________________________________________________________________________________________________
dense_16 (Dense) (None, 600) 360600 lambda_1[0][15]
__________________________________________________________________________________________________
dense_17 (Dense) (None, 600) 360600 lambda_1[0][16]
__________________________________________________________________________________________________
dense_18 (Dense) (None, 600) 360600 lambda_1[0][17]
__________________________________________________________________________________________________
dense_19 (Dense) (None, 600) 360600 lambda_1[0][18]
__________________________________________________________________________________________________
dense_20 (Dense) (None, 600) 360600 lambda_1[0][19]
__________________________________________________________________________________________________
dense_21 (Dense) (None, 600) 360600 lambda_1[0][20]
__________________________________________________________________________________________________
dense_22 (Dense) (None, 600) 360600 lambda_1[0][21]
__________________________________________________________________________________________________
dense_23 (Dense) (None, 600) 360600 lambda_1[0][22]
__________________________________________________________________________________________________
dense_24 (Dense) (None, 600) 360600 lambda_1[0][23]
__________________________________________________________________________________________________
dense_25 (Dense) (None, 600) 360600 lambda_1[0][24]
__________________________________________________________________________________________________
dense_26 (Dense) (None, 600) 360600 lambda_1[0][25]
__________________________________________________________________________________________________
dense_27 (Dense) (None, 600) 360600 lambda_1[0][26]
__________________________________________________________________________________________________
dense_28 (Dense) (None, 600) 360600 lambda_1[0][27]
__________________________________________________________________________________________________
dense_29 (Dense) (None, 600) 360600 lambda_1[0][28]
__________________________________________________________________________________________________
dense_30 (Dense) (None, 600) 360600 lambda_1[0][29]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, 30, 600) 0 dense_1[0][0]
dense_2[0][0]
dense_3[0][0]
dense_4[0][0]
dense_5[0][0]
dense_6[0][0]
dense_7[0][0]
dense_8[0][0]
dense_9[0][0]
dense_10[0][0]
dense_11[0][0]
dense_12[0][0]
dense_13[0][0]
dense_14[0][0]
dense_15[0][0]
dense_16[0][0]
dense_17[0][0]
dense_18[0][0]
dense_19[0][0]
dense_20[0][0]
dense_21[0][0]
dense_22[0][0]
dense_23[0][0]
dense_24[0][0]
dense_25[0][0]
dense_26[0][0]
dense_27[0][0]
dense_28[0][0]
dense_29[0][0]
dense_30[0][0]
==================================================================================================
Total params: 10,818,000
Trainable params: 10,818,000
Non-trainable params: 0
__________________________________________________________________________________________________
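If you would rather have a single layer that literally owns the (30, 600, 600) 3D weight tensor described in the question, the same computation can be packed into a custom layer. The RowwiseDense class below is a hypothetical sketch of that approach (not a built-in Keras layer); it computes out[b, i, k] = sum_j x[b, i, j] * kernel[i, j, k] with one einsum:

from keras.layers import Layer

class RowwiseDense(Layer):
    """Sketch: one (600, 600) kernel per row, stored as a single 3D weight tensor."""
    def __init__(self, units, **kwargs):
        self.units = units
        super(RowwiseDense, self).__init__(**kwargs)

    def build(self, input_shape):
        rows, cols = input_shape[1], input_shape[2]  # 30, 600
        self.kernel = self.add_weight(name='kernel', shape=(rows, cols, self.units),
                                      initializer='glorot_uniform', trainable=True)
        self.bias = self.add_weight(name='bias', shape=(rows, self.units),
                                    initializer='zeros', trainable=True)
        super(RowwiseDense, self).build(input_shape)

    def call(self, inputs):
        # out[b, i, k] = sum_j inputs[b, i, j] * kernel[i, j, k]
        return tf.einsum('bij,ijk->bik', inputs, self.kernel) + self.bias

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1], self.units)

Model(x, RowwiseDense(600)(x)) should give the same 10,818,000 trainable parameters as the 30 separate Dense layers, just in a single summary row.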
EDIT: To verify that this model is learning:
import numpy as np

model.compile(loss='mse', optimizer='adam')
model.fit(np.random.rand(100, 30, 600), np.random.rand(100, 30, 600), epochs=10)
You should be able to see that the loss is decreasing:
Epoch 1/10
100/100 [==============================] - 1s 15ms/step - loss: 0.4725
Epoch 2/10
100/100 [==============================] - 0s 1ms/step - loss: 0.2211
Epoch 3/10
100/100 [==============================] - 0s 1ms/step - loss: 0.2405
Epoch 4/10
100/100 [==============================] - 0s 1ms/step - loss: 0.2013
Epoch 5/10
100/100 [==============================] - 0s 1ms/step - loss: 0.1771
Epoch 6/10
100/100 [==============================] - 0s 1ms/step - loss: 0.1676
Epoch 7/10
100/100 [==============================] - 0s 1ms/step - loss: 0.1568
Epoch 8/10
100/100 [==============================] - 0s 1ms/step - loss: 0.1473
Epoch 9/10
100/100 [==============================] - 0s 1ms/step - loss: 0.1400
Epoch 10/10
100/100 [==============================] - 0s 1ms/step - loss: 0.1343
Also, you can verify that the weights indeed get updated by comparing the values before and after model fitting:
w0 = model.get_weights()
model.fit(np.random.rand(100, 30, 600), np.random.rand(100, 30, 600), epochs=10)
w1 = model.get_weights()

# True only if every weight array changed during training
print(not any(np.allclose(x0, x1) for x0, x1 in zip(w0, w1)))
# => True
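Since the Dense layers are now real nodes in the Keras graph, their weights are also saved and restored along with the model. A quick round-trip check, assuming the model-building code from above is wrapped in a build_model() helper (the file name is arbitrary):

def build_model():
    x = Input((30, 600))
    unstacked = Lambda(lambda t: tf.unstack(t, axis=1))(x)
    dense_outputs = [Dense(600)(t) for t in unstacked]
    merged = Lambda(lambda t: K.stack(t, axis=1))(dense_outputs)
    return Model(x, merged)

model.save_weights('row_dense.h5')
restored = build_model()
restored.load_weights('row_dense.h5')  # matches layers by topological order

test = np.random.rand(1, 30, 600)
print(np.allclose(model.predict(test), restored.predict(test)))

If the weights survived the round trip, this prints True.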