Model architecture
model = Sequential()
model.add(LSTM(50, batch_input_shape=(50, 10, 9), return_sequences=True))
model.add(LSTM(30, return_sequences=True, activation='tanh'))
model.add(LSTM(20, return_sequences=False, activation='tanh'))
model.add(Dense(9, activation='tanh'))
model.compile(loss='mean_squared_logarithmic_error',
              optimizer='adam', metrics=['accuracy'])
The summary looks like this:
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (50, 10, 50) 12000
_________________________________________________________________
lstm_2 (LSTM) (50, 10, 30) 9720
_________________________________________________________________
lstm_3 (LSTM) (50, 20) 4080
_________________________________________________________________
dense_1 (Dense) (50, 9) 189
=================================================================
Total params: 25,989
Trainable params: 25,989
Non-trainable params: 0
I use fit_generator to train the model, but I intend to use predict instead of predict_generator. I wrote a custom generator using yield. There's no issue with the generator itself, because predict_generator works fine.
model.fit_generator(generator=generator,
                    steps_per_epoch=250, epochs=10, shuffle=True)
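For reference, a minimal sketch of the kind of generator described above (the function name and the zero-filled data are hypothetical, not from the original code) that always yields batches matching the fixed batch size of 50:

```python
import numpy as np

def batch_generator(X, y, batch_size=50):
    """Yield (inputs, targets) pairs whose first dimension always
    equals batch_size, matching the model's batch_input_shape."""
    n = len(X)
    while True:  # Keras generators are expected to loop forever
        idx = np.random.permutation(n)
        # Step in strides of batch_size; drop any short remainder so
        # every yielded batch has exactly batch_size samples.
        for start in range(0, n - batch_size + 1, batch_size):
            batch = idx[start:start + batch_size]
            yield X[batch], y[batch]

# Hypothetical data: 500 samples, 10 timesteps, 9 features
X = np.zeros((500, 10, 9))
y = np.zeros((500, 9))
generator = batch_generator(X, y, batch_size=50)
bx, by = next(generator)
print(bx.shape, by.shape)  # (50, 10, 9) (50, 9)
```

Because every batch has exactly 50 samples, it always matches the (50, 10, 9) placeholder, which is why training shows no error.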
When I use predict:
testX = np.zeros((50, 10, 9))
model.predict(testX)
it throws the error below:
ValueError: Cannot feed value of shape (32, 10, 9) for Tensor
'lstm_1_input:0', which has shape '(50, 10, 9)'
Now I have no clue where this 32 came from, because the input shape is (50, 10, 9), which is exactly what the model expects.
Use
model.predict(np.random.randn(50, 10, 9), batch_size=50)
You are fixing the batch size to 50 via batch_input_shape=(50, 10, 9). However, when you call predict you are not passing batch_size, so it defaults to 32. Keras then tries to feed a batch of shape (32, 10, 9) into a placeholder of shape (50, 10, 9), and it fails.
It's not failing in fit_generator because your generator returns batches of exactly size 50.
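Where the 32 comes from can be reproduced outside Keras. predict slices the input array into chunks of batch_size (default 32), so the first chunk of a 50-sample array has 32 rows, which no longer matches the fixed placeholder shape. A small sketch of that slicing (batch_slices is a hypothetical helper mimicking the batching, not a Keras function):

```python
import numpy as np

testX = np.zeros((50, 10, 9))

def batch_slices(n_samples, batch_size):
    """Split n_samples into consecutive [start, end) batch ranges,
    the way predict batches its input."""
    return [(i, min(i + batch_size, n_samples))
            for i in range(0, n_samples, batch_size)]

# Default batch_size=32: 50 samples are split into 32 + 18,
# and the first (32, 10, 9) chunk triggers the ValueError.
chunks = [testX[a:b] for a, b in batch_slices(len(testX), 32)]
print([c.shape for c in chunks])  # [(32, 10, 9), (18, 10, 9)]

# batch_size=50: a single chunk of exactly 50 samples,
# which matches batch_input_shape=(50, 10, 9).
chunks = [testX[a:b] for a, b in batch_slices(len(testX), 50)]
print([c.shape for c in chunks])  # [(50, 10, 9)]
```

Note this also means any input to predict must contain a multiple of 50 samples; a more flexible alternative (if retraining is an option) would be leaving the batch dimension unfixed, but passing batch_size=50 as above is the direct fix here.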