I am new to this field and am still tinkering with other people's code to see how it works. This code is from https://github.com/mwitiderrick/stockprice. I tried to declare the model in another format, as follows:
model = Sequential([
    LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)),
    Dropout(0.2),
    LSTM(units = 50, return_sequences = True),
    Dropout(0.2),
    LSTM(units = 50, return_sequences = True),
    Dropout(0.2),
    LSTM(units = 50, return_sequences = True),
    Dropout(0.2),
    Dense(units = 1)
])
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train, y_train, epochs = 1, batch_size = 32)
Then I use this code to predict the output:
predicted_stock_price = model.predict(X_test)
However, predicted_stock_price.shape shows (16, 60, 1), whereas the original code, written in this format:
# Initialising the RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 1, batch_size = 32)
gives a shape of (16, 1).
What could have caused this? The other lines are the same. Thanks in advance.
Remove return_sequences=True from the fourth LSTM layer. With return_sequences=True, that layer returns an output for every one of the 60 timesteps, so the Dense layer produces a prediction per timestep and the result has shape (16, 60, 1). The original code omits it on the fourth LSTM, so only the final timestep's output is passed to the Dense layer, which yields (16, 1).
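For reference, here is a minimal sketch of the corrected list-style definition, assuming the imports the linked repository uses (keras.models.Sequential and keras.layers.LSTM, Dropout, Dense) and the same X_train, y_train, X_test as in your question:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential([
    LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)),
    Dropout(0.2),
    LSTM(units = 50, return_sequences = True),
    Dropout(0.2),
    LSTM(units = 50, return_sequences = True),
    Dropout(0.2),
    LSTM(units = 50),  # last LSTM: no return_sequences, so only the final timestep's output is kept
    Dropout(0.2),
    Dense(units = 1)
])
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train, y_train, epochs = 1, batch_size = 32)
predicted_stock_price = model.predict(X_test)
# predicted_stock_price.shape should now be (16, 1), matching the regressor version

The only difference from your version is that the fourth LSTM no longer passes return_sequences=True, so it collapses the time dimension before the output layer.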