I am trying to perform a 10-fold cross-validation on an LSTM; the code is the following:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import KFold
import pandas as pd

# Initialising the RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 350, return_sequences = True, input_shape = (X_train1.shape[1], len(columns1))))
regressor.add(Dropout(0.5))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 350, return_sequences = True))
regressor.add(Dropout(0.5))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 350, return_sequences = True))
regressor.add(Dropout(0.5))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 350))
regressor.add(Dropout(0.5))
# Adding the output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer = 'rmsprop', loss = 'mean_squared_error', metrics = ['accuracy'])
# RNN TRAINING
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
val_accuracies = []
test_accuracies = []
i = 1
df_metrics = pd.DataFrame()
kfold.split(X_train1, y_train1)
#for train_index, test_index in kfold.split(disease_df):
for train_index, test_index in kfold.split(X_train1, y_train1):
    #callback = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)
    # Fitting the RNN to the Training set (RUN/TRAIN the model)
    history = regressor.fit(X_train1, y_train1, epochs = 100, batch_size = 25, validation_split = 0.1,
                            callbacks=[EarlyStopping('val_accuracy', mode='max', patience=5)])
    i += 1
The idea is to perform a 10-fold cross-validation with EarlyStopping based on the lack of improvement in the validation accuracy. The first fold runs perfectly, but every time the second fold is supposed to begin, I receive the error:
ValueError: Input 0 of layer sequential_3 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 68)
A note about my input:
X_train1.shape[1] = 1
len(columns1) = 68
So for some reason, when the second fold begins, X_train1.shape[1] appears to be equal to None. Has this ever happened to you? Thanks!
I can see straight away some strange things in the loop you aim to implement. I think you can safely get rid of the
kfold.split(X_train1, y_train1)
before the for loop.
Then, you are not selecting the split instances but are just feeding the whole dataset X_train1 into every fit. This looks better:
from sklearn.model_selection import KFold

kf = KFold(n_splits=2)
for train_index, test_index in kf.split(X_train1):
    print("TRAIN:", train_index, "TEST:", test_index)
    # Select only the instances belonging to this fold
    X_train, X_test = X_train1[train_index], X_train1[test_index]
    y_train, y_test = y_train1[train_index], y_train1[test_index]
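Putting the pieces together, here is a minimal sketch of what the full loop could look like. It assumes a hypothetical build_regressor() helper (not in your code) that recreates the Sequential model from scratch, so every fold starts from freshly initialised weights instead of continuing to train the model from the previous fold; I also monitor val_loss instead of val_accuracy here, since accuracy is not a meaningful metric for an MSE regression:

from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.callbacks import EarlyStopping

def build_regressor():
    # Hypothetical helper: rebuild the model from scratch so each fold
    # starts untrained (layers as in the question, shortened to two
    # LSTM layers here for brevity)
    regressor = Sequential()
    regressor.add(LSTM(units=350, return_sequences=True,
                       input_shape=(X_train1.shape[1], len(columns1))))
    regressor.add(Dropout(0.5))
    regressor.add(LSTM(units=350))
    regressor.add(Dropout(0.5))
    regressor.add(Dense(units=1))
    regressor.compile(optimizer='rmsprop', loss='mean_squared_error')
    return regressor

kfold = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_index, test_index) in enumerate(kfold.split(X_train1), start=1):
    # Slice out the instances belonging to this fold
    X_tr, X_te = X_train1[train_index], X_train1[test_index]
    y_tr, y_te = y_train1[train_index], y_train1[test_index]
    regressor = build_regressor()  # fresh model for every fold
    history = regressor.fit(X_tr, y_tr, epochs=100, batch_size=25,
                            validation_split=0.1,
                            callbacks=[EarlyStopping(monitor='val_loss',
                                                     mode='min', patience=5)])
    test_mse = regressor.evaluate(X_te, y_te, verbose=0)
    print("Fold", fold, "- test MSE:", test_mse)

Rebuilding the model inside the loop is what makes the folds independent; otherwise the same regressor keeps training across folds and the per-fold results are no longer comparable.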