Tags: python, tensorflow, keras, deep-learning, lstm

Keras - Is there a way to reduce the gap between categorical_accuracy and val_categorical_accuracy?


I'm trying to build and train an LSTM neural network.

Here is my code (summary version):

import os
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.callbacks import TensorBoard

# Hold out 20% for testing, then 20% of the remainder for validation.
X_train, X_test, y_train, y_test = train_test_split(np.array(sequences), to_categorical(labels).astype(int), test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2)

log_dir = os.path.join('Logs')
tb_callback = TensorBoard(log_dir=log_dir)

model = Sequential()
model.add(LSTM(64, return_sequences=True, activation='tanh', input_shape=(60,1662)))
model.add(LSTM(128, return_sequences=True, activation='tanh', dropout=0.31))
model.add(LSTM(64, return_sequences=False, activation='tanh'))
model.add(Dense(32, activation='relu'))
model.add(Dense(len(actions), activation='softmax'))

model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])

val_dataset = tf.data.Dataset.from_tensor_slices((X_val, y_val)) # default slice percentage check 
val_dataset = val_dataset.batch(256)

model.fit(X_train, y_train, batch_size=256, epochs=250, callbacks=[tb_callback], validation_data=val_dataset)

And the model fit result:

Epoch 248/250
8/8 [==============================] - 2s 252ms/step - loss: 0.4563 - categorical_accuracy: 0.8641 - val_loss: 2.1406 - val_categorical_accuracy: 0.6104
Epoch 249/250
8/8 [==============================] - 2s 255ms/step - loss: 0.4542 - categorical_accuracy: 0.8672 - val_loss: 2.2365 - val_categorical_accuracy: 0.5667
Epoch 250/250
8/8 [==============================] - 2s 234ms/step - loss: 0.4865 - categorical_accuracy: 0.8562 - val_loss: 2.1668 - val_categorical_accuracy: 0.5875

I want to reduce the gap between categorical_accuracy and val_categorical_accuracy.

How can I do that?

Thank you for reading my question.


Solution

  • When there is such a large gap between your training and validation accuracy, it means your model is overfitting.

    So look into ways to prevent overfitting. Usually, the first thing to try is adding more data to your dataset.

    It won't work every time, but training with more data often helps the model pick up the real signal instead of memorizing noise. Adding regularization, such as extra dropout or an L2 weight penalty, can also help; a sketch follows this list.

  • Try to stop training before the model overfits.

    Another option is to stop training early and to reduce the learning rate once the validation loss stops improving, as in the callback sketch below.
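For example, here is a minimal sketch of adding regularization to the model above. The Dropout rate and L2 strength are illustrative assumptions, not tuned values; tune them against your validation accuracy:

from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

model = Sequential()
model.add(LSTM(64, return_sequences=True, activation='tanh', input_shape=(60, 1662)))
model.add(LSTM(128, return_sequences=True, activation='tanh', dropout=0.31))
model.add(LSTM(64, return_sequences=False, activation='tanh'))
model.add(Dropout(0.5))  # assumed rate -- drop half the activations before the dense head
model.add(Dense(32, activation='relu',
                kernel_regularizer=regularizers.l2(1e-4)))  # assumed L2 penalty strength
model.add(Dense(len(actions), activation='softmax'))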
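And a sketch of early stopping plus learning-rate reduction using Keras callbacks; the monitored metric, patience, and factor values here are assumptions to tune for your data:

from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# Stop once val_loss has not improved for 15 epochs, restoring the best weights seen.
early_stop = EarlyStopping(monitor='val_loss', patience=15, restore_best_weights=True)

# Halve the learning rate after 5 stagnant epochs, down to a floor of 1e-5.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-5)

model.fit(X_train, y_train, batch_size=256, epochs=250,
          callbacks=[tb_callback, early_stop, reduce_lr],
          validation_data=val_dataset)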