Tags: python, nlp

Training and validation loss and accuracy in LSTM


I am doing text classification with 3 classes. After dealing with an overfit model, I added L2 regularization; the plot below shows my model's accuracy and loss after that change. Does this mean my model is learning now?

[plot of training/validation accuracy and loss per epoch]


Solution

  • You're still dealing with overfitting even after adding L2 regularization: your model is not learning much after the third epoch. Some solutions you should consider:

    • Increasing the strength of your L2 regularization (see the first sketch after this list)
    • Decreasing the number of weights in your model so its capacity matches your dataset size
    • Increasing your dropout rate
    • Transfer learning from a different classification task (for example, you can pre-train your model to classify news articles in your language into different categories)
    • Increasing the size of your dataset with data augmentation (see the second sketch after this list)
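
    Since I can't see your exact architecture, here is a minimal sketch of the first three points, assuming a TensorFlow/Keras LSTM text classifier; `VOCAB_SIZE`, `MAX_LEN`, and the layer sizes are placeholders to adjust to your tokenizer and data:

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    # Placeholder sizes -- adjust to your tokenizer and dataset.
    VOCAB_SIZE = 20000   # vocabulary size used by your tokenizer
    MAX_LEN = 200        # padded sequence length
    NUM_CLASSES = 3

    model = tf.keras.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 64),
        # A deliberately small LSTM: fewer weights means less capacity to memorize.
        layers.LSTM(
            32,
            kernel_regularizer=regularizers.l2(1e-3),      # stronger L2 penalty
            recurrent_regularizer=regularizers.l2(1e-3),
            dropout=0.3,             # dropout on the LSTM inputs
            recurrent_dropout=0.3,   # dropout on the recurrent connections
        ),
        layers.Dropout(0.5),         # extra dropout before the classifier head
        layers.Dense(
            NUM_CLASSES,
            activation="softmax",
            kernel_regularizer=regularizers.l2(1e-3),
        ),
    ])

    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",  # use categorical_crossentropy for one-hot labels
        metrics=["accuracy"],
    )
    model.summary()
    ```

    The exact values (1e-3 for L2, 0.3–0.5 for dropout, 32 LSTM units) are only starting points; tune them against your validation loss rather than treating them as fixed.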
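
    For the data augmentation point, here is a small, dependency-free sketch of one common text-augmentation idea (random word deletion and swapping); synonym replacement or back-translation are stronger options if you have the tooling. The function name and parameters are just for illustration:

    ```python
    import random

    def augment_text(text, p_delete=0.1, n_swaps=1, seed=None):
        """Create a noisy copy of a sentence by randomly deleting and swapping words."""
        rng = random.Random(seed)
        words = text.split()

        # Randomly drop words with probability p_delete (always keep at least one word).
        kept = [w for w in words if rng.random() > p_delete] or [rng.choice(words)]

        # Swap a few random pairs of words to vary word order slightly.
        for _ in range(n_swaps):
            if len(kept) > 1:
                i, j = rng.sample(range(len(kept)), 2)
                kept[i], kept[j] = kept[j], kept[i]

        return " ".join(kept)

    # Example: generate several augmented copies of one training sample.
    sample = "the quarterly report shows a sharp increase in revenue"
    for k in range(3):
        print(augment_text(sample, seed=k))
    ```

    Apply this only to the training split (never validation/test), and keep the original sentence alongside its augmented copies so the class labels stay unchanged.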

    Good Luck!