Tags: machine-learning, autoencoder

Does the number of epochs used in an autoencoder depend on the size of the dataset?


I am developing a simple autoencoder, and to find the right hyperparameters I run a grid search on a small subset of the dataset. Can the number of epochs found this way be reused when training on the full, larger dataset? Does the number of epochs depend on the size of the dataset, or not? E.g. would a large dataset need many more epochs, and a small dataset fewer?


Solution

  • In general, yes: the number of epochs needed will change if the dataset is bigger.

    The number of epochs should not be decided a priori. Instead, run the training while monitoring the training and validation losses over time, and stop training when the validation loss reaches a plateau or starts increasing. This technique is called "early stopping" and is good practice in machine learning.
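The early-stopping idea above can be sketched as a framework-agnostic training loop. The `train_one_epoch` and `validation_loss` callables here are hypothetical placeholders for whatever your autoencoder's training and evaluation steps look like; the `patience` parameter controls how many non-improving epochs to tolerate before stopping:

```python
def train_with_early_stopping(train_one_epoch, validation_loss,
                              max_epochs=1000, patience=5):
    """Train until the validation loss stops improving (early stopping).

    train_one_epoch: callable running one epoch of training (hypothetical).
    validation_loss: callable returning the current validation loss.
    Returns the number of epochs actually run.
    """
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = validation_loss()
        if val_loss < best_loss:
            # Validation loss improved: reset the patience counter.
            best_loss = val_loss
            epochs_without_improvement = 0
        else:
            # Plateau or increase: count it toward the patience budget.
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch + 1  # stop early
    return max_epochs
```

Because the stopping point is driven by the observed validation loss rather than a fixed epoch count, the same loop adapts automatically to datasets of different sizes, which is exactly why an epoch count tuned on a small subset need not transfer to the full dataset.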