Tags: machine-learning, neural-network, backpropagation, accord.net

Inconsistent/Different Test Performance/Error After Training Neural Network in Accord.Net


I am training a ResilientBackpropagation Neural Network with Accord.Net to get a scoring for a set of features.

The network is very simple and has:

  • 26 inputs

  • 1 hidden layer with 3 nodes

  • 1 output

I am training with:

  • SigmoidFunction
  • Random Initialization
  • train-set 3000 examples
  • validation-set 1000 examples
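To make the setup concrete, here is a minimal sketch (in plain Python rather than Accord.NET) of a 26-3-1 sigmoid network with random weight initialization. The weight range and dummy input are illustrative assumptions; the point is that two different seeds yield two different initial networks, which is the source of the run-to-run variation discussed below.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def init_network(rng, sizes=(26, 3, 1)):
    # Random initialization: each call with a different seed
    # yields a different set of weights (range is illustrative).
    return [[[rng.uniform(-0.1, 0.1) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]

def forward(net, x):
    # Plain feed-forward pass with sigmoid activations.
    for layer in net:
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
             for row in layer]
    return x

x = [1.0] * 26  # dummy 26-feature input
out_a = forward(init_network(random.Random(1)), x)
out_b = forward(init_network(random.Random(2)), x)
# Different seeds -> different initial weights -> different outputs,
# even before any training has happened.
```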

The learning curve looks slightly different on every run, but this is the average case: [learning curve plot]

My Question

If I run the training 5 times with the same parameters and validate the network on my validation set, I get 5 different F1 scores, between 88% and 91%. This makes it very difficult to decide when to stop training and which network to keep as the final one. Is this normal? If I want to deploy, do I have to run the training X times and stop once I think I have reached the best result?


Solution

  • The neural network initializes its weights randomly, so each training run produces a different network and therefore different performance. While the training process itself is deterministic, the initial values are not: as a result you may end up in different local minima, or stop in different places.
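A common way to deal with this is random-restart model selection: train several times and keep the network with the best validation score (or fix the random seed once you are satisfied, so the run is reproducible). The sketch below illustrates the selection loop only; `train_once` is a hypothetical placeholder that, in a real Accord.NET program, would run the full training and return the validation F1, and the simulated 88-91% spread mirrors the numbers from the question.

```python
import random

def train_once(seed):
    # Placeholder for one full training run: in reality this would
    # train the 26-3-1 network from a seed-dependent initialization
    # and return the trained model plus its validation-set F1 score.
    rng = random.Random(seed)
    model = f"model-{seed}"            # stand-in for the trained network
    f1 = 0.88 + 0.03 * rng.random()    # simulated F1 in the 88-91% range
    return model, f1

# Train 5 times with different seeds, keep the best-scoring model.
runs = [train_once(seed) for seed in range(5)]
best_model, best_f1 = max(runs, key=lambda r: r[1])
```

Once the best seed is known, re-running training with that fixed seed reproduces the same network, which removes the run-to-run variance at deployment time.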