When implementing an artificial neural net with backpropagation, the algorithm is complex and there is no stable reference point for testing.
What is the best approach to debugging the correctness of the implemented algorithm?
I am not talking about train/validation/test phases; I would like some sort of gauge or, perhaps, step-by-step training results/weight values to check against the inner workings of the network, for a given configuration and training data.
What I think might help is: taking one of the established frameworks (Theano or TensorFlow), defining a network with the same configuration as yours, and checking the results against their automatic differentiation.
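A minimal sketch of that idea, assuming a small one-hidden-layer network with sigmoid activation and MSE loss (the layer sizes, the batch, and the `my_backprop_gradients()` helper are hypothetical placeholders for your own implementation): build the same network in TensorFlow, let its autodiff compute the gradients for one batch, and compare them element-wise to the gradients your own backprop produces for the identical weights and data.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Toy batch; in practice, feed the same batch to your own implementation.
X = tf.constant(rng.normal(size=(4, 3)), dtype=tf.float32)
y = tf.constant(rng.normal(size=(4, 1)), dtype=tf.float32)

# Same configuration as the hand-written net: 3 -> 5 -> 1, sigmoid hidden layer.
W1 = tf.Variable(rng.normal(size=(3, 5)).astype(np.float32))
b1 = tf.Variable(np.zeros(5, dtype=np.float32))
W2 = tf.Variable(rng.normal(size=(5, 1)).astype(np.float32))
b2 = tf.Variable(np.zeros(1, dtype=np.float32))

with tf.GradientTape() as tape:
    hidden = tf.sigmoid(X @ W1 + b1)
    output = hidden @ W2 + b2
    loss = tf.reduce_mean(tf.square(output - y))

# Reference gradients from TensorFlow's automatic differentiation.
grads = tape.gradient(loss, [W1, b1, W2, b2])

# Compare against the gradients your own code computes for the same
# weights and the same batch; they should agree within a small tolerance.
# my_grads = my_backprop_gradients(X.numpy(), y.numpy(),
#                                  W1.numpy(), b1.numpy(),
#                                  W2.numpy(), b2.numpy())
# for g_ref, g_mine in zip(grads, my_grads):
#     np.testing.assert_allclose(g_ref.numpy(), g_mine, rtol=1e-4, atol=1e-5)
```

If the per-step gradients match, repeated update steps with the same learning rate should also produce matching weight trajectories, which gives the step-by-step reference values asked about above.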