Tags: neural-network, backpropagation, bias-neuron

With bias, ANN does not converge anymore


I'm learning ANNs, and I wrote two scripts (in Fortran 90 and Python) for a simple binary classification problem.

I first did it without bias and got good convergence. But after adding a bias to each node, it does not converge anymore (either everything goes near 0 or everything goes near 1).

The bias input is 1 and has its own weight for each node. It is randomly initialized and then updated by adding a delta, just like the other weights. I have tried changing the gradient step size, but it keeps doing the same thing.
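
To be concrete, here is a minimal sketch of that scheme for one layer (illustrative Python/NumPy, not my actual script; the shapes, the seed, and the helper names are assumptions):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)

    # Bias handled as one extra weight per node, fed a constant input of 1:
    # W has shape (n_out, n_in + 1); its last column holds the bias weights.
    W = rng.normal(size=(3, 3 + 1))      # hypothetical layer: 3 inputs -> 3 nodes

    def forward(W, x):
        x1 = np.append(x, 1.0)           # append the constant bias input
        return sigmoid(W @ x1), x1

    def update(W, delta, x1, lr):
        # The bias weight gets the same gradient-descent update as the others
        return W - lr * np.outer(delta, x1)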

Does anyone have any clues?

EDIT:

The network:

    IN                 HIDDEN             OUTPUT node
(each column is one    LAYERS           (each column is
training sample)  (2 layers of 3 nodes)  the wanted result)


          W1      .___W2__.    W3
|0|0|1|1|-------->|___|___|______
|0|1|0|1|--\/_-\->|___|___|______\_--> |1|1|0|0|
|1|0|1|1|--/\__/->|___|___|______/

The activation function is a sigmoid (1/(1+exp(-x)))

The weights are initialized with a normal distribution, in the range [-1, 1].
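
For completeness, here is a self-contained sketch of this exact setup (illustrative Python/NumPy; the seed, the init scale, the epoch count, and the learning rate of 0.5 are assumptions, not my exact values):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)

    # Training data as in the diagram: each column is one sample
    X = np.array([[0, 0, 1, 1],
                  [0, 1, 0, 1],
                  [1, 0, 1, 1]], dtype=float)
    T = np.array([[1, 1, 0, 0]], dtype=float)

    # 3 inputs, two hidden layers of 3 nodes, 1 output node;
    # each weight matrix carries an extra bias column (its last column)
    sizes = [3, 3, 3, 1]
    Ws = [rng.normal(scale=0.5, size=(sizes[i + 1], sizes[i] + 1))
          for i in range(len(sizes) - 1)]

    lr = 0.5
    ones = np.ones((1, X.shape[1]))

    for epoch in range(10000):
        # Forward pass, appending the constant bias input 1 at each layer
        acts = [X]
        for W in Ws:
            acts.append(sigmoid(W @ np.vstack([acts[-1], ones])))

        # Backward pass with the sigmoid derivative a * (1 - a)
        delta = (acts[-1] - T) * acts[-1] * (1.0 - acts[-1])
        for i in reversed(range(len(Ws))):
            a_in = np.vstack([acts[i], ones])
            grad = delta @ a_in.T
            if i > 0:
                # Propagate the error, dropping the bias row
                # (the bias input feeds no earlier node)
                back = Ws[i].T @ delta
                delta = back[:-1] * acts[i] * (1.0 - acts[i])
            Ws[i] -= lr * grad

    print(np.round(acts[-1], 3))   # should approach [[1, 1, 0, 0]]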


Solution

  • You may have a problem similar to this one:

    https://datascience.stackexchange.com/questions/15602/training-my-neural-network-to-overfit-my-training-dataset

    You should also be careful with your learning step: if it is too large, you can't converge. A quick sweep over step sizes (see the sketch below) helps check this.
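
    For example (illustrative; `train` is a hypothetical function wrapping your training loop and returning the final error, not a real API):

        # Hypothetical sweep: re-train with several step sizes, compare final error
        for lr in [5.0, 1.0, 0.5, 0.1, 0.01]:
            err = train(lr=lr, epochs=5000)   # train() is assumed, not shown here
            print(f"lr={lr:<5} final error={err:.4f}")

    If the error only settles for the smaller values, the step size was the culprit.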