neural-network, artificial-intelligence, bias-neuron

How to use the "Bias" in Neural Networks


For two weeks now I have been working with a neural network. My activation function is the standard sigmoid, but there is one thing I have read about on the internet for which I found different interpretations.

Currently I am adding up all input values multiplied by their weights and then adding the bias (which is the negative threshold). I took all of this from http://neuralnetworksanddeeplearning.com/chap1#sigmoid_neurons and it worked pretty well for me, but then I found this page: http://www.nnwj.de/backpropagation.html
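
Just to make clear what I mean, here is that computation for a single neuron, written out with plain arrays (this is only an illustration, the class and method names are made up; my real code, with its matrix classes, is further down):

public class SigmoidNeuron {

    // sigmoid activation: 1 / (1 + e^(-z))
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // forward pass of one neuron: z = sum(w[k] * x[k]) + b, output = sigmoid(z)
    static double activate(double[] x, double[] w, double b) {
        double z = b;                    // start from the bias (the negative threshold)
        for (int k = 0; k < x.length; k++) {
            z += w[k] * x[k];            // add each weighted input
        }
        return sigmoid(z);
    }

    public static void main(String[] args) {
        double[] x = {0.5, -1.0};
        double[] w = {0.8, 0.2};
        System.out.println(activate(x, w, 0.1));
    }
}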

In the forward propagation part the bias is not used at all, and I think it should be. So please tell me: am I just too stupid to see what they did there, or which of the two pages is wrong?

for (int v = 0; v < outputs[i].X; v++) {
    // start from the bias of neuron v in layer i
    outputs[i].set(v, biases[i].get(v));
    // add the weighted output of every neuron k in the previous layer
    for (int k = 0; k < outputs[i-1].X; k++) {
        outputs[i].increase(v, weights[i].get(v, k) * outputs[i-1].get(k));
    }
    // apply the sigmoid activation
    outputs[i].set(v, sigmoid(outputs[i].get(v)));

    System.out.println("Layer :" + i + "    Neuron :" + v + "    bias :" + biases[i].get(v) + "   value :" + outputs[i].get(v));
}

This is my code for calculating the layer outputs; the part for a single neuron is this:

outputs[i].set(v, biases[i].get(v));
for (int k = 0; k < outputs[i-1].X; k++) {
    outputs[i].increase(v, weights[i].get(v, k) * outputs[i-1].get(k));
}
outputs[i].set(v, sigmoid(outputs[i].get(v)));

You probably won't be able to follow exactly what I did there, but i just stands for my layer, k runs over all the input neurons, and I am iterating through the input neurons and adding up their outputs multiplied by the weights. Just before that, I set the starting value to the bias.

I would be very happy if you could help me with this problem; also, I am sorry for my English :)


Solution

  • In general the bias term should be included in both the forward and backward passes.

    I think on the second page you referred to, the bias term is omitted in the Forwardpropagation section for simplicity, and only in the Backpropagation section is it explained why we want that additional bias term.

    The first looks like a more thoughtful tutorial than the second.
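
    To make that concrete, here is a minimal, self-contained sketch of one sigmoid neuron trained on a single example (all names and numbers are illustrative, none of them come from your code): the bias is added in the forward pass, and in the backward pass it receives its own update, whose gradient is simply the neuron's error term delta.

    public class BiasSketch {

        static double sigmoid(double z) {
            return 1.0 / (1.0 + Math.exp(-z));
        }

        public static void main(String[] args) {
            double[] x = {0.3, 0.7};      // inputs
            double[] w = {0.5, -0.4};     // weights
            double b = 0.1;               // bias
            double target = 1.0;          // desired output
            double learningRate = 0.5;

            // forward pass: z = sum(w[k] * x[k]) + b, a = sigmoid(z)
            double z = b;
            for (int k = 0; k < x.length; k++) {
                z += w[k] * x[k];
            }
            double a = sigmoid(z);

            // backward pass for a squared-error loss:
            // delta = (a - target) * sigmoid'(z), with sigmoid'(z) = a * (1 - a)
            double delta = (a - target) * a * (1.0 - a);

            // weight gradients are delta * input; the bias gradient is just delta
            for (int k = 0; k < x.length; k++) {
                w[k] -= learningRate * delta * x[k];
            }
            b -= learningRate * delta;    // the bias is updated too, not only the weights

            System.out.println("output: " + a + "    updated bias: " + b);
        }
    }

    The bias gradient needs no input factor because the bias feeds into z with a constant coefficient of 1; that is also why a backpropagation rule that updates the bias only makes sense if the bias was part of the forward pass in the first place.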