artificial-intelligence · neural-network · perceptron

What's the point of the threshold in a perceptron?


I'm having trouble seeing what the threshold actually does in a single-layer perceptron. The data usually gets separated no matter what the threshold value is. A lower threshold seems to divide the data more equally; is that what it is used for?


Solution

  • Actually, you only set a threshold explicitly when you aren't using a bias. Otherwise, the threshold is effectively 0.

    Remember that a single neuron divides your input space with a hyperplane. OK?

    Now imagine a neuron with 2 inputs X=[x1, x2], 2 weights W=[w1, w2] and threshold TH. The equation below shows how this neuron works:

    x1*w1 + x2*w2 = TH
    

    which is equivalent to:

    x1*w1 + x2*w2 - 1*TH = 0
    

    That is, this is the hyperplane equation that divides your input space.

    Notice that this neuron only works if you set the threshold manually. The solution is to turn TH into just another weight, so:

    x1*w1 + x2*w2 - 1*w0 = 0
    

    where the term 1*w0 is your BIAS. Now you can still draw a hyperplane in your input space without setting a threshold manually (i.e., the threshold is always 0). And if you do set the threshold to some other value, the weights will simply adapt to adjust the equation, i.e., the weights (INCLUDING the bias) absorb the threshold's effect.
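    A minimal sketch of this idea, assuming Python with NumPy (the AND-gate data, learning rate, and epoch count are illustrative choices, not from the original answer): the bias enters as an extra weight w0 on a constant input of 1, so the neuron can always fire against a fixed threshold of 0.

    ```python
    import numpy as np

    def predict(W, x):
        # Prepend a constant input of 1 so W[0] acts as the bias weight;
        # the neuron fires when W . [1, x1, x2] >= 0 (threshold fixed at 0).
        return 1 if np.dot(W, np.insert(x, 0, 1.0)) >= 0 else 0

    def train(X, y, lr=0.1, epochs=50):
        # Perceptron learning rule; W = [w0 (bias), w1, w2].
        W = np.zeros(X.shape[1] + 1)
        for _ in range(epochs):
            for xi, target in zip(X, y):
                err = target - predict(W, xi)
                W += lr * err * np.insert(xi, 0, 1.0)
        return W

    # AND gate: linearly separable, but NOT by a hyperplane through the
    # origin, so a zero-threshold neuron WITHOUT a bias could never learn it.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    W = train(X, y)
    print([predict(W, xi) for xi in X])  # → [0, 0, 0, 1]
    ```

    Note that no threshold is ever set by hand here: the learned bias weight W[0] ends up negative, which is exactly the "absorbed" positive threshold the equations above describe.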