
How does backpropagation work?


I created my first simple neural net on paper. It has 5 inputs (floats from 0.0 to 10.0) and one output, with no hidden layers. For example, my starting weights are [0.2, 0.2, 0.15, 0.15, 0.3]. The result should be in the same range as the input data (0.0 - 10.0). For example, the network returned 8 when the right answer is 8.5. How will backprop change the weights? I know how gradient descent works, but I can't understand how I should choose the terms of the partial derivative. Help, please. I can elaborate if you need. I'd also appreciate literature recommendations (if possible, in simple English).


Solution

  • If you start with 1, then continue to 2 and 3 in order, I believe you will gain a pretty strong understanding of how neural networks work.

    1. Andrew Ng's Coursera videos, especially Lectures 9.1, 9.2, and 9.4, among others.
    2. Tom Mitchell's Machine Learning book, Chapter 4.
    3. Raul Rojas' Neural Networks: A Systematic Introduction, Chapters 4, 6, and 7. This is long, but very easy to follow and understand. It is also a very nice and complete book (freely available from the author's website).

    It's essential to start by understanding how a single perceptron learns (which matches the network you have built). Once that is clear, the rest will not be too difficult.
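To make the single-unit case concrete, here is a minimal sketch in Python of gradient descent on one linear unit with squared-error loss (the weights are the ones from the question; the input vector and learning rate are hypothetical choices of mine, not from the question):

```python
def predict(weights, inputs):
    # Linear unit: y = w1*x1 + ... + w5*x5 (no activation, no bias for simplicity)
    return sum(w * x for w, x in zip(weights, inputs))

def update(weights, inputs, target, lr=0.001):
    # Squared error E = 0.5 * (target - y)^2
    # Partial derivative: dE/dw_i = -(target - y) * x_i
    # Gradient descent step: w_i <- w_i + lr * (target - y) * x_i
    y = predict(weights, inputs)
    error = target - y
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.2, 0.2, 0.15, 0.15, 0.3]   # weights from the question
inputs = [8.0, 8.0, 9.0, 9.0, 8.0]      # hypothetical input example
target = 8.5

for _ in range(20):
    weights = update(weights, inputs, target)

print(predict(weights, inputs))  # moves from 8.3 toward the target 8.5
```

Note that each weight's share of the correction is proportional to its input `x_i`: that is exactly what the partial derivative tells you. With hidden layers, backpropagation generalizes this by passing the error term backward through the chain rule.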