The problem here is to draw a Neural Network (NN) with perceptrons that, without the need for backpropagation learning, can distinguish the pink and the green zones in the two-dimensional (2d) chart below. The two zones take up the entire 2d XY space.
The inputs to the NN are the X and Y values. The output of the NN must be:
I know that the solution to the problem is as follows, but I can't figure out why the weights W13, W23 and the bias b3 are 1, 1, and -1.9, respectively (and why b3 has to be greater than -2 and less than or equal to -1).
I wonder if they're determined by intuition, but I can't believe there isn't a more deterministic process of knowing how to calculate the values.
How do I calculate the values of the weights (W13, W23) and the bias (b3)?
You have a hint in your last image: A∧B, i.e. your output perceptron implements the logical AND function. So, let's see why that is.
From your first image you have 2 decision boundaries, one for each of your hidden-layer perceptrons "A" and "B". The "A" line (hyperplane) shows when perceptron "A" fires (outputs logical "1"): every (X, Y) combination lying on the right side of that line gives an output of 1, and otherwise it gives 0. Likewise for perceptron "B". So your final decision boundary is the intersection of those 2 decision boundaries, i.e. A∧B (conjunction). In other words, an (X, Y) pair must lie on the right side of both lines (the pink zone) to produce an output of 1.
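To make the structure concrete, here is a minimal sketch of such a two-layer network. The hidden-layer weights below are hypothetical placeholders (the real lines A and B come from your first image, which defines them); only the output layer uses the values under discussion:

```python
def perceptron(x, y, wx, wy, b):
    # Step-activation perceptron: fires (1) when the weighted sum is positive.
    return 1 if wx * x + wy * y + b > 0 else 0

def network(x, y):
    # Hypothetical hidden-layer lines, stand-ins for "A" and "B":
    a = perceptron(x, y, 1.0, 0.0, -1.0)  # fires when x > 1
    b = perceptron(x, y, 0.0, 1.0, -1.0)  # fires when y > 1
    # Output perceptron with the weights in question: implements A AND B.
    return perceptron(a, b, 1.0, 1.0, -1.9)

print(network(2, 2))  # right of both lines -> 1
print(network(2, 0))  # right of only one line -> 0
```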
Now, you just need to implement the logical AND function with a perceptron. To do so, you can write the truth table for f = A∧B:
A B f
0 0 0
0 1 0
1 0 0
1 1 1
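You can check directly that the given values W13 = 1, W23 = 1, b3 = -1.9 reproduce this truth table:

```python
W13, W23, b3 = 1.0, 1.0, -1.9

def output_perceptron(a, b):
    # Fires only when the weighted sum of the hidden outputs exceeds -b3.
    return 1 if a * W13 + b * W23 + b3 > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, output_perceptron(a, b))
```

This prints the four truth-table rows, with f = 1 only for a = b = 1.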
From the truth table you can write the following system of inequalities for your output perceptron:
0*W13 + 0*W23 + b3 <= 0
0*W13 + 1*W23 + b3 <= 0
1*W13 + 0*W23 + b3 <= 0
1*W13 + 1*W23 + b3 > 0
Simplifying:
b3 <= 0
W23 + b3 <= 0
W13 + b3 <= 0
W13 + W23 + b3 > 0
As you can see, it has an infinite number of solutions. If we choose W13 = W23 = 1, then b3 > -2 and b3 <= -1, i.e. b3 ∈ (-2, -1].
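A quick numeric check of that interval: with W13 = W23 = 1, sweep a few candidate values of b3 through the four inequalities; only values in (-2, -1] should satisfy all of them.

```python
W13 = W23 = 1.0

def valid(b3):
    # The four simplified inequalities from the truth table.
    return (b3 <= 0
            and W23 + b3 <= 0
            and W13 + b3 <= 0
            and W13 + W23 + b3 > 0)

for b3 in (-2.0, -1.9, -1.5, -1.0, -0.5):
    print(b3, valid(b3))
```

Note that -2.0 itself fails (the last inequality is strict), while -1.0 still passes, matching b3 ∈ (-2, -1].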
Intuitively, instead of solving the system of inequalities, once you have identified the last-layer function as logical AND (the intersection of the hidden-layer perceptrons), you can pick the values in your head with the following logic. We want the output perceptron to fire (output logical "1") only when both A and B are 1. So choose a negative bias b3, and weights W13 and W23 such that each weight alone is at most |b3| (so a single active input cannot fire the perceptron), but their sum exceeds |b3| (so both active inputs together do fire it).