
Using simple weights (-1, -1) and bias (2) for NAND perceptron


In most study materials about perceptrons, a perceptron is defined like this:

output = 1 if w · x + b > 0
output = 0 if w · x + b <= 0

(The dot '·' in the formulas above denotes the dot product.)
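Written out in Python, that rule is just a thresholded dot product. The following is a minimal sketch; the function name perceptron is my own, purely illustrative:

    def perceptron(w, x, b):
        # Compute w . x + b and threshold at zero.
        activation = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1 if activation > 0 else 0

    print(perceptron([-1, -1], [1, 1], 2))  # -> 0, since -1 - 1 + 2 = 0 is not > 0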

In most examples of a NAND perceptron I have seen, it is defined with different weights and biases.

I am defining my NAND perceptron as follows:

  • w = [-1, -1], b = 2

Here is the proof that it works as a NAND perceptron.

x0 x1 | w0 * x0 + w1 * x1 + b | output
------+-----------------------+-------
0  0  | 2                     | 1
0  1  | 1                     | 1
1  0  | 1                     | 1
1  1  | 0                     | 0
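The same table can be checked exhaustively in code. Here is a short, self-contained Python sketch (names are illustrative) that asserts every row:

    # Verify that w = [-1, -1], b = 2 reproduces the NAND truth table.
    def perceptron(w, x, b):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    w, b = [-1, -1], 2
    for x0 in (0, 1):
        for x1 in (0, 1):
            expected = 0 if (x0 == 1 and x1 == 1) else 1  # NAND: 0 only for (1, 1)
            assert perceptron(w, [x0, x1], b) == expected
    print("all four rows match NAND")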

Is this a valid NAND perceptron? Is there any specific reason why existing texts do not use a simple NAND perceptron like this?


Solution

  • Because it is not good practice to draw the discriminative boundary near the sample data. With w = [-1, -1] and b = 2, the input (1, 1) lies exactly on the line w · x + b = 0, so the decision boundary passes through a sample point with zero margin; typical texts choose weights that keep a clear margin on both sides of the boundary.
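To make that concrete: the perpendicular distance from a point x to the line w · x + b = 0 is |w · x + b| / ||w||. The sketch below compares the proposed weights against w = [-2, -2], b = 3, which is an assumption about the kind of values typical texts use, chosen only to illustrate a boundary that keeps a margin:

    # Distance from each input point to the decision line w . x + b = 0.
    # The proposed weights put (1, 1) exactly on the line (distance 0);
    # the alternative w = [-2, -2], b = 3 (assumed typical-textbook values)
    # keeps every point strictly off the line.
    from math import sqrt

    def distance(w, b, x):
        return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / sqrt(sum(wi * wi for wi in w))

    points = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for w, b in [([-1, -1], 2), ([-2, -2], 3)]:
        print(w, b, [round(distance(w, b, x), 3) for x in points])
    # -> [-1, -1] 2 [1.414, 0.707, 0.707, 0.0]
    # -> [-2, -2] 3 [1.061, 0.354, 0.354, 0.354]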