Consider a Perceptron made of a single unit with two binary inputs, x1 and x2.
x1  x2   y
-1  -1  +1
-1  +1  -1
+1  -1  +1
+1  +1  -1
Looking at this dataset, I was wondering: is it possible for the Perceptron to learn it?
Personally, I think it can, because the data is linearly separable. What do you think?
The data is indeed linearly separable: the label is simply y = -x2, so a hyperplane such as the one given by weights (0, -1) and zero bias classifies every example correctly. Hence, a Perceptron trained with the Perceptron Learning Algorithm will converge on a correct solution, as guaranteed by the Perceptron Convergence Theorem. Were the training data not linearly separable, the Perceptron Learning Algorithm would not converge, not even to an approximate solution: the weights would keep changing forever as the updates cycle.
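To illustrate, here is a minimal sketch of the Perceptron Learning Algorithm run on this dataset. The variable names (w, b, max_epochs) and the learning rate of 1 are my own choices for illustration, not anything fixed by the question.

```python
def train_perceptron(data, max_epochs=100):
    """Train a single Perceptron unit; returns weights [w1, w2] and bias b."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x1, x2, y in data:
            # Sign activation (treating a score of 0 as +1)
            pred = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1
            if pred != y:
                # Perceptron update rule: w <- w + y * x, b <- b + y
                w[0] += y * x1
                w[1] += y * x2
                b += y
                errors += 1
        if errors == 0:  # a full pass with no mistakes: converged
            break
    return w, b

data = [(-1, -1, +1), (-1, +1, -1), (+1, -1, +1), (+1, +1, -1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

print(all(predict(x1, x2) == y for x1, x2, y in data))  # True
```

Since the data is separable, the loop reaches an error-free pass after a few epochs, consistent with the convergence theorem.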