Source: http://datasciencelab.wordpress.com/2014/01/10/machine-learning-classics-the-perceptron/
"The general equation of a line given two points in it, (x1,y2)
and (x2,y2)
, is A + Bx + Cy = 0
where A
, B
, C
can be written in terms of the two points. Defining a vector V = (A, B, C)
, any point (x,y)
belongs to the line if V'x = 0
, where x = (1,x,y)
. Points for which the dot product is positive fall on one side of the line, negatives fall on the other."
I don't quite understand how it works. Also, this line in particular:
self.V = np.array([xB*yA-xA*yB, yB-yA, xA-xB])
Why is B, the coefficient of x, determined by yB - yA?
For what it's worth, I'm learning Linear Algebra, so I'm quite familiar with the mathematical concept (I realize it is meant to be a normal), but how it is done here escapes me.
The text assumes the line equation to be: A + B*x + C*y = 0
Let's say we have two points on that line, P1(x1, y1) and P2(x2, y2). Using the two-point form of the line equation (based on P1 and P2), you will get

y - y1 = [(y2 - y1)/(x2 - x1)] * (x - x1)

Multiplying both sides by (x2 - x1) and moving every term to one side, the same equation can be developed into

[(x2 - x1)*y1 + (y1 - y2)*x1] + (y2 - y1)*x + (x1 - x2)*y = 0
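If you want to sanity-check that expansion, here is a quick symbolic verification (a sketch using sympy; it is not part of the original code):

import sympy as sp

x, y, x1, y1, x2, y2 = sp.symbols('x y x1 y1 x2 y2')

# Two-point form with the denominator (x2 - x1) cleared:
# (y2 - y1)*(x - x1) = (y - y1)*(x2 - x1)
two_point = (y2 - y1)*(x - x1) - (y - y1)*(x2 - x1)

# The claimed A + B*x + C*y form from above
expanded = (x2*y1 - y2*x1) + (y2 - y1)*x + (x1 - x2)*y

print(sp.expand(two_point - expanded))  # prints 0, so the two forms agree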
Looking at A + B*x + C*y = 0, you see that:

A is [(x2 - x1)*y1 + (y1 - y2)*x1] = x2*y1 - y2*x1
B, the coefficient of x, is (y2 - y1)
C, the coefficient of y, is (x1 - x2)
Hence, with P1 = (xA, yA) and P2 = (xB, yB), the value np.array([A, B, C]) in the source code appears as np.array([xB*yA - xA*yB, yB - yA, xA - xB]), which is exactly why the coefficient of x is yB - yA.
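To see the sign-based classification from the quoted text in action, here is a minimal runnable sketch (the sample points and the side() helper are my own illustration, not from the original post; xA, yA, xB, yB follow the question's naming):

import numpy as np

xA, yA = 0.0, 0.0   # first point on the line
xB, yB = 1.0, 1.0   # second point on the line (together they define y = x)

# V = (A, B, C), exactly as in the questioned line of code
V = np.array([xB*yA - xA*yB, yB - yA, xA - xB])

def side(x, y):
    # Sign of the dot product V . (1, x, y):
    # 0 on the line, +1/-1 on the two sides of it
    return np.sign(V.dot(np.array([1.0, x, y])))

print(side(0.5, 0.5))  # 0.0, the point lies on y = x
print(side(0.0, 1.0))  # -1.0, one side of the line
print(side(1.0, 0.0))  # 1.0, the other side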