When you read the coefficients out of the Elements array and use them in the standard linear-algebra fashion to multiply the matrix with a vector, you get a different answer than if you call the Matrix.TransformPoints method. I believe this is an error in either the Microsoft documentation or the implementation.
That is, given the matrix M and the point a, where M(a) is the result of calling M.TransformPoints(a):
M(a).x != m11*ax + m12*ay + dx
and
M(a).y != m21*ax + m22*ay + dy
However, by every definition I could find of matrix-vector multiplication, using (i, j) notation where i is the row and j is the column, the above equations should hold with equality. The only way to get the correct answer is to switch the positions of m12 and m21 above. But if you do that, the equations no longer agree with the standard notation and common usage for multiplying a matrix by a vector (or point).
See the more detailed explanation below with the MSDN documentation and a sample program I wrote.
Matrix Constructor(Single, Single, Single, Single, Single, Single)
Syntax:
public Matrix(float m11, float m12, float m21, float m22, float dx, float dy)
Remarks: The elements m11, m12, m21, m22, dx, and dy of the Matrix are represented by the values in the array in that order.
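As a quick sanity check on that ordering (a minimal sketch; the constructor arguments are arbitrary), the Elements property hands the values back in the same order:

using System;
using System.Drawing.Drawing2D;

class ElementsOrderDemo
{
    static void Main()
    {
        // Construct with m11=1, m12=2, m21=3, m22=4, dx=5, dy=6.
        var m = new Matrix(1, 2, 3, 4, 5, 6);
        // Elements returns { m11, m12, m21, m22, dx, dy } in that order.
        Console.WriteLine(string.Join(", ", m.Elements)); // 1, 2, 3, 4, 5, 6
    }
}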
I wrote a sample program to confirm what I was seeing; it is shown below. Note that using the elements (matrix coefficients) in the equations only works if I transpose m12 and m21, which does not agree with general matrix (i, j) (row, column) notation and multiplication. My question is whether Microsoft's documentation/implementation is wrong. Am I overlooking something or doing something wrong?
In my code, the X and Y values for pointsA[0] only equal x1 and y1, the variables computed with m12 and m21 transposed in the equations. The code is below:
using System;
using System.Drawing;
using System.Drawing.Drawing2D;

class Program
{
    static void Main(string[] args)
    {
        var pointsA = new Point[1];
        pointsA[0].X = 1;
        pointsA[0].Y = 1;
        var pointsB = new Point[1];
        pointsB[0].X = 1;
        pointsB[0].Y = 1;

        // Transform pointsA using the Matrix.
        Matrix m = new Matrix(1, 1, 0, 1, 0, 0);
        m.TransformPoints(pointsA);

        // Transform pointsB by hand using Elements = { m11, m12, m21, m22, dx, dy }.
        var elements = m.Elements;
        var m11 = elements[0];
        var m12 = elements[1];
        var m21 = elements[2];
        var m22 = elements[3];
        var dx = elements[4];
        var dy = elements[5];
        var pointB = pointsB[0];

        // Standard matrix-times-vector formula: does NOT match TransformPoints.
        var x = m11 * pointB.X + m12 * pointB.Y + dx;
        var y = m21 * pointB.X + m22 * pointB.Y + dy;

        // Correct answer, but m12 and m21 are transposed from what would be
        // the normal matrix-times-vector multiplication.
        var x1 = m11 * pointB.X + m21 * pointB.Y + dx;
        var y1 = m12 * pointB.X + m22 * pointB.Y + dy;
    }
}
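For reference, appending these lines at the end of Main makes the mismatch visible (a sketch; the values I expect from the math above are in the comments):

        Console.WriteLine(pointsA[0]);    // {X=1,Y=2} <- TransformPoints
        Console.WriteLine($"{x}, {y}");   // 2, 1      <- standard (i,j) formula
        Console.WriteLine($"{x1}, {y1}"); // 1, 2      <- transposed formula, matches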
Yes, the correct answer is this:
var x1 = m11*pointB.X + m21*pointB.Y + dx;
var y1 = m12*pointB.X + m22*pointB.Y + dy;
But there's no error. The documentation describes the matrix as 3 rows by 2 columns (3x2):
[m11 m12]
[m21 m22]
[dx dy ]
When multiplied on the left by a vector, which must be 1 row by 3 columns (1x3):
         [m11 m12]
[ax ay 1][m21 m22]
         [dx  dy ]
You get the 1x2 result:
[ax * m11 + ay * m21 + dx , ax * m12 + ay * m22 + dy]
This documentation demonstrates that GDI+ uses row vectors and row-major matrices, rather than the more common mathematical convention of column vectors and column-major matrices.
So in actual fact the matrix represents a 3x3 matrix, so that transforms can be chained together by multiplication. This makes no difference to the result, because in an affine transformation (which is all this class can represent) the final column is always 0 0 1:
         [m11 m12 0]
[ax ay 1][m21 m22 0] = [ax * m11 + ay * m21 + dx , ax * m12 + ay * m22 + dy , 1]
         [dx  dy  1]
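If you want to verify this in code, here is a minimal sketch (RowVectorTransform is a hypothetical helper written for illustration, not part of GDI+) that applies the row-vector convention above and agrees with TransformPoints:

using System;
using System.Drawing;
using System.Drawing.Drawing2D;

class RowVectorDemo
{
    // Hypothetical helper: compute [x y 1] * M under the row-vector convention.
    static PointF RowVectorTransform(Matrix m, PointF p)
    {
        var e = m.Elements; // { m11, m12, m21, m22, dx, dy }
        return new PointF(
            p.X * e[0] + p.Y * e[2] + e[4],  // x*m11 + y*m21 + dx
            p.X * e[1] + p.Y * e[3] + e[5]); // x*m12 + y*m22 + dy
    }

    static void Main()
    {
        var m = new Matrix(1, 1, 0, 1, 0, 0);
        var pts = new[] { new PointF(1, 1) };
        m.TransformPoints(pts);
        Console.WriteLine(pts[0]);                                  // {X=1, Y=2}
        Console.WriteLine(RowVectorTransform(m, new PointF(1, 1))); // {X=1, Y=2}
    }
}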
If I have a point (P) and I want to Translate (T) and then Scale (S), I find this notation more readable:
transformedPoint = P*T*S
Rather than:
transformedPoint = S*T*P
And so code that uses this order will also be more readable.
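This left-to-right order is also how GDI+'s MatrixOrder.Append flag composes transforms, so the code reads in the order the transforms are applied. A small sketch (the translation and scale values are arbitrary):

using System;
using System.Drawing;
using System.Drawing.Drawing2D;

class ComposeDemo
{
    static void Main()
    {
        // Build P*T*S reading left to right: translate first, then scale.
        var m = new Matrix();                   // identity
        m.Translate(10, 0, MatrixOrder.Append); // T: applied first
        m.Scale(2, 2, MatrixOrder.Append);      // S: applied second

        var pts = new[] { new PointF(1, 1) };
        m.TransformPoints(pts);
        Console.WriteLine(pts[0]); // {X=22, Y=2}: (1+10)*2 = 22, 1*2 = 2
    }
}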