I've been working a lot with OpenPnP recently (which is great!), and have had my interest piqued enough to try and really understand its workings.
I am trying to produce a basic transformation of a PCB with 3 fiducials and 3 features, in C# with OpenCvSharp.
I've modelled the geometry in some CAD software as a sanity check; here is my 'PCB'.
F1-3 are the 3 fiducials, with F1 being 0,0 of the PCB coordinate system. P1-3 are the 3 'features' on the PCB that I'm interested in.
Here is that 'PCB' superimposed on a 'machine', where the green dot is the machine's 0,0. So from this I already know what the 3 features' locations should be relative to the machine's 0,0.
I think I need to do an 'affine transform', and over a few hours I have pulled this together.
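For context, an affine transform maps (x, y) to (a·x + b·y + tx, c·x + d·y + ty); that's six unknowns in total, which is why exactly three non-collinear fiducial pairs pin it down. Applying such a 2x3 matrix by hand can be sketched like this (in Python purely for illustration; nothing here is OpenCV API):

```python
def apply_affine(m, x, y):
    """Apply a 2x3 affine matrix [[a, b, tx], [c, d, ty]] to one point."""
    (a, b, tx), (c, d, ty) = m
    return (a * x + b * y + tx, c * x + d * y + ty)

# A pure translation by (190.62, -83.7) maps the PCB origin onto fiducial F1
print(apply_affine([(1, 0, 190.62), (0, 1, -83.7)], 0, 0))  # (190.62, -83.7)
```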
using System;
using OpenCvSharp;

namespace affinetransform
{
    class Program
    {
        static void Main(string[] args)
        {
            // Define fiducials in PCB coordinates
            Point2f[] pcbFiducials = new Point2f[]
            {
                new Point2f(0, 0),
                new Point2f(100, 0),
                new Point2f(100, -80)
            };

            // Corresponding fiducial points as measured in machine coordinates
            Point2f[] cameraFiducials = new Point2f[]
            {
                new Point2f(190.62f, -83.7f),
                new Point2f(290.24f, -74.99f),
                new Point2f(297.21f, -154.68f)
            };

            // Compute the affine transformation matrix
            Mat affineTransform = Cv2.GetAffineTransform(InputArray.Create(pcbFiducials), InputArray.Create(cameraFiducials));

            // Define the 3 'features' on the PCB we're interested in, in PCB coordinates
            Point2f[] pcbFeatures = new Point2f[]
            {
                new Point2f(24, -18),
                new Point2f(35, -62),
                new Point2f(74, -28)
            };

            // Convert feature points to a Mat object, because we need to pass Mat type to the Transform method
            Mat pcbFeaturesMat = new Mat(pcbFeatures.Length, // Rows
                                         1,                  // Columns
                                         MatType.CV_32FC2,
                                         pcbFeatures);       // The features on the PCB we want to transform

            for (int i = 0; i < pcbFeaturesMat.Rows; i++)
            {
                for (int j = 0; j < pcbFeaturesMat.Cols; j++)
                {
                    Console.Write($"{pcbFeaturesMat.At<float>(i, j)}\t");
                }
                Console.WriteLine();
            }

            // Transform feature points to machine/camera coordinates
            Mat cameraFeaturesMat = new Mat();
            Cv2.Transform(pcbFeaturesMat, cameraFeaturesMat, affineTransform);

            // Save the transformed points to a CSV file
            SavePointsToCsv(cameraFeaturesMat, @"C:\users\user\desktop\cameraFeatures.csv");
        }

        static void SavePointsToCsv(Mat points, string filename)
        {
            using (var writer = new System.IO.StreamWriter(filename))
            {
                writer.WriteLine("X,Y");
                for (int i = 0; i < points.Rows; i++)
                {
                    float x = points.At<float>(i, 0);
                    float y = points.At<float>(i, 1);
                    writer.WriteLine($"{x},{y}");
                }
            }
            Console.WriteLine($"Saved transformed points to {filename}");
        }
    }
}
And this is the transformed data it outputs, which is not completely wrong. As you can see from my images, 216.09, 230.89 and 266.78 are the 3 correct transformed X values, but they're scattered into the wrong positions, with the remaining values just being those same X values repeated and one tiny garbage number.
X,Y
216.09705,230.88875
230.88875,266.7783
266.7783,1.11E-43
The affine transform matrix itself computed by this code is:
Transformation Matrix:
0.99619995117187, -0.08712501525878906, 190.6199951171875
0.08709999084472657, 0.9961249351501464, -83.69999694824219
Completely stumped, and hoping some smart OpenCV folk can point me in the right direction!
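The matrix itself can be cross-checked without OpenCV at all: solving the three fiducial pairs by hand gives the same numbers, and applying the result to the features reproduces the 216.09 / 230.89 / 266.78 X values noted as correct above. A minimal NumPy sketch of that check (Python only because it keeps the arithmetic short):

```python
import numpy as np

# Fiducials in PCB coordinates and as measured in machine coordinates (from the post)
pcb = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, -80.0]])
machine = np.array([[190.62, -83.7], [290.24, -74.99], [297.21, -154.68]])

# GetAffineTransform solves src_h @ M.T = dst for the 2x3 matrix M,
# where src_h holds the source points in homogeneous form [x, y, 1]
src_h = np.hstack([pcb, np.ones((3, 1))])
M = np.linalg.solve(src_h, machine).T

# Apply M to the three feature points, homogeneous form again
features = np.array([[24.0, -18.0], [35.0, -62.0], [74.0, -28.0]])
out = np.hstack([features, np.ones((3, 1))]) @ M.T

print(M)    # matches the matrix the C# code prints
print(out)  # X column: 216.097, 230.889, 266.778
```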
Ok, got it. Looks like the transform was calculated correctly all along, but I was reading the result Mat incorrectly. cameraFeaturesMat is an Nx1, two-channel (CV_32FC2) Mat, and At<float>(i, 1) doesn't address the second channel of point i: the second index steps by a whole two-channel element, so it lands on the next point's X (and, on the last row, past the end of the buffer, which is where the 1.11E-43 garbage came from).
After computing the transform, I now convert the result back to Point2f[] via At<Point2f>, which is much easier to work with, then print the points out.
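The failure mode can be reproduced outside OpenCV entirely. Assuming the interleaved x0, y0, x1, y1, … layout a CV_32FC2 Mat uses, and that At<float>(i, j) steps by the full element size for both indices (which the shifted CSV output bears out), the same addressing mistake on a plain Python list gives exactly the broken rows above (X values from the CSV; Y values computed from the matrix):

```python
# The correctly transformed feature points, laid out interleaved
# the way an Nx1 CV_32FC2 Mat stores them: x0, y0, x1, y1, x2, y2
flat = [216.09705, -99.53985, 230.88875, -142.41125, 266.7783, -105.1461]

stride = 2  # one two-channel element = two floats

rows = []
for i in range(3):
    x = flat[stride * i]                    # point i's X (correct)
    j = stride * i + stride                 # faulty read: steps a whole element, not one channel
    y = flat[j] if j < len(flat) else None  # for i == 2 this runs off the end of the buffer
    rows.append((x, y))

print(rows)  # [(216.09705, 230.88875), (230.88875, 266.7783), (266.7783, None)]
```

In the real program the out-of-bounds slot held whatever bytes followed the buffer, hence the 1.11E-43 denormal instead of None.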
using System;
using OpenCvSharp;

namespace AffineTransform
{
    class Program
    {
        static void Main(string[] args)
        {
            // Define the locations of the fiducials on the PCB, in PCB coordinates
            Point2f[] pcbFiducials = new Point2f[]
            {
                new Point2f(0, 0),
                new Point2f(100, 0),
                new Point2f(100, -80)
            };

            // The same fiducial points as measured in machine coordinates,
            // i.e. the position the PCB sits in the machine
            Point2f[] cameraFiducials = new Point2f[]
            {
                new Point2f(190.62f, -83.7f),
                new Point2f(290.24f, -74.99f),
                new Point2f(297.21f, -154.68f)
            };

            // Compute the affine transformation matrix
            Mat affineTransform = Cv2.GetAffineTransform(pcbFiducials, cameraFiducials);

            // Print the affine transformation matrix to check its correctness
            Console.WriteLine("Affine Transformation Matrix:");
            for (int i = 0; i < affineTransform.Rows; i++)
            {
                for (int j = 0; j < affineTransform.Cols; j++)
                {
                    Console.Write($"{affineTransform.At<double>(i, j)}\t");
                }
                Console.WriteLine();
            }

            // Define the 3 'features' on the PCB we want transformed to machine coordinates, in PCB coordinates
            Point2f[] pcbFeatures = new Point2f[]
            {
                new Point2f(24, -18),
                new Point2f(35, -62),
                new Point2f(74, -28)
            };

            // Convert feature points to a Mat, as Transform() needs Mat inputs
            Mat pcbFeaturesMat = new Mat(pcbFeatures.Length, 1, MatType.CV_32FC2, pcbFeatures);
            Mat cameraFeaturesMat = new Mat();

            // Transform feature points to machine/camera coordinates
            Cv2.Transform(pcbFeaturesMat, cameraFeaturesMat, affineTransform);

            // Convert the result back to Point2f[], which is much easier to actually use.
            // At<Point2f>(i) reads a whole two-channel element at once, so the X/Y pairing stays correct.
            Point2f[] cameraFeatures = new Point2f[cameraFeaturesMat.Rows];
            for (int i = 0; i < cameraFeaturesMat.Rows; i++)
            {
                cameraFeatures[i] = cameraFeaturesMat.At<Point2f>(i);
            }

            // Print the transformed points
            Console.WriteLine("Transformed Points:");
            foreach (var point in cameraFeatures)
            {
                Console.WriteLine($"X = {point.X}, Y = {point.Y}");
            }

            // Save the transformed points to a CSV file
            SavePointsToCsv(cameraFeatures, @"C:\users\user\desktop\cameraFeatures.csv");
        }

        static void SavePointsToCsv(Point2f[] points, string filename)
        {
            using (var writer = new System.IO.StreamWriter(filename))
            {
                writer.WriteLine("X,Y");
                foreach (var point in points)
                {
                    writer.WriteLine($"{point.X},{point.Y}");
                }
            }
            Console.WriteLine($"Saved transformed points to {filename}");
        }
    }
}
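One thing worth noting for anyone following along: the reverse mapping, machine coordinates back to PCB coordinates, is just the inverse of the same matrix. OpenCV exposes this as Cv2.InvertAffineTransform; by hand it's promote-to-3x3, invert, drop the bottom row. A NumPy sketch of the round trip (matrix values are the exact ones the fiducials above determine; the C# printout shows the same numbers with float rounding):

```python
import numpy as np

# The 2x3 affine matrix the three fiducial pairs determine
M = np.array([[0.9962, -0.087125, 190.62],
              [0.0871,  0.996125, -83.70]])

# Promote to 3x3 homogeneous form, invert, keep the top two rows
M3 = np.vstack([M, [0.0, 0.0, 1.0]])
M_inv = np.linalg.inv(M3)[:2]

# Round-trip the first feature from machine coordinates back to PCB coordinates
machine_pt = np.array([216.09705, -99.53985, 1.0])
pcb_pt = M_inv @ machine_pt
print(pcb_pt)  # ~ (24, -18), the feature's PCB coordinates
```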