I have an image of an image, where the inner image is inclined and has its own coordinate table (visible on its edges):
Now I need to convert a point of the actual image to an (x cm, y cm) point on the inclined image. Say, I want to know what (x cm, y cm) coordinates the bottom-left corner (0, img.height()) of the actual image has; that is, the coordinates it would have if the inner image's coordinate table were extrapolated.
I already know:

- the tilt angle a of the inclined image
- two points p1 and p2, selected manually from the inclined image
- the cm coordinates p1cm and p2cm for these two points

I can get the scale of the image by comparing the two points and taking the ratio between their actual (pixel) distance and their cm distance:
double dCmX = p1cm.x() - p2cm.x();
double dCmY = p1cm.y() - p2cm.y();
double dCm = sqrt(pow(dCmX, 2) + pow(dCmY, 2));
double dPointX = p1.x() - p2.x();
double dPointY = p1.y() - p2.y();
double dPoint = sqrt(pow(dPointX, 2) + pow(dPointY, 2));
double scale = dPoint / dCm;
But I have no idea how to get the cm coordinates of the bottom-left corner.
This kind of transformation (using only scale and tilt angle) is an affine transformation. You might want to use the OpenCV library for simplicity; then the code is very simple:
double scale = dPoint / dCm;
Point2f center(img.cols / 2.0f, img.rows / 2.0f);
Mat warpMat = getRotationMatrix2D(center, angle, scale);

std::vector<Point2f> src = { Point2f(src_x, src_y) };
std::vector<Point2f> dst;
transform(src, dst, warpMat);
Then you will have your result in dst. You can of course put more points into src and have transform convert them all at once.
See also the OpenCV tutorial on affine transformations for more information.
If you don't want to use OpenCV, you can still take a look at the documentation pages of getRotationMatrix2D and warpAffine and implement it yourself; it is quite simple. Build the 2x3 matrix (warpMat):
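For reference, this is the matrix that getRotationMatrix2D builds (per the OpenCV documentation), with the angle given in degrees and positive values rotating counter-clockwise:

```
warpMat = [  alpha   beta    (1 - alpha) * center.x - beta * center.y ]
          [ -beta    alpha   beta * center.x + (1 - alpha) * center.y ]

where  alpha = scale * cos(angle),  beta = scale * sin(angle)
```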
And then, for each point (x, y), build the homogeneous vector [x, y, 1] and multiply warpMat by it; the two entries of the result are the transformed point (x', y').