I tried to run stitching with OpenCV 3.0; the result is good, but it also leaves a black blank area on the right.
for (size_t i = 0; i < best_matches.size(); i++)
{
    //-- Get the keypoints from the good matches
    FirstImageMatchPT.push_back(keypoints1[best_matches[i].queryIdx].pt);
    SecondImageMatchPT.push_back(keypoints2[best_matches[i].trainIdx].pt);
}

vector<unsigned char> match_mask;
Mat MatchedImage = findHomography(SecondImageMatchPT, FirstImageMatchPT, CV_RANSAC);

cv::Mat result = img_Right.clone();
warpPerspective(img_Right, result, MatchedImage, cv::Size(img_Left.cols + img_Right.cols, img_Left.rows));
cv::Mat half(result, cv::Rect(0, 0, img_Left.cols, img_Left.rows));
img_Left.copyTo(half);
The first argument of findHomography is the matched points of the second image (I mean, the right image) and the second argument is the matched points of the first (the left) image.
The reason I swapped the arguments is that if I run the code below instead, it crops the left image and shows me only the matched area of the left image plus the right image (and with an even bigger blank area).
for (size_t i = 0; i < best_matches.size(); i++)
{
    //-- Get the keypoints from the good matches
    FirstImageMatchPT.push_back(keypoints1[best_matches[i].queryIdx].pt);
    SecondImageMatchPT.push_back(keypoints2[best_matches[i].trainIdx].pt);
}

vector<unsigned char> match_mask;
Mat MatchedImage = findHomography(FirstImageMatchPT, SecondImageMatchPT, CV_RANSAC);

cv::Mat result = img_Left.clone();
warpPerspective(img_Left, result, MatchedImage, cv::Size(img_Left.cols + img_Right.cols, img_Left.rows));
cv::Mat half(result, cv::Rect(0, 0, img_Right.cols, img_Right.rows));
img_Right.copyTo(half);
Could you tell me how to set a correct RoI for this? And how can I cut out that blank area automatically?
You get the black region in the final image because the result matrix is essentially a larger canvas than your stitching result can fill. You can solve this problem by defining your canvas to be exactly the size your warped image(s) can fill.
The homography matrix defines a planar projective transformation. With this matrix you can project one plane (right image in your case) onto another plane (the left image). Now, you can use the same matrix to predict where the four corners of your right image would project to after applying this planar projective transformation.
You are computing the homography between the two images in this line.
Mat MatchedImage = findHomography(SecondImageMatchPT, FirstImageMatchPT, CV_RANSAC);
You can use the same homography matrix (3x3) stored in MatchedImage to estimate where the four corners of your second image would project w.r.t. the first image. The four corners of your right image are as follows.
topLeft = {0.0, 0.0}, topRight = {W, 0.0},
bottomLeft = {0.0, H}, bottomRight = {W, H}
In homogeneous coordinates these would be,
topLeftH = {0.0, 0.0, 1.0}, topRightH = {W, 0.0, 1.0},
bottomLeftH = {0.0, H, 1.0}, bottomRightH = {W, H, 1.0}
You can compute the projected coordinates of these corners as follows,
projTopLeft = HomographyMatrix . topLeftH
projTopRight = HomographyMatrix . topRightH ...
and then divide each result by its third (homogeneous) component to recover pixel coordinates.
This can be done with OpenCV's perspectiveTransform, which performs the homogeneous division for you,
std::vector<Point2f> imageCorners(4);
imageCorners[0] = Point2f(0, 0);
imageCorners[1] = Point2f(img_Right.cols, 0);
imageCorners[2] = Point2f(img_Right.cols, img_Right.rows);
imageCorners[3] = Point2f(0, img_Right.rows);

std::vector<Point2f> projectedCorners(4);
perspectiveTransform(imageCorners, projectedCorners, MatchedImage);
Once you find the projected corners, you can compute the size of the final canvas using the new coordinates.
In your code these lines should be changed,
cv::Mat result;
result = img_Right.clone();
to
cv::Mat result(cv::Size(COMPUTED_SIZE_AS_ABOVE), img_Right.type());