Tags: c++, opencv, transformation, homography, ransac

Using estimateRigidTransform instead of findHomography


The example in the link below uses findHomography to get the transformation between two sets of points. I want to limit the degrees of freedom used in the transformation, so I want to replace findHomography with estimateRigidTransform.

http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography

Below I use estimateRigidTransform to get the transformation between the object and scene points. objPoints and scePoints are represented by vector <Point2f>.

Mat H = estimateRigidTransform(objPoints, scePoints, false);

Following the method used in the tutorial above, I want to transform the corner values using the transformation H. The tutorial uses perspectiveTransform with the 3x3 matrix returned by findHomography. estimateRigidTransform only returns a 2x3 matrix, so that method cannot be used directly.

How would I transform the corner values, represented as vector <Point2f>, with this 2x3 matrix? I am just looking to perform the same functions as in the tutorial, but with fewer degrees of freedom for the transformation. I have looked at other methods such as warpAffine and getPerspectiveTransform as well, but so far have not found a solution.
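
For reference, a minimal sketch of applying a 2x3 matrix directly to a vector of points with cv::transform, OpenCV's function for matrix-transforming point arrays (the corner coordinates and matrix values below are purely illustrative):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // four object corners, illustrative values only
    std::vector<cv::Point2f> objCorners;
    objCorners.push_back(cv::Point2f(0, 0));
    objCorners.push_back(cv::Point2f(100, 0));
    objCorners.push_back(cv::Point2f(100, 100));
    objCorners.push_back(cv::Point2f(0, 100));

    // example 2x3 affine matrix (identity rotation, translation by (10, 20)),
    // standing in for the output of estimateRigidTransform
    cv::Mat H = (cv::Mat_<double>(2, 3) << 1, 0, 10,
                                           0, 1, 20);

    // cv::transform applies the 2x3 matrix to every point, no 3x3 padding needed
    std::vector<cv::Point2f> sceCorners;
    cv::transform(objCorners, sceCorners, H);

    for (size_t i = 0; i < sceCorners.size(); ++i)
        std::cout << sceCorners[i] << std::endl;
    return 0;
}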

EDIT:

I have tried the suggestion from David Nilosek. Below I am adding the extra row to the matrix.

Mat row = (Mat_<double>(1,3) << 0, 0, 1);
H.push_back(row);

However, this gives the following error when using perspectiveTransform.

OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp, line 1486
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp:1486: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create

ChronoTrigger suggested using warpAffine. I am calling the warpAffine method below; the size of 1 x 4 is the size of objCorners and sceCorners.

warpAffine(objCorners, sceCorners, H, Size(1,4));

This gives the error below, which suggests the wrong type. objCorners and sceCorners are vector <Point2f> representing the 4 corners. I believe warpAffine expects Mat images rather than point vectors, which may explain the error.

OpenCV Error: Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F) && M0.rows == 2 && M0.cols == 3) in warpAffine, file /Users/cgray/Downloads/opencv-2.4.6/modules/imgproc/src/imgwarp.cpp, line 3280
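
For comparison, a minimal sketch of how warpAffine is normally called: it warps whole images (cv::Mat), not point vectors, and expects the 2x3 matrix as returned by estimateRigidTransform, without any extra row appended (the file names below are just placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    // placeholder input image
    cv::Mat objImage = cv::imread("objImage.png");
    if (objImage.empty())
        return 1;

    // example 2x3 affine matrix standing in for the estimateRigidTransform result
    cv::Mat H = (cv::Mat_<double>(2, 3) << 1, 0, 15,
                                           0, 1, 30);

    cv::Mat warped;
    cv::warpAffine(objImage, warped, H, objImage.size());  // dst same size as src
    cv::imwrite("warped.png", warped);
    return 0;
}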

Solution

  • I've done it this way in the past:

    // inside a loop in the original code, hence the 'continue' below
    cv::Mat R = cv::estimateRigidTransform(p1, p2, false);

    if (R.cols == 0)   // estimation failed: no transform could be found
    {
        continue;
    }

    // extend the 2x3 rigid transform to a 3x3 matrix by appending the row [0 0 1]
    cv::Mat H = cv::Mat(3, 3, R.type());
    H.at<double>(0,0) = R.at<double>(0,0);
    H.at<double>(0,1) = R.at<double>(0,1);
    H.at<double>(0,2) = R.at<double>(0,2);

    H.at<double>(1,0) = R.at<double>(1,0);
    H.at<double>(1,1) = R.at<double>(1,1);
    H.at<double>(1,2) = R.at<double>(1,2);

    H.at<double>(2,0) = 0.0;
    H.at<double>(2,1) = 0.0;
    H.at<double>(2,2) = 1.0;

    // warp the whole image with the extended matrix
    cv::Mat warped;
    cv::warpPerspective(img1, warped, H, img1.size());
    

    which is the same as what David Nilosek suggested: add a [0 0 1] row at the end of the matrix.

    This code warps the IMAGES with a rigid transformation.

    If you want to warp/transform the points, you must use the perspectiveTransform function with a 3x3 matrix ( http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=perspectivetransform#perspectivetransform )

    tutorial here:

    http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html

    or you can do it manually by looping over your vector and

    cv::Point2f result;
    result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
    result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
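
    A minimal sketch of that loop, reusing R and p1 from the snippet above (untested, as the remark below says):

    std::vector<cv::Point2f> transformed;
    transformed.reserve(p1.size());
    for (size_t i = 0; i < p1.size(); ++i)
    {
        const cv::Point2f& point = p1[i];
        cv::Point2f result;
        result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
        result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
        transformed.push_back(result);
    }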
    

    hope that helps.

    Remark: I didn't test the manual code, but it should work. No perspectiveTransform conversion is needed there!

    edit: this is the full (tested) code:

    // points
    std::vector<cv::Point2f> p1;
    p1.push_back(cv::Point2f(0,0));
    p1.push_back(cv::Point2f(1,0));
    p1.push_back(cv::Point2f(0,1));
    
    // simple translation from p1 for testing:
    std::vector<cv::Point2f> p2;
    p2.push_back(cv::Point2f(1,1));
    p2.push_back(cv::Point2f(2,1));
    p2.push_back(cv::Point2f(1,2));
    
    cv::Mat R = cv::estimateRigidTransform(p1,p2,false);
    
    // extend rigid transformation to use perspectiveTransform:
    cv::Mat H = cv::Mat(3,3,R.type());
    H.at<double>(0,0) = R.at<double>(0,0);
    H.at<double>(0,1) = R.at<double>(0,1);
    H.at<double>(0,2) = R.at<double>(0,2);
    
    H.at<double>(1,0) = R.at<double>(1,0);
    H.at<double>(1,1) = R.at<double>(1,1);
    H.at<double>(1,2) = R.at<double>(1,2);
    
    H.at<double>(2,0) = 0.0;
    H.at<double>(2,1) = 0.0;
    H.at<double>(2,2) = 1.0;
    
    // compute perspectiveTransform on p1
    std::vector<cv::Point2f> result;
    cv::perspectiveTransform(p1,result,H);
    
    for(unsigned int i=0; i<result.size(); ++i)
        std::cout << result[i] << std::endl;
    

    which gives output as expected:

    [1, 1]
    [2, 1]
    [1, 2]