Tags: c++, opencv, camera-calibration, aruco

Bad axis calculation with ArUco markers


I'm looking at OpenCV examples with ArUco markers, trying to estimate their pose.

It detects the markers and draws their outlines correctly, but the orientation is wrong: the Z axis always points toward the upper-left corner of the image.

This is my code:

    float markerLength = 0.2f;

    // Set coordinate system
    cv::Mat objPoints(4, 1, CV_32FC3);
    objPoints.ptr<cv::Vec3f>(0)[0] = cv::Vec3f(-markerLength/2.f,  markerLength/2.f, 0);
    objPoints.ptr<cv::Vec3f>(0)[1] = cv::Vec3f( markerLength/2.f,  markerLength/2.f, 0);
    objPoints.ptr<cv::Vec3f>(0)[2] = cv::Vec3f( markerLength/2.f, -markerLength/2.f, 0);
    objPoints.ptr<cv::Vec3f>(0)[3] = cv::Vec3f(-markerLength/2.f, -markerLength/2.f, 0);

    cv::Mat cameraMatrix(3,3,cv::DataType<double>::type);
    cv::setIdentity(cameraMatrix);
    cv::Mat distCoeffs(4,1,cv::DataType<double>::type);
    distCoeffs.at<double>(0) = 0;
    distCoeffs.at<double>(1) = 0;
    distCoeffs.at<double>(2) = 0;
    distCoeffs.at<double>(3) = 0;

    while (m_videoCap.isOpened()) {
        m_videoCap >> m_frame;

        if (!m_frame.empty()) {
            // Draw marker centers
            cv::Mat outputImage = m_frame.clone();

            std::vector<int> markerIds;
            std::vector<std::vector<cv::Point2f>> markerCorners, rejectedCandidates;
            cv::Ptr<cv::aruco::DetectorParameters> parameters = cv::aruco::DetectorParameters::create();
            cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_250);
            cv::aruco::detectMarkers(m_frame, dictionary, markerCorners, markerIds, parameters, rejectedCandidates);
            cv::aruco::drawDetectedMarkers(outputImage, markerCorners);

            int nMarkers = markerCorners.size();
            std::vector<cv::Vec3d> rvecs(nMarkers), tvecs(nMarkers);

            for (int i = 0; i < nMarkers; i++) {
                auto& corners = markerCorners[i];

                cv::solvePnP(objPoints, corners, cameraMatrix, distCoeffs, rvecs.at(i), tvecs.at(i));
                cv::drawFrameAxes(outputImage, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 0.1, 2);
            }

            m_Pixmap = cvMatToQPixmap(outputImage);
            emit newPixmapCaptured();
        }
    }

Does anyone know what I'm doing wrong?


EDIT: I've changed the camera matrix initialization to this one, as Christoph Rackwitz suggested:

    // f[px] = x[px] * z[m] / x[m]
    float focalLen = 950 * 1.3f / 0.45f;
    cv::Matx33f cameraMatrix(focalLen, 0.0f,     (1280-1) / 2.0f,
                             0.0f,     focalLen, (780-1) / 2.0f,
                             0.0f,     0.0f,     1.0f);

And now it works fine.

Thanks for your help.


Solution

  • cv::Mat cameraMatrix(3,3,cv::DataType<double>::type);
    cv::setIdentity(cameraMatrix);
    

    This is insufficient. The camera matrix must contain a sensible focal length as well as optical center.

    A proper camera matrix looks like

    [[  f,  0, cx],
     [  0,  f, cy],
     [  0,  0,  1]]
    

    You can get that entire matrix (and distortion coefficients) from calibration, but that's hard.

    You can also just calculate those values. That'll be close enough.

    • Optical center: cx = (width-1) / 2 and similarly for cy.

    • Focal length

      1. Take a picture of some easily measured object, like... an aruco marker, or a yard stick
      2. Measure its physical distance z[m] to the camera and the physical length x[m] of it
      3. Measure its length in pixels x[px]
      4. Calculate f[px] = x[px] * z[m] / x[m]

    You can forget about distortion coefficients for now. Set them all to 0. Those will be relevant if your camera has noticeable pincushion or barrel distortion from its lens.


    You can use Mat::zeros and Mat::eye to initialize your matrices.

    You can generate a Mat from literal element values with predefined fixed-size matrix types like Matx33f:

    Matx33f K(1, 2, 3,
              4, 5, 6,
              7, 8, 9);
    

    Or using Mat_:

    Mat K = (Mat_<float>(3,3) <<
        1, 2, 3,
        4, 5, 6,
        7, 8, 9);
    

    It looks like estimatePoseSingleMarkers got deprecated. That must have happened with the v4.7 release or maybe the v4.6 release already.

    Docs recommend using solvePnP.

    The advantage of that is: you get to decide the marker's coordinate system, i.e. where the origin lies (center or corner) and which way the axes point.

    Downside: it's a little inconvenient to be expected to generate the object points.


    OpenCV's aruco module is still kind of a mess. There's an enum called PatternPositionType (used in EstimateParameters). They use the terms "clockwise" and "counter-clockwise", while assuming that's relative to a coordinate system with Z going into the surface of the marker. Better terms would have been "positive" and "negative" rotation around the Z axis.