
Python - OpenCV: calibrateCamera returning camera matrix, but it is nonsensical


I am trying to remove barrel and other distortion effects from images, specifically so I can apply the correction to coordinates. I am using OpenCV with a chessboard, and I have managed to get accurate corners. However, when I use these corners I find that they do not return what I expect.

Image: the original image, calibrationImage.bmp

import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('calibrationImage.bmp')
corners = np.array([[[136.58304, 412.18762]],

       [[200.73372, 424.21613]],

       [[263.41006, 431.9114 ]],

       [[334.     , 437.     ]],

       [[405.     , 436.     ]],

       [[470.78467, 428.75998]],

       [[530.23724, 420.48328]],

       [[152.61916, 358.20523]],

       [[210.78505, 368.59222]],

       [[270.52335, 371.8065 ]],

       [[335.67096, 373.8901 ]],

       [[400.88788, 373.57782]],

       [[462.57724, 371.10867]],

       [[517.49524, 366.26855]],

       [[168.55394, 310.78973]],

       [[228.     , 321.     ]],

       [[277.43225, 319.48358]],

       [[336.7225 , 320.90256]],

       [[396.0194 , 321.13016]],

       [[452.47888, 320.15744]],

       [[503.7933 , 318.09518]],

       [[183.49014, 270.53726]],

       [[231.8806 , 273.96835]],

       [[283.5549 , 275.63623]],

       [[337.41528, 276.47876]],

       [[391.28375, 276.99832]],

       [[442.8828 , 277.16376]],

       [[490.67108, 276.5398 ]],

       [[196.86388, 236.63716]],

       [[241.56177, 238.3809 ]],

       [[288.93515, 239.1635 ]],

       [[337.9244 , 239.63228]],

       [[386.90695, 240.31389]],

       [[434.21832, 241.17548]],

       [[478.62744, 241.05113]],

       [[208.81688, 208.1463 ]],

       [[250.11485, 208.97067]],

       [[293.5653 , 208.92986]],

       [[338.2928 , 209.22559]],

       [[382.94626, 209.92468]],

       [[426.362  , 211.03403]],

       [[467.76523, 210.82764]],

       [[219.20187, 184.123  ]],

       [[257.52582, 184.09167]],

       [[297.4925 , 183.80571]],

       [[338.5172 , 183.91574]],

       [[379.46725, 184.64926]],

       [[419.45697, 185.74242]],

       [[457.93872, 185.08537]],

       [[228.31578, 163.70671]],

       [[263.87802, 163.11162]],

       [[300.8062 , 162.71281]],

       [[338.686  , 162.79945]],

       [[376.43716, 163.36848]],

       [[413.39032, 164.23444]],

       [[449.21677, 163.16547]]], dtype=float32)

w, h = 7, 8
objp = np.zeros((h*w, 3), np.float32)
objp[:, :2] = np.mgrid[0:w, 0:h].T.reshape(-1, 2)
img_points = []
obj_points = []
img_points.append(corners)
obj_points.append(objp)
image_size = (img.shape[1], img.shape[0])

ret, mtx, dist, rvecs, tvecs = (obj_points, img_points, image_size, None, None)

updatedCorners = cv2.undistortPoints(corners, mtx, dist, P=mtx)
updatedCorners = updatedCorners.reshape([56,2])

ret = True
checkers = cv2.drawChessboardCorners(img, (7, 8), corners, ret)

fig, (img_ax) = plt.subplots(1, 1, figsize=(12,12))
img_ax.imshow(checkers)
img_ax.scatter(updatedCorners.T[0], updatedCorners.T[1], c='orange')

I was trying to see how good the calibration was by plotting the corners after running them through the undistort function. However, when I plot them they are all over the place:

Image: strange undistorted points in orange

Does anyone know what has gone wrong?


Solution

  • cv2.undistortPoints expects the camera matrix and distortion coefficients retrieved from calibration, but you are supplying the wrong information to it. The assignment ret, mtx, dist, rvecs, tvecs = (obj_points, img_points, image_size, None, None) never actually calls cv2.calibrateCamera, so mtx and dist end up holding your image points and the image size instead of calibration results. You can also remove P: you would only specify it if you intend to map the undistorted points to another coordinate system. Since you are just double-checking what the undistorted points look like, specifying P as the same camera matrix you found earlier would simply map them back to where you originally found the points, which is not what you're after.

    Here is a minimal working example:

    import cv2
    import numpy as np

    # 3 x 3 camera matrix, as returned by cv2.calibrateCamera
    camera_matrix = np.array([[1300., 0., 600.], [0., 1300., 480.], [0., 0., 1.]], dtype=np.float32)

    # distortion coefficients (k1, k2, p1, p2, k3) as a 1D array
    dist_coeffs = np.array([-2.4, 0.95, -0.0004, 0.00089, 0.], dtype=np.float32)

    # points to undistort: an N x 1 x 2 float32 array
    test = np.zeros((10, 1, 2), dtype=np.float32)
    xy_undistorted = cv2.undistortPoints(test, camera_matrix, dist_coeffs)

    print(xy_undistorted)
    

    The camera matrix is a 3 x 3 matrix retrieved from calibration, and the distortion coefficients are a 1D NumPy array (k1, k2, p1, p2, k3). The points passed to cv2.undistortPoints must be a 3D array with a singleton second dimension, i.e. N x 1 x 2. Ensure that every variable is of type np.float32, then run the function.
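    Applied to the code in the question, the other missing piece is the actual call to cv2.calibrateCamera: the tuple on the right-hand side of your assignment is never passed to it, so mtx and dist never become real calibration results. A rough sketch of the corrected pipeline, assuming corners, objp and image_size are defined exactly as in your question, might look like this:

    obj_points = [objp]        # one view, so one set of object points
    img_points = [corners]     # and one set of detected corners

    # the calibration call that was missing from the original code
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)

    # undistort the detected corners; without P the output is in
    # normalized image coordinates rather than pixel coordinates
    undistorted = cv2.undistortPoints(corners, mtx, dist).reshape(-1, 2)

    print(mtx)   # calibrated 3 x 3 camera matrix
    print(dist)  # distortion coefficients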

    However, I am skeptical that you will obtain decent results from a single view. You usually need more images if you are calibrating a camera subject to large distortion. Nevertheless, the above is what you need to get the method working; a rough sketch of such a multi-view calibration loop follows below.
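    For completeness, here is a minimal sketch of a multi-view calibration, assuming a set of chessboard images named calib_*.bmp with 7 x 8 inner corners (the file pattern and pattern size are placeholders, not from your question):

    import glob

    import cv2
    import numpy as np

    pattern_size = (7, 8)                      # inner corners per row and column (placeholder)
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

    for fname in glob.glob('calib_*.bmp'):     # placeholder file pattern
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            continue
        # refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

    image_size = gray.shape[::-1]              # (width, height)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print('RMS reprojection error:', ret)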