Tags: c#, image-processing, computer-vision, camera-calibration, distortion

Barrel Distortion - Correcting image when expected/received control points are known


I'm working with a mounted industrial CCD camera and I have no information about its parameters. When an image is taken programmatically over WinUSB, the result in Figure 1 is received. Notice that the gaps between the lines differ greatly across the image; this is not the case in the actual scene.

I have a technique for determining the locations of the lines, and I have a list of the pixel coordinates where the lines should occur in an undistorted image.

So I have

  • The pixel coordinates of the lines in the captured (distorted) image
  • The pixel coordinates of where the lines should be

What I need to do

  • Use these correspondences to correct every subsequent image taken with the camera.

However, I am pretty stuck on finding existing techniques that follow this approach. I know many algorithms exist on the internet which make use of either lens parameters or a strength parameter, but neither is suitable in my scenario: the lens parameters aren't known, and adjusting a strength value by eye is not accurate enough.

Any pointers on techniques would be of great help, as I'm currently at a loss.


Figure 1. Distorted image taken by a fixed-location CCD camera



Solution

  • Hum, can you explain why the standard calibration techniques aren't suitable? You don't need to know the "true" camera parameters, but you do need to estimate the linear (actually, affine) part of the distortion, which is almost the same thing.

    Explanation: assuming you are dealing with a plain old spherical-like lens, the first model I'd try for your case is a two-parameter radial distortion of the form:

    X = f * (x - c)
    Y = X * (1 + k1 * |X|^2 + k2 * |X|^4)
    y = c + Y / f
    

    where

    • x = (u, v) are the distorted pixel coordinates;
    • c = (cu, cv) is an unknown center of distortion (i.e. the place in the image with zero distortion, usually on, or very close to, the lens's focal axis);
    • |x - c| is the radial distance of x from c in the distorted image;
    • f is an unknown scale factor;
    • X is the location of the distorted pixel in scaled-centered coordinates;
    • k1 and k2 are unknown distortion coefficients;
    • Y is the undistorted pixel in scaled-centered coordinates;
    • y is the undistorted pixel, located on the same radius c->x as x, at a distance |Y|/f from c.

    So your unknowns are cu, cv, f, k1 and k2. It's starting to look like a camera calibration problem, isn't it?
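
    To make the model concrete, here is a minimal C# sketch of that forward mapping (C#, since that's the question's tag). The names RadialModel and UndistortPoint are mine, purely illustrative:

    using System;

    // A sketch of the two-parameter radial model described above.
    static class RadialModel
    {
        // Maps a distorted pixel (u, v) to its undistorted location, given the
        // center c = (cu, cv), the scale f, and the coefficients k1, k2.
        public static (double u, double v) UndistortPoint(
            double u, double v,
            double cu, double cv,
            double f, double k1, double k2)
        {
            // X = f * (x - c): scaled-centered coordinates of the distorted pixel
            double Xu = f * (u - cu);
            double Xv = f * (v - cv);

            double r2 = Xu * Xu + Xv * Xv;            // |X|^2
            double s  = 1.0 + k1 * r2 + k2 * r2 * r2; // 1 + k1*|X|^2 + k2*|X|^4

            // Y = s * X, then y = c + Y / f
            return (cu + s * Xu / f, cv + s * Xv / f);
        }
    }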

    Except you don't really need to estimate a "true" focal length f, since (I assume) you are not interested in computing rays in 3D space. So you can simplify the problem by fixing f to the value that makes the diameter of your data point distribution equal to, say, 2, so that all the scaled-centered points X have coordinates no larger than 1.0 in absolute value. This helps in two ways: it improves the numerical conditioning of the problem, and it drops the number of unknowns to four.
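
    Continuing the sketch above, that normalization could be a small helper in the same illustrative class ('points' being your detected line coordinates):

    // Pick f so that the farthest point from c lands at scaled radius 1.0,
    // i.e. max |f * (x - c)| == 1, giving the point distribution a diameter of ~2.
    static double ComputeScale((double u, double v)[] points, double cu, double cv)
    {
        double maxR = 0.0;
        foreach (var (u, v) in points)
        {
            double du = u - cu, dv = v - cv;
            maxR = Math.Max(maxR, Math.Sqrt(du * du + dv * dv));
        }
        return 1.0 / maxR;   // all scaled points X now satisfy |X| <= 1
    }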

    You can usually initialize the estimation by using the center of the image for c and zero values for k1 and k2, plug your data into your favorite least-squares optimizer, run it, get the solution for the unknowns, and verify that it makes sense (on additional independent images). Rinse and repeat until you get something satisfactory.
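
    If you don't have a least-squares library at hand, a problem this small can even be handled by a plain Gauss-Newton loop with a finite-difference Jacobian. The following is a rough sketch only, reusing the illustrative UndistortPoint from above and assuming distorted[i] corresponds to expected[i]; anything serious should use a robust solver (e.g. Levenberg-Marquardt):

    using System;

    // A rough Gauss-Newton sketch for the four unknowns (cu, cv, k1, k2),
    // with f held fixed as discussed above. Illustrative only.
    static class DistortionFit
    {
        // Residuals: difference between where the model sends each measured
        // (distorted) point and where that point is expected to be.
        static double[] Residuals(double[] p, double f,
            (double u, double v)[] distorted, (double u, double v)[] expected)
        {
            var r = new double[2 * distorted.Length];
            for (int i = 0; i < distorted.Length; i++)
            {
                var (pu, pv) = RadialModel.UndistortPoint(
                    distorted[i].u, distorted[i].v, p[0], p[1], f, p[2], p[3]);
                r[2 * i]     = pu - expected[i].u;
                r[2 * i + 1] = pv - expected[i].v;
            }
            return r;
        }

        public static double[] Estimate((double u, double v)[] distorted,
                                        (double u, double v)[] expected,
                                        double cu0, double cv0, double f)
        {
            double[] p = { cu0, cv0, 0.0, 0.0 };  // init: image center, zero distortion
            int n = p.Length;
            for (int iter = 0; iter < 50; iter++)
            {
                double[] r = Residuals(p, f, distorted, expected);
                int m = r.Length;

                // Finite-difference Jacobian: J[i, j] = d r_i / d p_j.
                var J = new double[m, n];
                for (int j = 0; j < n; j++)
                {
                    double save = p[j], h = 1e-6 * Math.Max(1.0, Math.Abs(save));
                    p[j] = save + h;
                    double[] rj = Residuals(p, f, distorted, expected);
                    p[j] = save;
                    for (int i = 0; i < m; i++) J[i, j] = (rj[i] - r[i]) / h;
                }

                // Normal equations (J^T J) d = -J^T r as an augmented system.
                var A = new double[n, n + 1];
                for (int a = 0; a < n; a++)
                {
                    for (int b = 0; b < n; b++)
                        for (int i = 0; i < m; i++) A[a, b] += J[i, a] * J[i, b];
                    for (int i = 0; i < m; i++) A[a, n] -= J[i, a] * r[i];
                }

                // Gaussian elimination (no pivoting; tolerable for a 4x4 sketch).
                for (int col = 0; col < n; col++)
                    for (int row = col + 1; row < n; row++)
                    {
                        double t = A[row, col] / A[col, col];
                        for (int k = col; k <= n; k++) A[row, k] -= t * A[col, k];
                    }
                var d = new double[n];
                for (int row = n - 1; row >= 0; row--)
                {
                    double s = A[row, n];
                    for (int k = row + 1; k < n; k++) s -= A[row, k] * d[k];
                    d[row] = s / A[row, row];
                }

                double step = 0.0;
                for (int j = 0; j < n; j++) { p[j] += d[j]; step += d[j] * d[j]; }
                if (step < 1e-16) break;  // converged
            }
            return p;  // { cu, cv, k1, k2 }
        }
    }

    Note that an undamped loop like this can diverge from a poor starting point, which is one more reason to prefer a real solver; with c initialized at the image center it usually behaves.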

    Note that you can enrich the data set used for the estimation by using more than one image, assuming, of course, that the lens parameters are constant.
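
    In code terms that just means pooling the correspondences before fitting, e.g. (hypothetical variables, and assuming using System.Linq for Concat):

    // Pool correspondences from two shots taken with the same lens setup.
    var distorted = distortedA.Concat(distortedB).ToArray();
    var expected  = expectedA.Concat(expectedB).ToArray();
    double f      = ComputeScale(distorted, width / 2.0, height / 2.0);
    double[] p    = DistortionFit.Estimate(distorted, expected,
                                           width / 2.0, height / 2.0, f);
    // p = { cu, cv, k1, k2 }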