Tags: opencv, camera, computer-vision, object-recognition, orb

ORB - object needs to be very close to camera


I have a program that takes a video feed over RTSP and checks for an object. The problem is that the object needs to be about 6" from the camera, whereas with a wired webcam the object can be a few feet away. Both cameras transmit at the same resolution, so what is causing this difference?

Camera transmission specs:

    Resolution: 640 × 480
    FPS: 20 
    Bitrate: 500000
    Focal Length: 2.8mm

EDIT: The algorithm I am using is the OpenCV ORB algorithm but I have also seen this behavior when previously using the Haar classifier method in OpenCV.

Below is the limit at which the webcam can no longer detect the object (approx. 66 pixels). [Image: webcam detection limit]

Below is the limit at which Glass can no longer detect the object (approx. 68 pixels). [Image: Glass detection limit]

Looking at the images, the object size at the detection limit is similar in both, but the actual distance in the webcam image is at least twice as far. That suggests a camera property is causing this issue. If so, which part of the camera is responsible?


Solution

  • As you've recognized yourself, the object sizes are very similar in both images, so the algorithm seems to stop working below a certain object resolution.

    The difference in distance between the two cameras (for the same apparent object size) comes from camera intrinsic parameters such as the focal length (determined by the lens) and the size of the sensor chip.
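
    To see why, a quick sketch of the pinhole model helps: the on-image size of an object in pixels is roughly focal_length_px × object_size / distance, so a camera with a shorter focal length (in pixels) must be closer to render the object at the same pixel size. The focal lengths and object size below are hypothetical, purely for illustration.

```python
# Pinhole-model sketch: apparent pixel size of an object scales with
# focal length (in pixels) and inversely with distance.
# All numbers here are hypothetical, for illustration only.

def projected_size_px(object_size_m, distance_m, focal_length_px):
    """Approximate on-image size of an object under a pinhole model."""
    return focal_length_px * object_size_m / distance_m

webcam_f_px = 800.0  # assumed focal length of the webcam, in pixels
glass_f_px = 400.0   # assumed focal length of the second camera

# Distance at which each camera renders a 5 cm object at ~66 px:
target_px = 66.0
object_m = 0.05
d_webcam = webcam_f_px * object_m / target_px
d_glass = glass_f_px * object_m / target_px
print(round(d_webcam, 3), round(d_glass, 3))  # → 0.606 0.303
```

    With half the focal length (in pixels), the second camera has to be half as far from the object to reach the same 66 px detection threshold, matching the behaviour described in the question.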

    Depending on the method you used to detect the object, you could resize (upscale) the second image, unless this introduces too many interpolation artifacts (which your detection method might not be able to handle).

    Upscaling the image is fine for many detectors that have some minimum object size, which comes directly from the training data or the training window size. Be aware, though, that upscaling can drastically slow down detection, since the detector has to process a larger image.
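
    A minimal sketch of upscaling before detection. In a real OpenCV pipeline you would typically call cv2.resize with INTER_LINEAR or INTER_CUBIC interpolation; here a NumPy nearest-neighbour upscale keeps the example dependency-free, and the toy "image" is just an array of hypothetical values.

```python
import numpy as np

def upscale_nearest(img, factor):
    """Repeat each pixel `factor` times along both axes
    (nearest-neighbour upscaling)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

frame = np.arange(12, dtype=np.uint8).reshape(3, 4)  # toy 3x4 "image"
big = upscale_nearest(frame, 2)                      # 2x in each axis
print(big.shape)  # → (6, 8)
```

    The upscaled frame would then be fed to the detector in place of the original; the detector's minimum object size stays the same, but the object now covers more pixels.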

    If the intrinsic parameters of both cameras are known and the images are already undistorted, you can compute the scale factor between the two images:

    ratioX = fx1/fx2
    ratioY = fy1/fy2
    

    where fx1, fy1 are the focal length values of the first camera, if you want to upscale the second image. You could then crop the upscaled image, centered around the principal point. After that, both image regions should match quite well.

    Hope this helps and good luck.

    Edit: for testing, you could use the cv::undistort function to make an image look as if it had been taken with a different camera matrix.