Tags: python, opencv, image-processing, edge-detection, canny-operator

Should OpenCV Python Canny edge detection be giving me very different results depending on image size?


I am importing an image from a video frame, using cv2.resize() to enlarge it by 4x, and then running Canny edge detection to help remove some noise before doing object tracking. However, Canny edge detection kept giving me a black image.

After much testing I found that using cv2.resize() to reduce the image to 1/4 of its size before Canny edge detection gave me the result I was hoping for. Reducing the image to 1/3 of its size also gave a much better result, but with fewer edges than the 1/4 reduction, while scaling down to 1/16 gave more edges than scaling to 1/4. Why would this be happening? In fact, while writing this question I displayed the unscaled result in a window created with cv2.namedWindow and the cv2.WINDOW_NORMAL flag, and that also improved how it looked.

I realize I can simply scale down, run Canny detection, then enlarge the result of the Canny edge detection and do my object tracking, but this behavior baffles me, and knowing why it happens would be of interest to me and, I think, to others as well. Nothing I could find in the OpenCV docs suggests that the Canny algorithm depends on image size.

See images below, all generated by output = cv2.Canny(input, 30, 50):

Unscaled (improved by using cv.WINDOW_NORMAL) https://i.sstatic.net/y4CCA.png

1/4 Reduced Before Canny Detection https://i.sstatic.net/YkX0t.png

1/3 Reduced Before Canny Detection https://i.sstatic.net/tGtn4.png

1/16 Reduced before Canny Detection https://i.sstatic.net/4l7Qi.png


Solution

  • By resizing you change the size of the features in the image, but the Canny filter size stays fixed, so the results differ: you are effectively exploring scale-space.

    Also note that cv2.resize() does not prefilter the image before downsampling, which causes aliasing.