Tags: matlab, image-processing, edge-detection

How to detect edges of a colored photo?


I'm trying to implement the blind deconvolution algorithm example from the MathWorks site, and I'm running into trouble at the edge-detection step because MATLAB's edge detection functions can't be applied to RGB images directly. So I converted the photo to YUV, but now I don't know in what order to do the processing, or even whether I'm using the right method at all.

I applied the edge() function to each of the three channels (Y, U, V) and then converted back from YUV to RGB to combine them. It didn't work; I can't obtain the final WEIGHT array.

My code is below, and the example I'm following is at http://www.mathworks.com/help/images/deblurring-with-the-blind-deconvolution-algorithm.html.

Img = imread('image.tif');
PSF = fspecial('motion',13,45); 
Blurred = imfilter(Img,PSF,'circ','conv'); 
INITPSF = ones(size(PSF));
[J P] = deconvblind(Blurred,INITPSF,30);

%   RGB to YUV
R=Img(:,:,1); G=Img(:,:,2); B=Img(:,:,3);
Y=round((R+2*G+B)/4);
U=R-G;
V=B-G;

% finding edges for Y,U,V
WEIGHT1 = edge(Y,'sobel',.28);
se1 = strel('disk',1);
se2 = strel('line',13,45);
WEIGHT1 = ~imdilate(WEIGHT1,[se1 se2]);
WEIGHT1 = padarray(WEIGHT1(2:end-1,2:end-1),[1 1]);

WEIGHT2 = edge(U,'sobel',.28);
se1 = strel('disk',1);
se2 = strel('line',13,45);
WEIGHT2 = ~imdilate(WEIGHT2,[se1 se2]);
WEIGHT2 = padarray(WEIGHT2(2:end-1,2:end-1),[1 1]);

WEIGHT3  = edge(V,'sobel',.28);
se1 = strel('disk',1);
se2 = strel('line',13,45);
WEIGHT3 = ~imdilate(WEIGHT3,[se1 se2]);
WEIGHT3 = padarray(WEIGHT3(2:end-1,2:end-1),[1 1]);


% YUV to RGB again
G=round((WEIGHT1-(WEIGHT2+WEIGHT3)/4));
R=WEIGHT2+G;
B=WEIGHT3+G;
WEIGHT(:,:,1)=G; WEIGHT(:,:,2)=R; WEIGHT(:,:,3)=B;

P1 = P;
P1(find(P1 < 0.01))= 0;

[J2 P2] = deconvblind(Blurred,P1,50,[],double(WEIGHT));
figure, imshow(J2)
title('Newly Deblurred Image');
figure, imshow(P2,[],'InitialMagnification','fit')
title('Newly Reconstructed PSF')  

Solution

  • I won't get into the deconvblind de-blurring here, but let me show you how edge detection can work for color images.

    % load an image
    I = imread('peppers.png');
    

    original peppers image

    % note that this is a RGB image. 
    e = edge(I, 'sobel');  
    

    This fails because edge expects a 2D image, and an RGB or YUV image is 3D, in the sense that the third dimension holds the color channels.

    There are a few ways to fix this. One is to convert the image to grayscale, using

    gray = rgb2gray(I);
    

    This can then be passed into edge, to return edges based on the gray level intensities in 'gray'.

    e = edge(gray,'sobel'); % also try with different thresholds for sobel.
    

    edge computed using grayscale

    If you are really interested in the edge information in each channel, you could simply pass in the individual channels into edge separately. For example,

    eRed = edge(I(:,:,1), 'sobel'); % edges only in the I(:,:,1): red channel.
    eGreen = edge(I(:,:,2), 'sobel');
    eBlue = edge(I(:,:,3), 'sobel');
    

    and then, depending on how eRed, eGreen and eBlue look, you could combine them with a logical OR, so that a pixel counts as an edge if any of the channels independently considers it one.

    eCombined = eRed | eGreen | eBlue;
    

    edge computed with independent RGB channels
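
    An alternative to OR-ing the binary edge maps (not part of the original answer, and with an illustrative threshold you would tune per image) is to combine the per-channel Sobel gradient magnitudes first and threshold the sum once:

    % Sketch: combine per-channel Sobel gradient magnitudes, then threshold.
    I  = im2double(imread('peppers.png'));
    hy = fspecial('sobel');            % emphasizes horizontal edges
    hx = hy';                          % transpose emphasizes vertical edges

    gradMag = zeros(size(I,1), size(I,2));
    for c = 1:3
        gx = imfilter(I(:,:,c), hx, 'replicate');
        gy = imfilter(I(:,:,c), hy, 'replicate');
        gradMag = gradMag + hypot(gx, gy);   % accumulate channel magnitudes
    end

    eMag = gradMag > 0.5;              % illustrative threshold; tune for your image
    figure, imshow(eMag), title('Edges from combined gradient magnitudes');

    This tends to give smoother results than OR-ing per-channel binary maps, because a weak response in several channels can still add up to an edge.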

    What you did originally probably doesn't do what you intended, because the YUV colorspace can distort the sense of edges: an edge in the R plane may not be an edge in the Y, U or V plane. So make sure you detect edges in a colorspace where combining them makes sense, as shown above with the RGB colorspace.
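
    To tie this back to your original deconvblind problem: the WEIGHT array in the MathWorks example is a single 2D mask, so building it from the grayscale edges is usually enough; there is no need to round-trip through YUV at all. A hedged sketch following the example's dilate/invert/pad steps (the file name and threshold come from your question):

    % Sketch: build the 2D WEIGHT mask for deconvblind from grayscale edges.
    Img  = imread('image.tif');
    gray = rgb2gray(Img);

    WEIGHT = edge(gray, 'sobel', .28);
    se1 = strel('disk', 1);
    se2 = strel('line', 13, 45);
    WEIGHT = ~imdilate(WEIGHT, [se1 se2]);   % exclude pixels near strong edges
    WEIGHT = padarray(WEIGHT(2:end-1,2:end-1), [1 1]);

    % deconvblind expects WEIGHT to match the image size, so for an RGB
    % Blurred you would replicate the mask across the three channels:
    % WEIGHT3 = repmat(WEIGHT, [1 1 3]);
    % [J2, P2] = deconvblind(Blurred, P1, 50, [], double(WEIGHT3));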