Tags: matlab, image-processing, detection, estimation

Two Dimensional Least Mean Square filter in Image Processing (Background Estimation)


I wrote a Matlab program to estimate the image background using the two-dimensional LMS (TDLMS) adaptive algorithm, following Mohiy M. Hadhoud's paper. I initialized the weight matrix W, the estimated output matrix Y, and the error matrix e with zeros. The support region (window size) is 5×5. Matrix D is the desired output, and its difference from Y is defined as the error matrix e. However, after running the program, the weights W and the estimated output Y are still all zeros. I don't know whether that's because W and Y start as zeros, or because there's a flaw in the program. Here is my code:

clear; close all;
X=imread('noisySea.jpg');
[M N]=size(X);
Ns=5; % 5*5 support region
u=5*10^(-8); % step size
Y=zeros(M,N); % predicted image
Y(1:Ns,1:Ns)=X(1:Ns,1:Ns);
D=zeros(M,N);
D(2:M,2:N)=X(2:M,2:N); % D is shifted version of X
e=zeros(M,N); % error matrix
W=zeros(Ns,Ns); % weight matrix
for m=1+floor(Ns/2):M-floor(Ns/2)
    for n=1+floor(Ns/2):N-floor(Ns/2)
        for l=1:Ns
            for k=1:Ns
                Y(m,n)=Y(m,n)+W(l,k)*X(m-floor(Ns/2)+l-1,n-floor(Ns/2)+k-1);
                e(m,n)=D(m,n)-Y(m,n);
                W(l,k)=W(l,k)+u*e(m,n)*X(m-floor(Ns/2)+l-1,n-floor(Ns/2)+k-1);
            end
        end
    end
end
imshow(Y);

The inner two loops (over l and k) compute the value of Y at point (m,n), while the outer two loops walk over the whole image. Expressions such as m=1+floor(Ns/2) appear frequently because the 5×5 weight matrix cannot fit inside the image at the edges; only pixels whose entire neighborhood fits under the weight matrix (or mask) are filtered.


Solution

  • OK, a few more things. I think you're referring to Hadhoud and Thomas' paper in the May 1988 issue of IEEE Transactions on Circuits and Systems, Vol. 35, No. 5. I see at least a couple more errors, but I've only read the paper for a few minutes.

    D(2:M,2:N)=X(2:M,2:N); % D is shifted version of X
    

    You're not actually doing a shift here; in the paper there is a shift of one pixel in both the x and y directions:

    D(2:M,2:N)=X(1:M-1,1:N-1); % D is a shifted version of X
    

    They also initialize the weights, W, by running the algorithm over 10 rows of the original image starting with a 0 weight matrix.
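    A minimal sketch of that warm-up pass, assuming the same 5×5 support region and the question's update rule (the cast to double and the `r` helper variable are my additions, not from the paper):

    ```matlab
    % Hypothetical warm-up: run the LMS update over the first 10 rows only,
    % starting from W = zeros(Ns,Ns), to pre-train the weights.
    Xd = double(X);                   % avoid uint8 rounding in the tiny updates
    r  = floor(Ns/2);
    for m = 1+r : min(10, M-r)
        for n = 1+r : N-r
            patch = Xd(m-r:m+r, n-r:n+r);
            y     = sum(sum(W .* patch));  % predicted pixel
            err   = D(m,n) - y;
            W     = W + u * err * patch;   % one update per pixel
        end
    end
    ```

    Note that with the question's u=5e-8 and uint8 image data, u*e*X is far below 1, so working in double precision matters; otherwise the updates round to zero.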

    I also believe (not 100% certain yet) that your weight update is incorrect. Your weights update continuously inside the inner l and k loops, which introduces some strange smearing into the output. From my brief read of the paper, the weights should only be updated once per pixel, as your n loop iterates. The weights are also scaled so that their total sum is 1, which preserves the local mean value.
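    Putting those fixes together, here is a hedged sketch of how the main loop could look, with one weight update per pixel and the weights renormalized to sum to 1 (the double cast and the zero-sum guard are my assumptions, not details from the paper):

    ```matlab
    Xd = double(X);                        % work in double precision
    r  = floor(Ns/2);
    for m = 1+r : M-r
        for n = 1+r : N-r
            patch  = Xd(m-r:m+r, n-r:n+r);
            Y(m,n) = sum(sum(W .* patch)); % predict with current weights
            e(m,n) = D(m,n) - Y(m,n);
            W = W + u * e(m,n) * patch;    % single update per (m,n)
            s = sum(W(:));
            if s ~= 0
                W = W / s;                 % scale weights so they sum to 1
            end
        end
    end
    imshow(uint8(Y));
    ```

    The vectorized W .* patch form replaces the question's inner l/k loops without changing what is computed; it also makes it harder to accidentally interleave prediction and weight updates.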