Tags: matlab, numerical-methods

Matlab SOR Method Implementation


Using an initial approximation of a zero vector and not considering tolerance, I have shortened the code to require only 4 arguments, so that x1 always equals c, and the iterates follow the equation x(k+1) = T*x(k) + c.
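
For reference, the standard SOR splitting that the code below appears to implement is

    T = inv(D - omega*L) * ((1-omega)*D + omega*U),    c = omega * inv(D - omega*L) * b,

assuming A = D - L - U, where D is the diagonal of A and L, U are the negated strictly lower and upper triangular parts (the convention DLU_decomposition is presumably expected to follow).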

However, the code doesn't seem to produce the approximations you would expect. Does anyone notice where I went wrong? Assume DLU_decomposition(A) returns the correct matrices.

function x = sor2(A,b,omega,kmax)
% Split A into its diagonal, lower and upper parts.
[D,L,U] = DLU_decomposition(A);

% Iteration matrix and constant vector of the SOR fixed-point form.
T = inv(D-omega*L)*(((1-omega)*D)+(omega*U));
c = (omega*inv(D-omega*L))*b;

for k = 1:kmax
    % With a zero starting vector, the first iterate is simply c.
    if k == 1
        x = c;
    end
    x = T*x + c;
end

% Residual of the final iterate.
norm(A*x-b)
end

Solution

  • Well, my guess is that all the confusion comes from the multiplications. You need to calculate the matrices elementwise --> use .* instead of the normal *. Would that deliver the correct approximations?
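
For comparison, here is a minimal, self-contained sketch of the same matrix-splitting SOR iteration. It is not the original code: the hypothetical sor_sketch builds D, L, U directly from A under the assumption A = D - L - U (a common sign pitfall when DLU_decomposition returns the triangular parts with the wrong sign), and it uses backslash rather than inv.

function x = sor_sketch(A, b, omega, kmax)
% Minimal SOR sketch: x(k+1) = T*x(k) + c with a zero starting vector.
% Assumes A = D - L - U, where D is the diagonal of A and L, U are the
% NEGATED strictly lower/upper triangular parts of A.
D = diag(diag(A));
L = -tril(A, -1);
U = -triu(A,  1);

M = D - omega*L;                    % left factor of the splitting
T = M \ ((1 - omega)*D + omega*U);  % iteration matrix
c = omega * (M \ b);                % constant term

x = zeros(size(b));                 % x(0) = 0, so the first iterate is c
for k = 1:kmax
    x = T*x + c;
end

norm(A*x - b)                       % residual check
end

If this converges while sor2 does not, the difference most likely lies in the sign convention of the matrices returned by DLU_decomposition rather than in the multiplications themselves.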