Tags: python, numpy, scipy, linear-algebra, svd

Solving Linear Systems of Equations with SVD Decomposition


I want to write a function that uses SVD decomposition to solve a system of equations ax=b, where a is a square matrix and b is a vector of values. The scipy function scipy.linalg.svd() should decompose a into the matrices U, W, V. For U and V, I can simply take the transpose to find their inverses. But for W the function gives me a 1-D array of values that I need to place down the diagonal of a matrix, and then replace each entry with its reciprocal.

def solveSVD(a,b):

    U,s,V=sp.svd(a,compute_uv=True)

    Ui=np.transpose(a)
    Vi=np.transpose(V)

    W=np.diag(s)

    Wi=np.empty(np.shape(W)[0],np.shape(W)[1])
    for i in range(np.shape(Wi)[0]):
        if W[i,i]!=0:
            Wi[i,i]=1/W[i,i]
    
    ai=np.matmul(Ui,np.matmul(Wi,Vi))
    x=np.matmul(ai,b)

    return(x)

However, I get a "TypeError: data type not understood" error. I think part of the issue is that

W=np.diag(s) 

is not producing a square diagonal matrix.

This is my first time working with this library, so apologies if I've done something very stupid, but I cannot work out why this line hasn't worked. Thanks all!


Solution
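
  • First, the direct cause of the "TypeError: data type not understood": np.empty expects its shape as a single tuple, so np.empty(np.shape(W)[0], np.shape(W)[1]) passes the column count as the dtype argument. (np.diag(s) does produce a square diagonal matrix, so that line is fine.) A minimal fix for that line could look like this:

    # pass the shape as one tuple; the second positional argument of
    # np.empty is the dtype, hence "data type not understood".
    # np.zeros is used here because the loop only fills the diagonal and
    # np.empty would leave the off-diagonal entries uninitialised
    Wi = np.zeros((np.shape(W)[0], np.shape(W)[1]))

    Two further bugs to fix in the original function: Ui should be np.transpose(U), not np.transpose(a), and since (U W V)^-1 = V^-1 W^-1 U^-1, the inverse should be assembled as ai = np.matmul(Vi, np.matmul(Wi, Ui)).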

  • That said, there is a cleaner way to organise the whole computation. In short, singular value decomposition lets you replace the initial problem A x = b with U diag(s) Vh x = b. A bit of algebra on the latter gives the following three-step function, which is easy to read:

    import numpy as np
    from scipy.linalg import svd
    
    def solve_svd(A, b, rcond=1e-8):
        # compute the SVD of A; full_matrices=False gives the economy SVD,
        # so the shapes stay consistent even for a rectangular A
        U, s, Vh = svd(A, full_matrices=False)
    
        # filter out numerically zero singular values
        m = (abs(s) / np.max(abs(s))) > rcond
        U, s, Vh = U[:, m], s[m], Vh[m, :]
    
        # U diag(s) Vh x = b  <=>  diag(s) Vh x = U.H b = c
        c = U.conj().T @ b
        # diag(s) Vh x = c  <=>  Vh x = diag(1/s) c = w  (trivial inversion of a diagonal matrix)
        w = np.diag(1 / s) @ c
        # Vh x = w  <=>  x = Vh.H w  (where .H stands for Hermitian = conjugate transpose)
        x = Vh.conj().T @ w
        return x
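
    The rcond filter is what makes this robust: singular values that are numerically zero are discarded rather than inverted, so the function still returns a (minimum-norm) solution where an LU-based solver would fail outright. For instance, a small sketch with a rank-deficient matrix:

    # rank-1 matrix: np.linalg.solve(A_sing, b_sing) raises LinAlgError
    A_sing = np.array([[1., 2.], [2., 4.]])
    b_sing = np.array([[1.], [2.]])          # consistent right-hand side

    x = solve_svd(A_sing, b_sing)
    print(np.allclose(A_sing @ x, b_sing))   # True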
    

    Now, let's test it with a square matrix

    A = np.random.random((100,100))
    b = np.random.random((100,1))
    

    and compare it with np.linalg.solve, which relies on LU decomposition:

    x_svd = solve_svd(A,b)
    x_lu = np.linalg.solve(A,b)
    

    which gives

    np.allclose(x_lu,x_svd)
    >>> True
    

    In case you have to deal with a rectangular coefficient matrix, you may compare the resulting array with scipy.linalg.lstsq(A,b), which relies on a least-squares solution.
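
    For instance, a quick sketch on an overdetermined system (this relies on the full_matrices=False in the solve_svd above; lstsq returns a 4-tuple whose first element is the solution):

    from scipy.linalg import lstsq

    A = np.random.random((100, 60))   # tall system: more equations than unknowns
    b = np.random.random((100, 1))

    x_svd = solve_svd(A, b)
    x_ls = lstsq(A, b)[0]             # least-squares solution

    print(np.allclose(x_svd, x_ls))   # expected: True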

    Please feel free to ask for more explanation in the comments if needed. Hope this helps.