Tags: clapack, blas, accelerate-framework, matrix-inverse

Symmetric Matrix Inversion in C using CBLAS/LAPACK


I am writing an algorithm in C that requires matrix and vector multiplications. I have a matrix Q (W x W) which is created by multiplying the transpose of a vector J (1 x W) with J itself and adding the identity matrix I scaled by a scalar a:

Q = [(J^T) * J + aI].

I then have to multiply the inverse of Q with vector G to get vector M.

M = (Q^(-1)) * G.
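(For concreteness, forming Q this way maps onto a single CBLAS rank-1 update. The sketch below is only an illustration of the setup described above, assuming a standard <cblas.h> header (on OS X, <Accelerate/Accelerate.h> provides the same declarations); BuildQ is a hypothetical helper, and J, Q, a and W follow the names used here.)

#include <cblas.h>
#include <string.h>

/* Illustrative sketch: build Q = J^T * J + a*I for a row vector J of
   length W. Q is a dense W x W array; since Q is symmetric, row- vs.
   column-major storage does not matter here. BuildQ is a hypothetical
   helper, not part of the original program. */
void BuildQ(const float *J, int W, float a, float *Q)
{
    memset(Q, 0, (size_t)W * W * sizeof(float));

    /* start with a*I */
    for (int i = 0; i < W; i++)
        Q[i * W + i] = a;

    /* rank-1 update: Q += J^T * J (outer product of J with itself) */
    cblas_sger(CblasRowMajor, W, W, 1.0f, J, 1, J, 1, Q, W);
}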

I am using CBLAS and CLAPACK to develop my algorithm. When matrix Q is populated with random numbers (of type float) and inverted using the routines sgetrf_ and sgetri_, the calculated inverse is correct.

But when matrix Q is symmetric, which is the case when you multiply (J^T) * J, the calculated inverse is wrong!

I am aware of the row-major (in C) and column-major (in Fortran) storage of arrays when calling LAPACK routines from C, but for a symmetric matrix this should not be a problem, as A^T = A.

I have attached my C function code for matrix inversion below.

I am sure there is a better way to solve this. Can anyone help me with this?

A solution using cblas would be great...

Thanks.

#include <stdio.h>

/* LAPACK prototypes; on OS X these come from <Accelerate/Accelerate.h>,
   otherwise from a clapack header */
extern int sgetrf_(int *m, int *n, float *a, int *lda, int *ipiv, int *info);
extern int sgetri_(int *n, float *a, int *lda, int *ipiv, float *work,
                   int *lwork, int *info);

void PrintMatrix(float *Matrix, int row, int colm);

void InverseMatrix_R(float *Matrix, int W)
{
    int   LDA = W;
    int   IPIV[W];
    int   ERR_INFO;
    int   LWORK = W * W;
    float Workspace[LWORK];

    /* Compute the LU factorization of the W x W matrix */
    sgetrf_(&W, &W, Matrix, &LDA, IPIV, &ERR_INFO);
    if (ERR_INFO != 0) { printf("sgetrf_ failed, INFO = %d\n", ERR_INFO); return; }

    /* Generate the inverse of the matrix from its LU decomposition */
    sgetri_(&W, Matrix, &LDA, IPIV, Workspace, &LWORK, &ERR_INFO);
    if (ERR_INFO != 0) { printf("sgetri_ failed, INFO = %d\n", ERR_INFO); return; }

    /* Display the inverted matrix */
    PrintMatrix(Matrix, W, W);
}

/* Print a row x colm matrix stored in row-major order */
void PrintMatrix(float *Matrix, int row, int colm)
{
    int i, k;

    for (i = 0; i < row; i++)
    {
        for (k = 0; k < colm; k++)
        {
            printf("%g, ", Matrix[i * colm + k]);
        }

        printf("\n");
    }
}
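
A small driver for the two functions above might look like the following; the 3 x 3 test matrix is made up purely for illustration and is symmetric and diagonally dominant, so it is safely invertible.

int main(void)
{
    /* symmetric, diagonally dominant 3 x 3 test matrix */
    float Q[9] = {
        4.0f, 1.0f, 2.0f,
        1.0f, 5.0f, 3.0f,
        2.0f, 3.0f, 6.0f
    };

    printf("Q:\n");
    PrintMatrix(Q, 3, 3);

    printf("inv(Q):\n");
    InverseMatrix_R(Q, 3);   /* factorizes Q in place and prints the inverse */

    return 0;
}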

Solution

  • I don't know BLAS or LAPACK, so I have no idea what may cause this behaviour.

    But, for matrices of the given form, calculating the inverse is quite easy. The important fact for this is

    (J^T*J)^2 = (J^T*J)*(J^T*J) = J^T*(J*J^T)*J = <J|J> * (J^T*J)
    

    where <u|v> denotes the inner product (for real components; for complex components it would be the canonical bilinear form, but then you would probably use the conjugate transpose rather than the transpose, and you would be back at the inner product).

    Generalising,

    (J^T*J)^n = (<J|J>)^(n-1) * (J^T*J), for n >= 1.
    

    Let us denote the symmetric square matrix (J^T*J) by S and the scalar <J|J> by q. Then, for general a != 0 of sufficiently large absolute value (|a| > q):

    (a*I + S)^(-1) = 1/a * (I + a^(-1)*S)^(-1)
                   = 1/a * (I + ∑_{k>0} (-1)^k * a^(-k) * S^k)
                   = 1/a * (I + (∑_{k>0} (-1)^k * a^(-k) * q^(k-1)) * S)
                   = 1/a * (I - 1/(a+q)*S)
                   = 1/a*I - 1/(a*(a+q))*S
    

    That formula holds (by analyticity) for all a except a = 0 and a = -q, as can be verified by calculating

    (a*I + S) * (1/a*I - 1/(a*(a+q))*S) = I + 1/a*S - 1/(a+q)*S - 1/(a*(a+q))*S^2
                                        = I + 1/a*S - 1/(a+q)*S - q/(a*(a+q))*S
                                        = I + ((a+q) - a - q)/(a*(a+q))*S
                                        = I
    

    using S^2 = q*S.

    That calculation is also much simpler and more efficient than first finding the LU decomposition.
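
    As a rough sketch of what that closed form looks like with CBLAS (the names J, G, M, a and W follow the question; SolveClosedForm is a hypothetical helper and has not been tested against the original program):

    #include <cblas.h>

    /* Compute M = (a*I + J^T*J)^(-1) * G via the closed form
       (a*I + S)^(-1) = 1/a*I - 1/(a*(a+q))*S,  S = J^T*J,  q = <J|J>.
       J, G and M are vectors of length W; a must be neither 0 nor -q. */
    void SolveClosedForm(const float *J, const float *G, float *M,
                         int W, float a)
    {
        float q  = cblas_sdot(W, J, 1, J, 1);   /* q  = <J|J> */
        float jg = cblas_sdot(W, J, 1, G, 1);   /* jg = <J|G> */

        /* M = (1/a) * G */
        cblas_scopy(W, G, 1, M, 1);
        cblas_sscal(W, 1.0f / a, M, 1);

        /* M -= jg/(a*(a+q)) * J,  since S*G = <J|G> * J^T */
        cblas_saxpy(W, -jg / (a * (a + q)), J, 1, M, 1);
    }

    Apart from the two dot products, this is O(W) work with no extra matrix storage, compared with O(W^3) for forming Q and inverting it via LU.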