Tags: python, numpy, matrix, linear-algebra, eigenvalue

Given A = S @ D @ S.T (D a diagonal matrix, S an arbitrary full-rank n x n matrix), shouldn't the eigenvalues of A correspond to the diagonal entries of D?


I was asked to write a function that generates a random symmetric positive definite 2D matrix. Here is my attempt:

import numpy as np
from numpy import linalg as la

def random_spd(n):
    """Generate a random n x n SPD (symmetric positive definite) matrix."""
    while True:
        S = np.random.rand(n, n)
        if la.matrix_rank(S) == n:  # make sure that S has full rank
            break
    D = np.diag(np.random.randint(1, 10, size=n))  # strictly positive diagonal entries
    print(f"S:\n{S}\n\nD:\n{D}\n")  # only for debugging
    return S @ D @ S.T

A = random_spd(2)
print(f"A:\n{A}\n")
ei_vals, ei_vecs = la.eig(A)
print(f"Eigenvalues:\n{ei_vals}\n\nEigenvectors:\n{ei_vecs}")

Output:

D:
[[6 0]
 [0 5]]

A:
[[1.97478191 1.71620628]
 [1.71620628 2.37372465]]

Eigenvalues:
[0.4464938  3.90201276]

Eigenvectors:
[[-0.74681018 -0.66503726]
 [ 0.66503726 -0.74681018]]

As far as I know, the function works. Now, if I try to calculate the eigenvalues of a randomly generated matrix, shouldn't they be the same as the diagonal entries of the matrix D? Can someone help me understand my misconception or mistake? Thank you very much! Best regards, Max :)


Solution

  • What you are applying is a congruence transformation; it preserves definiteness, but not the eigenvalues.

    A positive definite matrix P is one for which, for any non-null vector x of shape (N, 1), x.T @ P @ x > 0.

    Now if you substitute x = S @ y into the condition above, you get y.T @ (S.T @ P @ S) @ y > 0; comparing the two, you conclude that S.T @ P @ S is positive definite as well (positive semidefinite if S is not full rank).
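    As a quick numerical illustration of this (my own sketch, not part of the original answer; the sizes, the seed, and the use of np.linalg.eigvalsh are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    P = np.diag(rng.uniform(1.0, 10.0, size=n))   # positive definite: positive diagonal
    S = rng.random((n, n))                        # arbitrary, almost surely full rank

    C = S.T @ P @ S                               # congruence transform of P (still symmetric)
    print(np.linalg.eigvalsh(C) > 0)              # all True: C is still positive definite
    print(np.linalg.eigvalsh(C), np.diag(P))      # but the eigenvalues themselves differ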

    Similarly the eigenvalues are defined by the equation

    A @ v = lambda * v

    If you substitute v = S @ u, the equation becomes

    A @ S @ u = lambda * S @ u

    To put this in the same form as the eigenvalue equation, left-multiply both sides by inv(S):

    (inv(S) @ A @ S) @ u = lambda * u

    We say that the matrix obtained this way, inv(S) @ A @ S, is similar to A, and we call this a similarity transformation. Similarity transformations preserve eigenvalues; your congruence transform S @ D @ S.T is a similarity only when S.T equals inv(S), i.e. when S is orthogonal, which is why the eigenvalues of A do not match the diagonal of D.
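    A short sketch of that last point (my own illustration, not from the original answer): if S is orthogonal, e.g. the Q factor of a QR decomposition, then the congruence S @ D @ S.T is also a similarity and the eigenvalues do match the diagonal of D.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    D = np.diag(rng.integers(1, 10, size=n).astype(float))
    print(np.sort(np.diag(D)))                    # the diagonal entries of D, sorted

    S = rng.random((n, n))                        # arbitrary (almost surely full-rank) S
    print(np.linalg.eigvalsh(S @ D @ S.T))        # positive, but generally != diag(D)

    Q, _ = np.linalg.qr(rng.random((n, n)))       # orthogonal S: Q.T == inv(Q)
    print(np.linalg.eigvalsh(Q @ D @ Q.T))        # matches sorted diag(D) up to rounding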

    There are simpler ways to create a positive definite matrix. One simple way:

    S = np.random.rand(n, n)
    A = S.T @ S + eps * np.eye(n)  # eps: any small positive constant, e.g. 1e-3
    

    S.T @ S can be seen as a congruence transform of the identity matrix, and is therefore positive semidefinite; adding eps * np.eye(n) ensures that all eigenvalues are at least eps. No matrix inversions and no eigendecomposition are needed.
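    A minimal usage sketch of this construction (the concrete values of n and eps and the check via np.linalg.eigvalsh are my own additions):

    import numpy as np

    n, eps = 4, 1e-3                 # eps is an arbitrary small positive constant
    S = np.random.rand(n, n)
    A = S.T @ S + eps * np.eye(n)

    print(np.allclose(A, A.T))       # True: A is symmetric
    print(np.linalg.eigvalsh(A))     # all eigenvalues are about eps or larger, hence positive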