A Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose. Its matrix elements satisfy the condition a_ij = conj(a_ji).
Every time I compute the eigenvectors of a Hermitian matrix using Python, the first coefficient of each eigenvector is a purely real number. Is this a property of Hermitian matrices?
Below is a code snippet that generates a Hermitian matrix, computes its eigenvectors, and prints the eigenvector corresponding to the lowest eigenvalue.
import numpy as np
from numpy import linalg as LA
N = 5 # Set size of a matrix
# Generate real part of the matrix at first
real_matrix = np.random.uniform(-1.0, 1.0, size=(N,N))
real_matrix = (real_matrix + real_matrix.T)/2
# Generate imaginary part of the matrix
imaginary_matrix = np.random.uniform(-1.0, 1.0, size=(N,N))
imaginary_matrix = (imaginary_matrix + imaginary_matrix.T)/2
imaginary_matrix = imaginary_matrix.astype(complex) * 1j
for row in range(N):
    for column in range(row, N):
        if row == column:
            # Hermitian matrices have real diagonal entries
            imaginary_matrix[row, column] = 0.0
        else:
            # Make the imaginary part antisymmetric: a_ij = -a_ji
            imaginary_matrix[row, column] *= -1
# Combine real and imaginary part
matrix = real_matrix + imaginary_matrix
# Compute and print eigenvector
eigenvalues, eigenvectors = LA.eigh(matrix)
print(eigenvectors[:,0])
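For reference, here is a self-contained sketch of the same experiment that also checks the defining Hermitian property directly (it builds the matrix as (A + A^H)/2, a shortcut equivalent to the symmetrize-and-flip construction above):

```python
import numpy as np
from numpy import linalg as LA

rng = np.random.default_rng(0)
N = 5

# Build a random Hermitian matrix: (A + A^H)/2 is always Hermitian
A = rng.uniform(-1.0, 1.0, size=(N, N)) + 1j * rng.uniform(-1.0, 1.0, size=(N, N))
H = (A + A.conj().T) / 2

# Verify the defining property H == H^H
print(np.allclose(H, H.conj().T))  # True

eigenvalues, eigenvectors = LA.eigh(H)
# eigh returns real eigenvalues for a Hermitian input
print(eigenvalues.dtype)
# In practice, the first row of the eigenvector matrix comes out
# (numerically) real, which is the behavior asked about here
print(eigenvectors[0, :])
```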
I think this is a Python question rather than a mathematics question.
There is an inherent ambiguity in an eigenvalue decomposition: if u is a unit eigenvector for the eigenvalue lambda, then exp(i * theta) * u is also a unit eigenvector for the same eigenvalue, for any real theta. To remove this ambiguity, some implementations impose that the first coefficient of each eigenvector be real.
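You can apply that phase fix by hand: dividing an eigenvector by the phase of its first coefficient gives an equivalent eigenvector whose first coefficient is real and non-negative (a sketch; the helper name fix_phase is mine):

```python
import numpy as np

def fix_phase(u):
    # Divide out the phase of the first coefficient: u and exp(-i*theta)*u
    # represent the same eigenvector, but the result has a real first entry.
    # Assumes u[0] != 0.
    phase = u[0] / abs(u[0])  # exp(i*theta)
    return u / phase

u = np.array([0.3 + 0.4j, 0.5 - 0.2j, 0.1j])
v = fix_phase(u)
print(np.isclose(v[0].imag, 0.0))   # True: first coefficient is now real
print(np.allclose(np.abs(u), np.abs(v)))  # True: same vector up to a global phase
```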
You get the same thing in the eigendecomposition of a real matrix: if u is an eigenvector, so is -u. To make the decomposition deterministic, some implementations (for example sklearn's PCA; see this related question) impose that the coefficient of u with the greatest magnitude be positive.
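The sign convention for the real case can be sketched the same way (the helper name fix_sign is mine; sklearn's internal sign-flipping works along these lines):

```python
import numpy as np

def fix_sign(u):
    # u and -u are both eigenvectors; pick the representative whose
    # largest-magnitude coefficient is positive.
    i = np.argmax(np.abs(u))
    return u if u[i] > 0 else -u

u = np.array([0.2, -0.9, 0.3])
v = fix_sign(u)
print(v)  # [-0.2  0.9 -0.3]
```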