Overview
Eigenvalue decomposition reveals the fundamental modes and scaling factors of linear transformations. NumPy provides functions for both general and specialized (Hermitian/symmetric) matrices.
General Matrices
numpy.linalg.eig
Signature: eig(a)
Compute eigenvalues and right eigenvectors of a square array.
import numpy as np
from numpy import linalg as LA
# General matrix
A = np.array([[1, -1],
              [1,  1]])
eigenvalues, eigenvectors = LA.eig(A)
print(eigenvalues)
# [1.+1.j 1.-1.j]
print(eigenvectors)
# [[0.70710678+0.j         0.70710678-0.j        ]
#  [0.        -0.70710678j 0.        +0.70710678j]]

# Verify: A @ v = λ * v
for i in range(2):
    lambda_i = eigenvalues[i]
    v_i = eigenvectors[:, i]
    print(np.allclose(A @ v_i, lambda_i * v_i))  # True
Parameters:
a : (..., M, M) array_like - Square matrix
Returns: EigResult namedtuple with:
eigenvalues : (..., M) ndarray - Eigenvalues (not necessarily ordered)
eigenvectors : (..., M, M) ndarray - Normalized eigenvectors in columns
Raises: LinAlgError - If eigenvalue computation does not converge
Complex Eigenvalues
Real matrices can have complex eigenvalues:
import numpy as np
# Rotation matrix - eigenvalues are complex
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eigenvalues, eigenvectors = np.linalg.eig(R)
print(eigenvalues)
# [0.70710678+0.70710678j 0.70710678-0.70710678j]
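As an added sanity check (a sketch, not part of the original example): rotation matrices are orthogonal, so their eigenvalues lie on the unit circle, and for a 2-D rotation by theta they equal exp(±i·theta).

```python
import numpy as np

# Sketch: eigenvalues of a rotation matrix have magnitude 1,
# and for this 2-D rotation they are exp(+i*theta) and exp(-i*theta).
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eigenvalues = np.linalg.eigvals(R)
print(np.allclose(np.abs(eigenvalues), 1.0))  # True
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
print(np.allclose(np.sort_complex(eigenvalues),
                  np.sort_complex(expected)))  # True
```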
numpy.linalg.eigvals
Signature: eigvals(a)
Compute eigenvalues only (no eigenvectors).
import numpy as np
A = np.diag([1, 2, 3])
w = np.linalg.eigvals(A)
print(w)  # [1. 2. 3.]  (cast to real because all imaginary parts are zero)
Use when: Only eigenvalues are needed - more efficient than computing both.
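A rough sketch of the efficiency claim (timings vary by machine and BLAS build): eigvals produces the same spectrum as eig while skipping the eigenvector computation.

```python
import time
import numpy as np

# Sketch: compare eigvals against eig on the same random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 300))

t0 = time.perf_counter()
w_only = np.linalg.eigvals(A)
t_vals = time.perf_counter() - t0

t0 = time.perf_counter()
w_both, v = np.linalg.eig(A)
t_both = time.perf_counter() - t0

print(f"eigvals: {t_vals:.3f}s, eig: {t_both:.3f}s")
# Both calls produce the same spectrum (compare after sorting)
print(np.allclose(np.sort_complex(w_only), np.sort_complex(w_both)))  # True
```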
Hermitian/Symmetric Matrices
Hermitian (complex) and symmetric (real) matrices have real eigenvalues and orthogonal eigenvectors. Specialized functions exploit this structure for better performance.
numpy.linalg.eigh
Signature: eigh(a, UPLO='L')
Return eigenvalues and eigenvectors of a Hermitian or symmetric matrix.
import numpy as np
from numpy import linalg as LA
# Hermitian matrix
A = np.array([[1, -2j],
              [2j, 5]])
eigenvalues, eigenvectors = LA.eigh(A)
print(eigenvalues)
# [0.17157288 5.82842712]  # Always real!
print(eigenvectors)
# [[-0.92387953+0.j         -0.38268343+0.j        ]
#  [ 0.        +0.38268343j  0.        -0.92387953j]]

# Verify orthonormality
print(np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(2)))  # True
Parameters:
a : (..., M, M) array_like - Hermitian/symmetric matrix
UPLO : {'L', 'U'} - Whether to use the lower ('L', default) or upper ('U') triangle
Returns: EighResult namedtuple with:
eigenvalues : (..., M) ndarray - Real eigenvalues in ascending order
eigenvectors : (..., M, M) ndarray - Orthonormal eigenvectors in columns
Raises: LinAlgError - If eigenvalue computation does not converge
Only the specified triangle (lower or upper) is used. Imaginary parts of the diagonal are ignored.
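This behavior can be demonstrated directly (an added sketch): with the default UPLO='L', the upper-triangle entry and the imaginary part of the diagonal are never read.

```python
import numpy as np

# Sketch: the 999 in the upper triangle and the 9j on the diagonal
# are ignored with UPLO='L' (the default).
A = np.array([[1 + 9j, 999],
              [2j,     5]])
w, v = np.linalg.eigh(A)

# Same spectrum as the explicit Hermitian matrix built from the lower triangle
B = np.array([[1, -2j],
              [2j, 5]])
w_ref, _ = np.linalg.eigh(B)
print(np.allclose(w, w_ref))  # True
```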
UPLO Parameter
import numpy as np
A = np.array([[5, 9],
              [0, 2]])
# Use upper triangle: effectively [[5, 9], [9, 2]]
w_upper = np.linalg.eigh(A, UPLO='U')
# Use lower triangle: effectively [[5, 0], [0, 2]]
w_lower = np.linalg.eigh(A, UPLO='L')
print(w_upper[0])  # [-5.6241438 12.6241438]
print(w_lower[0])  # [2. 5.]
numpy.linalg.eigvalsh
Signature: eigvalsh(a, UPLO='L')
Compute eigenvalues of a Hermitian/symmetric matrix (no eigenvectors).
import numpy as np
A = np.array([[1, -2j],
              [2j, 5]])
w = np.linalg.eigvalsh(A)
print(w)  # [0.17157288 5.82842712]
Use when: Only eigenvalues are needed for Hermitian matrices.
Use Cases
Principal Component Analysis: Use eigh on the covariance matrix to find principal components.
Stability Analysis: Check whether all eigenvalues have negative real part (stable system).
Quantum Mechanics: Eigenvalues are energy levels; eigenvectors are quantum states.
Graph Analysis: Eigenvalues of the Laplacian matrix reveal graph structure.
Vibration Modes: Eigenvalues are natural frequencies; eigenvectors are mode shapes.
Markov Chains: Eigenvalue 1 corresponds to the stationary distribution.
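The Markov-chain entry above can be sketched as follows (a hedged illustration using a made-up 2-state transition matrix): the left eigenvector of the transition matrix with eigenvalue 1, normalized to sum to 1, is the stationary distribution.

```python
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
# Left eigenvectors of P are right eigenvectors of P.T
eigenvalues, eigenvectors = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigenvalues - 1))   # index of the eigenvalue closest to 1
pi = np.real(eigenvectors[:, i])
pi = pi / pi.sum()                       # normalize to a probability distribution
print(np.allclose(pi @ P, pi))  # True: pi is unchanged by one step of the chain
```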
Practical Examples
Principal Component Analysis
import numpy as np
# Data matrix: rows are samples, columns are features
X = np.random.randn(100, 5)
# Center the data
X_centered = X - X.mean(axis=0)
# Covariance matrix
C = X_centered.T @ X_centered / (X.shape[0] - 1)
# Eigendecomposition
eigenvalues, eigenvectors = np.linalg.eigh(C)
# Sort in descending order
idx = eigenvalues.argsort()[::-1]
eigenvalues = eigenvalues[idx]
eigenvectors = eigenvectors[:, idx]
# Principal components
print("Variance explained:", eigenvalues)
print("First PC direction:", eigenvectors[:, 0])
# Project data onto first 2 PCs
X_pca = X_centered @ eigenvectors[:, :2]
print(X_pca.shape)  # (100, 2)
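One way to sanity-check a PCA result (an added sketch, using a seeded random matrix rather than the data above): projecting the centered data onto all eigenvectors should produce decorrelated coordinates whose covariance is diagonal, with the eigenvalues on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (X.shape[0] - 1)
w, V = np.linalg.eigh(C)

# Project onto all principal directions
Z = Xc @ V
# Covariance of the projected data is diagonal with the eigenvalues on it
print(np.allclose(np.cov(Z.T), np.diag(w)))  # True
```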
Power Iteration
Find dominant eigenvalue and eigenvector:
import numpy as np
def power_iteration(A, num_iterations=100):
    # Start with a random vector
    v = np.random.randn(A.shape[0])
    for _ in range(num_iterations):
        # Multiply by A and normalize
        v = A @ v
        v = v / np.linalg.norm(v)
    # Rayleigh quotient for the eigenvalue
    eigenvalue = v @ (A @ v) / (v @ v)
    return eigenvalue, v

A = np.array([[4, 1], [2, 3]])
lambda_max, v_max = power_iteration(A)
print(f"Dominant eigenvalue: {lambda_max:.4f}")
print(f"Dominant eigenvector: {v_max}")
# Verify
eigenvalues, eigenvectors = np.linalg.eig(A)
print(f"True dominant eigenvalue: {max(eigenvalues):.4f}")
Diagonalization
import numpy as np
# Diagonalize a matrix: A = V D V^(-1)
A = np.array([[1, 2],
              [2, 1]])
eigenvalues, V = np.linalg.eig(A)
D = np.diag(eigenvalues)
# Verify diagonalization
A_reconstructed = V @ D @ np.linalg.inv(V)
print(np.allclose(A, A_reconstructed))  # True
# Matrix powers via diagonalization: A^n = V D^n V^(-1)
n = 10
A_power = V @ np.diag(eigenvalues ** n) @ np.linalg.inv(V)
print(np.allclose(A_power, np.linalg.matrix_power(A, n)))  # True
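Since the matrix above is symmetric, the same decomposition can be done with eigh (a sketch of the same idea): eigh returns an orthonormal V, so V^(-1) = V.T and no explicit inverse is needed.

```python
import numpy as np

A = np.array([[1, 2],
              [2, 1]])
# eigh gives orthonormal eigenvectors, so V^(-1) = V.T
w, V = np.linalg.eigh(A)
A_reconstructed = V @ np.diag(w) @ V.T
print(np.allclose(A, A_reconstructed))  # True
# Matrix power without computing an inverse
A_power = V @ np.diag(w ** 10) @ V.T
print(np.allclose(A_power, np.linalg.matrix_power(A, 10)))  # True
```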
Function Comparison
| Function | Matrix Type | Returns | Eigenvalues | Performance |
|----------|-------------|------------------|--------------|-------------|
| eig | General | Values + vectors | Complex | Standard |
| eigvals | General | Values only | Complex | Faster |
| eigh | Hermitian | Values + vectors | Real, sorted | Fast |
| eigvalsh | Hermitian | Values only | Real, sorted | Fastest |
Key Properties
General Matrices (eig)
Eigenvalues may be complex, even for real matrices
Eigenvectors may not be orthogonal
No guaranteed ordering of eigenvalues
For real matrices, complex eigenvalues come in conjugate pairs
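The conjugate-pair property can be checked directly (an added sketch with a made-up real matrix whose eigenvalues are purely imaginary): the spectrum of any real matrix is closed under complex conjugation.

```python
import numpy as np

# Sketch: this real matrix has eigenvalues ±sqrt(2)j, a conjugate pair
A = np.array([[0., -2.],
              [1.,  0.]])
w, _ = np.linalg.eig(A)
print(w)  # a conjugate pair
# The spectrum equals its own conjugate (compare after sorting)
print(np.allclose(np.sort_complex(w), np.sort_complex(w.conj())))  # True
```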
Hermitian Matrices (eigh)
Eigenvalues are always real
Eigenvectors are orthonormal
Eigenvalues returned in ascending order
More numerically stable and efficient
For real symmetric matrices, all computations stay real
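The last point is easy to verify (an added sketch): for a real symmetric input, eigh returns float arrays throughout, and the eigenvalues come back in ascending order.

```python
import numpy as np

# Sketch: no complex dtype appears for a real symmetric matrix
S = np.array([[2., 1.],
              [1., 2.]])
w, V = np.linalg.eigh(S)
print(w.dtype, V.dtype)         # float64 float64
print(np.all(np.diff(w) >= 0))  # True: ascending order
```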
Numerical Considerations
Eigenvalue computation can be sensitive to:
Matrix conditioning (use np.linalg.cond to check)
Floating-point precision
Nearly repeated eigenvalues
Very large or small matrix entries
Checking Condition Number
import numpy as np
A = np.array([[1, 1000],
              [0, 1]])
cond = np.linalg.cond(A)
print(f"Condition number: {cond:.2e}")
if cond > 1e12:
    print("Warning: Matrix is ill-conditioned")
    print("Eigenvalues may be inaccurate")
Use eigh/eigvalsh for symmetric/Hermitian matrices - much faster
Use eigvals/eigvalsh when eigenvectors aren’t needed
For very large matrices, consider iterative methods (not in NumPy, see SciPy)
Ensure matrices are well-conditioned for accurate results
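As a pointer beyond NumPy (a hedged sketch that assumes SciPy is installed): scipy.sparse.linalg.eigsh computes just a few eigenvalues of a large sparse symmetric matrix iteratively, instead of the full spectrum.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Sketch (requires SciPy): 3 smallest eigenvalues of a 1000x1000
# sparse symmetric matrix via shift-invert, instead of all 1000.
n = 1000
# 1-D Laplacian: 2 on the diagonal, -1 on the off-diagonals
L = diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format='csc')
w = eigsh(L, k=3, sigma=0, return_eigenvectors=False)
print(w.shape)        # (3,)
print(np.all(w > 0))  # True: the 1-D Laplacian is positive definite
```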