An Eigenvector of a matrix A is a nonzero vector X whose direction is unchanged when it is multiplied by A; the product AX is simply a scalar multiple of X.
The above sentence can be expressed mathematically as:
AX = λX
Here, A is a square matrix, λ is an Eigenvalue of A, and X is an Eigenvector corresponding to that Eigenvalue.
The Eigenvalues and Eigenvectors of any square matrix can be found using this method.
Rearranging the defining equation:
AX = λX
⇒ AX − λX = 0
⇒ (A − λI)X = 0 …. (1)
Equation (1) has a nonzero solution X only if (A − λI) is singular. As a result, it must hold that
|A − λI| = 0 …. (2)
(2) is known as the characteristic equation of the matrix.
The Eigenvalues of matrix A are the roots of the characteristic equation.
To determine the Eigenvectors, substitute each Eigenvalue λ into (A − λI)X = 0, reduce the coefficient matrix to row echelon form, and solve the resulting homogeneous linear system.
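As a quick numerical sketch of this procedure (using NumPy's `numpy.linalg.eig`; the matrix A below is an arbitrary example, not one taken from the text):

```python
import numpy as np

# Arbitrary 2x2 example matrix (an assumption for illustration).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the Eigenvalues and a matrix whose columns are
# the corresponding Eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify AX = λX for every Eigenpair.
for i in range(len(eigenvalues)):
    X = eigenvectors[:, i]
    lam = eigenvalues[i]
    assert np.allclose(A @ X, lam * X)

print(np.sort(eigenvalues.real))
```

For this matrix the characteristic equation is λ² − 7λ + 10 = 0, so the Eigenvalues come out as 2 and 5.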
Some Important Properties of Eigenvalues
Here is the list of the important properties of Eigenvalues:
Real Eigenvalues are found in symmetric and Hermitian matrices.
In skew Hermitian and real skew-symmetric matrices, the Eigenvalues are either pure imaginary or zero.
Unitary and orthogonal matrices have Eigenvalues of unit modulus: |λ| = 1.
If λ1, λ2, …, λn are the Eigenvalues of A, then kλ1, kλ2, …, kλn are the Eigenvalues of kA.
If λ1, λ2, …, λn are the Eigenvalues of A, then 1/λ1, 1/λ2, …, 1/λn are the Eigenvalues of A⁻¹.
If λ1, λ2, …, λn are the Eigenvalues of A, then λ1^k, λ2^k, …, λn^k are the Eigenvalues of A^k.
The Eigenvalues of A are equal to the Eigenvalues of Aᵀ (the transpose of A).
The trace of A (the sum of the diagonal elements of A) is equal to the sum of the Eigenvalues.
|A|, the determinant of A, is the product of the Eigenvalues.
An n × n matrix has at most n distinct Eigenvalues.
If A and B are square matrices of the same order, then AB and BA have the same Eigenvalues.
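Several of these properties can be checked numerically. Below is a small sketch (the symmetric test matrix and k = 3 are arbitrary assumptions made for the example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, so its Eigenvalues are real
k = 3.0

eig_A    = np.sort(np.linalg.eigvals(A).real)                        # λ of A
eig_kA   = np.sort(np.linalg.eigvals(k * A).real)                    # λ of kA
eig_Acub = np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, 3)).real)
eig_Ainv = np.sort(np.linalg.eigvals(np.linalg.inv(A)).real)
eig_AT   = np.sort(np.linalg.eigvals(A.T).real)

assert np.allclose(eig_kA, k * eig_A)               # kλ are Eigenvalues of kA
assert np.allclose(eig_Acub, eig_A ** 3)            # λ^k are Eigenvalues of A^k
assert np.allclose(eig_Ainv, np.sort(1.0 / eig_A))  # 1/λ are Eigenvalues of A⁻¹
assert np.allclose(eig_AT, eig_A)                   # A and Aᵀ share Eigenvalues
assert np.isclose(np.trace(A), eig_A.sum())         # trace = sum of Eigenvalues
assert np.isclose(np.linalg.det(A), eig_A.prod())   # |A| = product of Eigenvalues
```

Here the Eigenvalues of A are 1 and 3, so each assertion can also be verified by hand.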
Some Properties of Eigenvalues, with Proofs
Property 1(a): The sum of a matrix’s Eigenvalues is equal to the sum of its main-diagonal elements.
(or)
The trace of a matrix is equal to the sum of the Eigenvalues of the matrix.
Property 1(b): The product of the Eigenvalues is equal to the determinant of the matrix.
Explanation:
Let A be a square matrix of order n.
The characteristic equation of A is |𝐴 – 𝜆𝐼| = 0
(i.e.) λ^n − S1·λ^(n−1) + S2·λ^(n−2) − ⋯ + (−1)^n·Sn = 0 … (1)
Here, S1 is the sum of the diagonal elements of A (its trace), S2 is the sum of its principal 2 × 2 minors, and so on, up to Sn, the determinant of A. We know that the roots of the characteristic equation are the Eigenvalues of the given matrix.
By the relation between a polynomial’s roots and its coefficients, the sum of the Eigenvalues equals S1 = trace(A) and their product equals Sn = |A|, which proves Properties 1(a) and 1(b).
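The coefficients S1 and Sn can be read off numerically with NumPy's `numpy.poly`, which returns the characteristic-polynomial coefficients of a square matrix (the 2 × 2 matrix below is an arbitrary assumption; note that for n = 2 the constant term equals det A):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly(A) returns the coefficients of det(λI − A), highest power first.
# For this 2x2 matrix that is [1, −S1, S2], matching λ² − S1·λ + S2 = 0.
coeffs = np.poly(A)
S1 = -coeffs[1]   # sum of the Eigenvalues
S2 = coeffs[2]    # product of the Eigenvalues (= det A for n = 2)

assert np.isclose(S1, np.trace(A))        # Property 1(a)
assert np.isclose(S2, np.linalg.det(A))   # Property 1(b)
```

For this matrix S1 = 4 (the trace) and S2 = 3 (the determinant), matching Eigenvalues 1 and 3.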
Property 2: A square matrix A and its transpose Aᵀ have the same Eigenvalues.
Proof:
Let A be a square matrix of order n.
The characteristic equations of A and Aᵀ are:
|A − λI| = 0 … (1)
and |Aᵀ − λI| = 0 … (2)
We must prove that (1) and (2) are the same equation.
We know that |X| = |Xᵀ| for any square matrix X.
So |A − λI| = |(A − λI)ᵀ| = |Aᵀ − λI|.
∴ The Eigenvalues of A and Aᵀ are the same.
Property 3: A triangular matrix’s characteristic roots are simply its diagonal elements.
(or) A triangular matrix’s Eigenvalues are just the diagonal elements of the matrix.
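A quick numerical illustration of Property 3 (the upper-triangular matrix below is an arbitrary example):

```python
import numpy as np

T = np.array([[3.0, 5.0, 1.0],
              [0.0, 7.0, 2.0],
              [0.0, 0.0, -1.0]])   # upper triangular

# The Eigenvalues of a triangular matrix are its diagonal entries.
eigs = np.sort(np.linalg.eigvals(T).real)
diag = np.sort(np.diag(T))
assert np.allclose(eigs, diag)
```

This works because the determinant |T − λI| of a triangular matrix is just the product of its diagonal entries, (3 − λ)(7 − λ)(−1 − λ) here.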
Property 4: If λ is an Eigenvalue of a matrix A, then λ⁻¹ (λ ≠ 0) is an Eigenvalue of A⁻¹.
(or) If λ is an Eigenvalue of a matrix A, what can you say about the Eigenvalues of the matrix A⁻¹? Prove your statement.
Let X be the Eigenvector corresponding to λ;
then AX = λX … (i)
Pre-multiplying both sides by A⁻¹, we get
A⁻¹AX = A⁻¹(λX)
(i) ⇒ X = λA⁻¹X
Dividing both sides by λ (since λ ≠ 0):
(1/λ)X = A⁻¹X
(i.e.) A⁻¹X = λ⁻¹X
Being of the same form as (i), this shows that 1/λ is an Eigenvalue of the inverse matrix A⁻¹.
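The steps of this proof can be mirrored numerically (a sketch; the matrix A below is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam_all, V = np.linalg.eig(A)
lam = lam_all[0].real    # one Eigenvalue λ (nonzero for this matrix)
X = V[:, 0].real         # its Eigenvector X, so AX = λX

# Pre-multiplying AX = λX by A⁻¹ and dividing by λ gives A⁻¹X = (1/λ)X.
assert np.allclose(np.linalg.inv(A) @ X, (1.0 / lam) * X)
```

The assertion holds for every Eigenpair of an invertible matrix, since the derivation above used nothing specific to this example.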
Conclusion
Eigenvalues, also known as characteristic values or characteristic roots, are the roots of a matrix’s characteristic equation, and knowing how to calculate them is crucial in fields such as physics and engineering. The set of n × n matrices, Mn, forms a vector space, and it is often handy to apply a metric to this vector space. The reasons are several, ranging from the general study of a metrized system to perturbation theory, which requires measuring the “smallness” of a matrix. As a result, it is important to define matrix norms, which are standard norms with an additional property related to the matrix product.