Eigenvalues and eigenvectors tell you which directions a square matrix only scales instead of turning. For a square matrix A, an eigenvector is a nonzero vector v such that

Av = \lambda v

for some scalar λ. The number λ is the eigenvalue. If you only need the core idea, it is this: eigenvectors keep their line, while eigenvalues tell you the scale factor on that line.

Most vectors change direction under a matrix. An eigenvector does not. It may get stretched, shrunk, or reversed (when λ < 0), but it stays on the same line.

What Av = λv means

Think of a matrix as a transformation. Usually it rotates, shears, stretches, or mixes directions. But some directions can survive that transformation without turning away from their original line.

Those special directions are the eigenvectors. The eigenvalue tells you what the matrix does along that direction:

  • If λ > 1, the vector is stretched.
  • If 0 < λ < 1, the vector is shrunk.
  • If λ < 0, the vector is scaled and reversed.
  • If λ = 0, the matrix sends that eigenvector to the zero vector.

The zero vector is never counted as an eigenvector. If it were allowed, every matrix would have it, and the idea would lose its meaning.

How to find eigenvalues and eigenvectors

Start with

Av = \lambda v.

Move everything to one side:

(A - \lambda I)v = 0.

This is a homogeneous system. For it to have a nonzero solution v, the matrix A - λI must be singular, so

\det(A - \lambda I) = 0.

Solving that equation gives the eigenvalues. Then, for each eigenvalue, solve

(A - \lambda I)v = 0

to get the corresponding eigenvectors.

This method applies to square matrices. If the matrix is not square, the standard eigenvalue problem is not defined in this form.
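In practice, numerical libraries package this whole procedure into one call. Here is a minimal sketch using NumPy (assuming NumPy is available); the example matrix is my own illustration, not the one from the worked example below:

```python
import numpy as np

# Any square matrix works here; this symmetric one has eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding (unit-length) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v satisfies A v = lambda v, up to floating-point error.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(np.sort(eigenvalues.real))  # [1. 3.]
```

Note that the solver hands back normalized eigenvectors, so hand-computed answers may differ from the returned columns by a nonzero scalar multiple.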

Worked example: a 2x2 matrix

Let

A = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}.

We will find the eigenvalues first, then the eigenvectors that go with them.

Step 1: Compute det(A - λI)

First form

A - \lambda I = \begin{bmatrix} 2 - \lambda & 1 \\ 0 & 3 - \lambda \end{bmatrix}.

Now take the determinant:

\det(A - \lambda I) = (2 - \lambda)(3 - \lambda).

Set the determinant equal to zero:

(2 - \lambda)(3 - \lambda) = 0.

So the eigenvalues are

\lambda = 2 \quad \text{and} \quad \lambda = 3.
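The same two values fall out numerically from the characteristic polynomial. A small check (assuming NumPy):

```python
import numpy as np

# (2 - lambda)(3 - lambda) expands to lambda^2 - 5*lambda + 6.
# np.roots takes polynomial coefficients in descending order of degree.
eigenvalues = np.roots([1.0, -5.0, 6.0])

print(np.sort(eigenvalues))  # [2. 3.]
```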

Step 2: Find the eigenvectors for λ = 2

Substitute λ = 2 into (A - λI)v = 0:

A - 2I = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}.

Let v = \begin{bmatrix} x \\ y \end{bmatrix}. Then

\begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

which gives y = 0. The variable x is free, so every nonzero vector on the x-axis works. A simple choice is

\begin{bmatrix} 1 \\ 0 \end{bmatrix}

and any nonzero multiple of it is also an eigenvector for λ = 2.

Step 3: Find the eigenvectors for λ = 3

Now use λ = 3:

A - 3I = \begin{bmatrix} -1 & 1 \\ 0 & 0 \end{bmatrix}.

Solve

\begin{bmatrix} -1 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

This gives -x + y = 0, so y = x. A simple choice is

\begin{bmatrix} 1 \\ 1 \end{bmatrix}

and any nonzero multiple of it is also an eigenvector for λ = 3.

Step 4: Check one pair

Take v = \begin{bmatrix} 1 \\ 1 \end{bmatrix} for λ = 3:

Av = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix}.

So the check works: Av = 3v.
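Both eigenpairs from the example can be verified the same way with a few lines of NumPy (assuming NumPy is available):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The two eigenpairs found above, as (lambda, v).
pairs = [(2.0, np.array([1.0, 0.0])),
         (3.0, np.array([1.0, 1.0]))]

for lam, v in pairs:
    # Av should equal lambda * v; allclose absorbs floating-point error.
    assert np.allclose(A @ v, lam * v)

print("both eigenpairs check out")
```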

Intuition: why these vectors are special

If you picture the plane being transformed by AA, most arrows tilt into new directions. Eigenvectors are the rare arrows that stay on their own line.

That is why they matter. They reveal the simple directions hidden inside the transformation, which is often more useful than staring at the matrix entries.

Common mistakes when solving for eigenvalues and eigenvectors

  1. Forgetting that eigenvectors must be nonzero.
  2. Solving det(A - λI) = 0 incorrectly, especially the determinant step.
  3. Finding the eigenvalues but not solving for the corresponding eigenvectors.
  4. Assuming every square matrix has enough independent eigenvectors to form a basis. Some do not.
  5. Assuming every real matrix has real eigenvalues. That depends on the matrix.

Where eigenvalues and eigenvectors are used

They show up whenever a linear process has preferred directions or natural modes.

Common examples include differential equations, vibration problems, dynamical systems, Markov models, and principal component analysis. The meaning changes by field, but the pattern is the same: find directions where the transformation acts like simple scaling.

Try a similar problem

Try the same process for

[4011].\begin{bmatrix} 4 & 0 \\ 1 & 1 \end{bmatrix}.

Find the eigenvalues first, then solve for the eigenvectors, and check one pair by direct multiplication. If you want to go one step further, try your own version in a solver and compare the eigenpairs, not just the final numbers.
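A sketch of that final comparison step with NumPy (assuming NumPy is available). Since solvers return normalized eigenvectors, check that each returned pair satisfies Bv = λv rather than expecting your hand-computed vectors entry for entry:

```python
import numpy as np

B = np.array([[4.0, 0.0],
              [1.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(B)

# Compare eigenpairs, not just the final numbers: your hand answer may
# differ from each returned column by a nonzero scalar multiple.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(B @ v, lam * v)

print(np.sort(eigenvalues.real))
```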
