A Markov chain is a model for a system that moves between states step by step, such as Sunny and Rainy. The key rule is that the next step depends only on the current state, if that is a reasonable assumption for the system you are modeling.

Those one-step probabilities are collected in a transition matrix. If the process is in state $i$ now and moves to state $j$ next with probability $P_{ij}$, then

$$P = (P_{ij})$$

For a finite Markov chain, each row of $P$ sums to $1$ because the process must go to one of the allowed next states.

What The Markov Property Means

The formal idea is

$$P(X_{n+1} = j \mid X_n = i, X_{n-1}, \ldots, X_0) = P(X_{n+1} = j \mid X_n = i)$$

This says that once you know the current state $X_n = i$, older history does not change the next-step probability in the model.

That condition matters. Some real systems have memory, trends, or delayed effects, so a Markov chain is only a good fit when "current state is enough" is a reasonable approximation.

How To Read A Transition Matrix

Suppose a simple weather model has two states:

  • Sunny
  • Rainy

Use this transition matrix:

$$P = \begin{bmatrix} 0.8 & 0.2 \\ 0.4 & 0.6 \end{bmatrix}$$

Read each row as the current state and each column as the next state.

So if today is Sunny, the model says tomorrow is Sunny with probability $0.8$ and Rainy with probability $0.2$. If today is Rainy, tomorrow is Sunny with probability $0.4$ and Rainy with probability $0.6$.
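One natural way to hold this convention in code is a row-per-current-state mapping. The sketch below stores the weather matrix above as nested dictionaries and samples a next state from the current state's row; the helper name `next_state` is just an illustration, not a standard API.

```python
import random

# Rows are current states, columns (inner keys) are next states.
STATES = ["Sunny", "Rainy"]
P = {
    "Sunny": {"Sunny": 0.8, "Rainy": 0.2},
    "Rainy": {"Sunny": 0.4, "Rainy": 0.6},
}

def next_state(current):
    """Sample tomorrow's state using the row for the current state."""
    row = P[current]
    return random.choices(STATES, weights=[row[s] for s in STATES])[0]

# Looking up "Sunny then Rainy" means row "Sunny", column "Rainy":
p_sunny_to_rainy = P["Sunny"]["Rainy"]  # 0.2
```

Keeping the row lookup explicit (`P[current][next]`) makes it harder to accidentally swap rows and columns.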

Worked Example: Weather Over Two Days

Suppose today's distribution is

$$\mathbf{v}_0 = \begin{bmatrix} 1 & 0 \end{bmatrix}$$

This means the model starts in Sunny with probability $1$.

The distribution tomorrow is

$$\mathbf{v}_1 = \mathbf{v}_0 P = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0.8 & 0.2 \\ 0.4 & 0.6 \end{bmatrix} = \begin{bmatrix} 0.8 & 0.2 \end{bmatrix}$$

So after one step, the model gives an $80\%$ chance of Sunny and a $20\%$ chance of Rainy.

After one more step,

$$\mathbf{v}_2 = \mathbf{v}_1 P = \begin{bmatrix} 0.8 & 0.2 \end{bmatrix} \begin{bmatrix} 0.8 & 0.2 \\ 0.4 & 0.6 \end{bmatrix} = \begin{bmatrix} 0.72 & 0.28 \end{bmatrix}$$

Now the Sunny probability is $0.72$ and the Rainy probability is $0.28$.

The point is not just the arithmetic. The matrix updates the whole probability distribution one step at a time, which is why Markov chains are useful for repeated processes.
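The two updates above can be sketched in plain Python. This is a minimal row-vector-times-matrix update, not a library routine; `step` is an illustrative name.

```python
def step(v, P):
    """One update of the distribution: v_{n+1} = v_n * P."""
    n = len(P)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.8, 0.2],
     [0.4, 0.6]]

v0 = [1.0, 0.0]      # start in Sunny with probability 1
v1 = step(v0, P)     # approximately [0.8, 0.2]
v2 = step(v1, P)     # approximately [0.72, 0.28]
```

Each call to `step` redistributes the probability mass according to the rows of $P$, which is exactly the matrix multiplication worked out above.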

Where Markov Chains Are Used

Markov chains are useful when a system changes in stages and you want probabilities for what happens next.

Common examples include weather models, board-game movement, queueing models, and simplified web navigation. In each case, the model only helps if the states are chosen well and the transition probabilities are realistic.

Common Markov Chain Mistakes

Treating Any Random Process As Markov

A process is not automatically a Markov chain just because it is random. The model needs the next-step behavior to be determined by the current state in the way you defined the states.

Forgetting What Rows Mean

People often mix up rows and columns. You need a consistent convention. On this page, rows are current states and columns are next states.

Using Invalid Probabilities

Each entry must be between $0$ and $1$, and each row must sum to $1$ for a standard transition matrix of a finite Markov chain.
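A quick way to catch this mistake is a validity check before using a matrix. This is a simple sketch (the function name and tolerance are choices, not a standard); the tolerance absorbs floating-point rounding in the row sums.

```python
def is_valid_transition_matrix(P, tol=1e-9):
    """Check every entry is in [0, 1] and every row sums to 1."""
    for row in P:
        if any(p < 0 or p > 1 for p in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True

weather = [[0.8, 0.2], [0.4, 0.6]]
bad = [[0.8, 0.3], [0.4, 0.6]]   # first row sums to 1.1
```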

Assuming The Model Predicts One Certain Future

A Markov chain usually gives probabilities, not certainty. Even if one state is more likely, several next states may still be possible.

Long-Run Behavior Depends On The Chain

Some Markov chains settle toward a stable long-run distribution, often called a stationary distribution. But that does not happen in every chain, and the details depend on properties of the chain such as how states communicate and whether the movement pattern is periodic.

So it is fine to think of repeated multiplication by $P$ as a way to study long-run behavior, but you should not assume convergence without checking the conditions.
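For the weather chain used on this page, repeated multiplication does settle down, and you can watch it happen numerically. The sketch below iterates $\mathbf{v}_{n+1} = \mathbf{v}_n P$ until the distribution stops changing; for this particular matrix the limit works out to $(2/3, 1/3)$, which you can confirm by solving $\pi = \pi P$ by hand. This convergence is a property of this chain, not a guarantee for every chain.

```python
def step(v, P):
    """One update of the distribution: v_{n+1} = v_n * P."""
    n = len(P)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.8, 0.2],
     [0.4, 0.6]]

v = [1.0, 0.0]
for _ in range(1000):
    nxt = step(v, P)
    if max(abs(a - b) for a, b in zip(nxt, v)) < 1e-12:
        break
    v = nxt
# For this chain, v approaches the stationary distribution (2/3, 1/3).
```

A chain that cycles deterministically between two states would never settle this way, which is why the periodicity caveat above matters.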

When A Markov Chain Is A Good Model

Use a Markov chain when all of these are reasonably true:

  • The process can be described by a manageable set of states.
  • Time moves in discrete steps, or you have chosen to model it that way.
  • Next-step probabilities are meaningfully determined by the current state.

If those conditions fail, the model may still be a rough approximation, but you should say that explicitly.

Try Your Own Version

Build a three-state model such as Low, Medium, and High demand. Choose row probabilities that each sum to $1$, pick an initial distribution, and compute the next step with $\mathbf{v}_{n+1} = \mathbf{v}_n P$. If you want to go further, try a second update and see whether the distribution starts to settle into a pattern.
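Here is one way that exercise could look. The transition probabilities below are made-up numbers chosen only so each row sums to $1$; pick your own and check that the updated distribution still sums to $1$.

```python
STATES = ["Low", "Medium", "High"]

# Hypothetical demand transitions; each row is a current state
# and must sum to 1.
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]

def step(v, P):
    """One update of the distribution: v_{n+1} = v_n * P."""
    n = len(P)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

v0 = [1.0, 0.0, 0.0]  # start in Low demand with probability 1
v1 = step(v0, P)      # first row of P: approximately [0.6, 0.3, 0.1]
v2 = step(v1, P)
```

Starting from a certain state, the first update simply reads off that state's row; the second update is where the mixing begins.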
