Lesson 3

Markov Chains and Stochastic Processes



Why This Matters

Have you ever wondered how weather forecasters predict if it'll rain tomorrow, based on today's weather? Or how a company might guess if a customer will switch brands? That's where Markov chains come in! They help us understand things that change over time, where the future only depends on the present state, not the whole past. Imagine you're playing a board game, and your next move only depends on where you are right now, not on all the squares you've landed on before. Markov chains are like that – they're a mathematical tool for modelling these kinds of 'memoryless' systems. They're super useful in science, business, and even predicting how things might evolve. In Further Maths, you'll learn how to use these chains to predict probabilities (the chances of something happening) and understand long-term behaviour. It's like having a crystal ball, but one that uses clever maths!

Key Words to Know

  1. Markov Chain — A mathematical model where the future state depends only on the current state, not on the past states.
  2. State — A specific condition or situation that a system can be in at a given time.
  3. Transition Probability — The chance (probability) of moving from one state to another state.
  4. Transition Matrix — A square table (matrix) that organises all the transition probabilities between states, with rows typically representing 'from' states and columns representing 'to' states.
  5. Stochastic Process — A process that involves randomness or probability in its changes over time.
  6. Initial State Vector — A row vector that shows the probabilities or proportions of the system being in each state at the very beginning.
  7. Steady State (Equilibrium State) — The long-term distribution of probabilities for each state, where the probabilities no longer change significantly with further transitions.
  8. Memoryless Property — The key characteristic of a Markov chain, meaning the future depends only on the present, not on the history leading up to the present.

What Is This? (The Simple Version)

Think of a Markov chain like a game where you move between different 'states' (situations or conditions). The cool thing is, your next move (or state) only depends on where you are right now, not on how you got there. It's like having a short memory!

Imagine you're a chameleon changing colours. You can be green, brown, or grey. If you're currently green, you might have a certain chance of becoming brown next, and another chance of staying green. What colour you were two hours ago doesn't matter for what colour you'll be in the next moment.

These chances of moving from one state to another are called transition probabilities. They tell us how likely it is to 'transition' (move) from one state to another. When we talk about stochastic processes, we're just saying that these changes involve randomness or probability – like rolling a dice, you can't be 100% sure what will happen, but you know the chances.

Real-World Example

Let's imagine a small coffee shop called 'The Daily Grind' and its customers. Customers can either be Loyal (they always come to The Daily Grind) or Switchers (they might go to another coffee shop).

Here's how it works:

  • If a customer is Loyal today, there's an 80% chance they'll be Loyal tomorrow, and a 20% chance they'll become a Switcher (maybe they tried a new place).
  • If a customer is a Switcher today, there's a 40% chance they'll become Loyal tomorrow (they liked The Daily Grind this time!), and a 60% chance they'll remain a Switcher (still trying other places).

This is a Markov chain! The customer's state tomorrow (Loyal or Switcher) only depends on their state today, not on what they did last week. We can use this to predict how many loyal customers the shop might have in the long run.
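To see the memoryless property in action, here is a minimal Python sketch (the language, variable names, and `next_state` helper are choices made for this illustration, not part of the lesson) that simulates one customer's state day by day using the probabilities above:

```python
import random

# Simulate a single customer of 'The Daily Grind', day by day.
# Each day's state depends only on the previous day's state
# -- this is the memoryless property of a Markov chain.
TRANSITIONS = {
    "Loyal":    {"Loyal": 0.80, "Switcher": 0.20},
    "Switcher": {"Loyal": 0.40, "Switcher": 0.60},
}

def next_state(current):
    """Pick tomorrow's state using only today's state."""
    probs = TRANSITIONS[current]
    return random.choices(list(probs), weights=list(probs.values()))[0]

random.seed(1)  # seed chosen arbitrarily, for a reproducible run
state = "Loyal"
history = [state]
for _ in range(7):          # simulate the next 7 days
    state = next_state(state)
    history.append(state)
print(history)
```

Running this several times (with different seeds) gives different day-by-day paths, but over many runs the proportions settle towards the long-run behaviour discussed later.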

Representing the Chain (Step by Step)

How do we write down these chances (transition probabilities) in a neat way?

  1. Identify the States: First, figure out all the possible 'situations' or 'conditions' your system can be in. For our coffee shop, the states are 'Loyal' and 'Switcher'.
  2. Draw a Transition Diagram: This is like a map! Draw circles for each state and arrows between them. Each arrow gets a number (a probability) showing the chance of moving from one state to another. Make sure the probabilities on the arrows leaving each state add up to 1 (or 100%).
  3. Create a Transition Matrix: This is a special table (a matrix) that holds all the probabilities. Rows show 'from' states, and columns show 'to' states. Each entry is the probability of moving from the row state to the column state. For our coffee shop, it would be a 2x2 matrix.
  4. Understand Initial State: You often start with a 'distribution' (how things are spread out) of states. For example, 70% Loyal and 30% Switcher today. This is usually written as a row vector.
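The four steps above can be sketched in Python (a minimal illustration using the coffee-shop numbers; the variable names are invented for this example):

```python
# Step 1: states, in a fixed order -- index 0 = Loyal, index 1 = Switcher.
states = ["Loyal", "Switcher"]

# Step 3: transition matrix. Rows are 'from' states, columns are 'to' states.
P = [[0.80, 0.20],   # from Loyal:    80% stay Loyal, 20% become Switcher
     [0.40, 0.60]]   # from Switcher: 40% become Loyal, 60% stay Switcher

# Step 4: initial state vector -- 70% Loyal, 30% Switcher today.
initial = [0.70, 0.30]

# The check from step 2: probabilities leaving each state must sum to 1,
# so every ROW of the matrix (and the initial vector) must sum to 1.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-9
assert abs(sum(initial) - 1.0) < 1e-9
print("matrix and initial vector are valid")
```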

Predicting the Future (Step by Step)

Once you have your transition matrix, you can predict what happens next!

  1. Multiply by the Matrix: To find the probabilities after one step, multiply the current state (row) vector by the transition matrix: `[new state vector] = [current state vector] x [transition matrix]`.
  2. Repeat for Later Steps: Multiplying by the transition matrix again moves you one more step into the future, so applying it n times gives the probabilities after n steps.
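A minimal Python sketch of this prediction, using the coffee-shop matrix and the 70% Loyal / 30% Switcher starting vector (the `step` helper is invented for illustration):

```python
# Transition matrix for the coffee-shop chain: [Loyal, Switcher] order.
P = [[0.80, 0.20],
     [0.40, 0.60]]

def step(state, matrix):
    """One transition: [state vector] x [transition matrix]."""
    return [sum(state[i] * matrix[i][j] for i in range(len(state)))
            for j in range(len(matrix[0]))]

state = [0.70, 0.30]    # 70% Loyal, 30% Switcher today
for day in range(1, 4):
    state = step(state, P)
    print(f"day {day}: Loyal = {state[0]:.4f}, Switcher = {state[1]:.4f}")
# day 1 gives 0.68 Loyal; each later day edges closer to the steady state.
```

Notice how the Loyal proportion moves from 0.70 towards a fixed value, which is exactly the steady-state behaviour covered later in the lesson.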

Common Mistakes (And How to Avoid Them)

Here are some common traps students fall into and how to dodge them:

  • Mixing up rows and columns in the transition matrix: remember that rows are 'from' states and columns are 'to' states, so it is each *row* that must sum to 1, not each column.


Exam Tips

  1. Always draw a transition diagram first, even if not asked, as it helps visualise the probabilities and build the matrix correctly.
  2. Double-check that the probabilities in each *row* of your transition matrix sum to 1. This is a common error point.
  3. When calculating future states, remember the order of multiplication: `[state vector] x [transition matrix]` for the next step.
  4. For steady state questions, set up the equation `[steady state vector] x [transition matrix] = [steady state vector]` and remember that the elements of the steady state vector must sum to 1.
  5. Clearly define your states at the beginning of any problem to avoid confusion.
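The steady-state equation in tip 4 can be solved directly for a two-state chain. A minimal Python sketch using the coffee-shop matrix (the variable names are my own):

```python
# Steady state: we need pi x P = pi with pi_L + pi_S = 1.
# Writing pi_S = 1 - pi_L, the first component of pi x P = pi becomes
#   pi_L = pi_L * p_LL + (1 - pi_L) * p_SL
# which rearranges to pi_L * (1 - p_LL + p_SL) = p_SL.
P = [[0.80, 0.20],
     [0.40, 0.60]]

p_LL, p_SL = P[0][0], P[1][0]      # 'into Loyal' probabilities
pi_L = p_SL / (1 - p_LL + p_SL)    # = 0.4 / 0.6 = 2/3
pi_S = 1 - pi_L
print(f"steady state: Loyal = {pi_L:.4f}, Switcher = {pi_S:.4f}")

# Check the vector really is unchanged by the chain:
assert abs(pi_L * P[0][0] + pi_S * P[1][0] - pi_L) < 1e-9
```

So in the long run about two-thirds of customers are Loyal, whatever the initial split, which matches where the day-by-day predictions were heading.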