On Jan 23, 1913 (exactly 100 years ago this week), a bombastic Russian mathematician by the name of Andrey Markov stepped up to a podium and delivered a modestly received lecture that was destined to reshape the scientific world. Little heralded in its day, the concept of Markov chains has become fundamental to modern science, statistics, and scientific computing.
The mnemonic that introduced me to Markov’s ideas was the “Monopoly Chain”: the probability of where you will end up on your next move depends only on what space you’re on right now, not on how you got there. This simple notion about the interdependency of random events turns out to be extremely powerful in its ability to describe and predict a remarkable range of complex processes. As a core abstraction it is essential to the science of computational simulation. As a Harvard journalist observed,
> Any attempt to simulate probable events based on vast amounts of data — the weather, a Google search, the behavior of liquids, natural language processing, computer vision and AI — relies on Markov’s idea.
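The “Monopoly Chain” intuition is easy to see in code. Below is a minimal sketch (not Monopoly’s real rules — just a 40-space loop and two dice, with board size and step count chosen for illustration): each move depends only on the current space, and after enough steps the chain “forgets” where it started.

```python
import random

def next_space(current, board_size=40, rng=random):
    """One Markov step: the next space depends only on the current space."""
    roll = rng.randint(1, 6) + rng.randint(1, 6)  # roll two dice
    return (current + roll) % board_size

def simulate(steps, start=0, seed=42, board_size=40):
    """Walk the board and count how often each space is visited."""
    rng = random.Random(seed)
    space = start
    visits = [0] * board_size
    for _ in range(steps):
        space = next_space(space, board_size, rng)
        visits[space] += 1
    return visits

visits = simulate(100_000)
# On this plain loop (no Go To Jail, no Chance cards), the long-run
# occupancy is roughly uniform: the chain forgets its starting space.
```

Real Monopoly is more interesting precisely because cards and the jail rule skew this distribution — which is exactly the kind of question a Markov chain answers.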
So in honor of this momentous Markovian moment, I thought I would curate a few lunch-time reading links to refresh your memory…
- For a little taste of the back-story, a Harvard journalist describes “An idea that changed the world”
- If you can’t quite remember the gist of the concept, here’s a great 2-minute introduction to Markov chains (and Monte Carlo to boot; courtesy of Stack Exchange).
- Or, for a deeper presentation of the subject, I might recommend “An Introduction to MCMC for Machine Learning” (Andrieu et al., 2003), or the more recent “Introduction to Markov Chain Monte Carlo” (Geyer, 2011).
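For a taste of what those deeper references formalize: a finite Markov chain is just a transition matrix, and repeatedly applying it reveals the chain’s long-run (stationary) behavior. Here is a minimal sketch using a made-up 3-state matrix (the numbers are illustrative, not from any of the papers above):

```python
# Hypothetical 3-state chain; each row of P sums to 1, and
# P[i][j] is the probability of moving from state i to state j.
P = [[0.900, 0.075, 0.025],
     [0.150, 0.800, 0.050],
     [0.250, 0.250, 0.500]]

def step(dist, P):
    """One step of the chain: new_j = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]  # start certain we are in state 0
for _ in range(100):
    dist = step(dist, P)
# dist has converged to the stationary distribution of this matrix,
# approximately [0.625, 0.3125, 0.0625] -- independent of the start state.
```

MCMC methods run this idea in reverse: they construct a chain whose stationary distribution is the one you actually want to sample from.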