If you’re wondering where ergodicity economics comes from and where it sits in the world of science, then this blog post is for you. I won’t go into detail about any of its results. Instead, I will focus on its origins and history. This history is superficial and selective. It is intended to hint at the culture of which ergodicity economics forms one small part.

I assume that you know vaguely what ergodicity means and why it’s important. If you don’t, then I recommend this video Alex Adamou and I made a few years ago.

Since this is a history, let’s do things chronologically. In our case this means we start with the beginnings of probability theory. Quantitative thinking about random events goes back quite a bit, mostly because of gambling, and because of insurance deals: if we have to bet on the outcome of a throw of the dice, or quote someone a fee for insuring a ship, we have to make a quantitative decision; we have to arrive at a dollar amount to bet or quote. In this sense, the story really begins in Babylonian times, but what we would today recognize as probability theory starts in earnest in the 1650s with a famous exchange of letters between Fermat and Pascal.

The correspondence between Fermat and Pascal, discussed in The unfinished game, led to the invention of expected value.

1650

The two mathematicians invented the concept of expected value: the probability-weighted sum of the possible payouts of a gamble. For example, if you toss a fair coin to win $50 or lose $40, then the expected value is a gain of $5. Their invention led to the first economic model for behavior under uncertainty: expected-value theory. This model says that when faced with a choice between different uncertain outcomes, people choose the option which maximizes expected value. For example, people would say yes to playing the coin toss because its expected value is greater than the expected value of not playing (which is $0 because nothing changes if we don’t play).
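The arithmetic can be checked in a few lines of Python (a minimal sketch; the function name is my own):

```python
# Expected value: the probability-weighted sum of possible payouts.
def expected_value(outcomes):
    """outcomes: iterable of (probability, payout) pairs."""
    return sum(p * x for p, x in outcomes)

# Fair coin toss: win $50 or lose $40, each with probability 1/2.
coin_toss = [(0.5, 50.0), (0.5, -40.0)]
print(expected_value(coin_toss))  # prints 5.0
```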

1710

A few decades later, we could say that Nicolas Bernoulli became the first ergodicity economist when he challenged the meaningfulness of expected value. He did this in several ways, most famously by introducing the St. Petersburg gamble. This gamble is defined so that its expected value is infinite. Intriguingly, it’s also a gamble no one would pay very much to play, and this second fact meant that the behavioral expected-value model was not a good one. It didn’t capture people’s actual behavior.

Excerpt from Nicolas Bernoulli’s letter where he introduces the St. Petersburg gamble.
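The divergence of the expected value is easy to see in code (a minimal sketch; I use the doubling payout convention, which varies between retellings of the gamble):

```python
# St. Petersburg gamble: toss a fair coin until the first heads.
# If heads arrives on toss k, the payout is 2**k (one common convention).
def partial_expected_value(n):
    # Each outcome contributes prob * payout = 2**-k * 2**k = 1 dollar,
    # so the partial sums grow without bound.
    return sum(2**-k * 2**k for k in range(1, n + 1))

print(partial_expected_value(100))  # prints 100.0
```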

1730

The gamble posed a puzzle, or paradox: the world of humans was not as we’d imagined. Economics soon settled on an answer to this problem and declared humans psychologically or cognitively challenged. Apparently, people shied away from profitable opportunities out of what one could only describe as risk aversion — some personal tendency to dislike risk. This is the standard interpretation of Daniel Bernoulli’s 1738 paper on the matter.

1870

For a good 200 years not a whole lot happened in probability theory, until physicists finally woke up to its promise and began to include randomness in their models of the world. These models dealt with systems much simpler than human beings; instead they were about molecules and gases, lifeless simple matter. This simplicity allowed far greater scrutiny. Precise predictions could be made and accurate experiments carried out. It soon brought up a question that hadn’t been asked before: what is the actual ontological meaning of expected value? Sure, it’s an average over imagined possible futures, but is it sometimes also something physical? Is it specifically sometimes the value of a measured quantity averaged over time?

In other words, physicists — Boltzmann, Maxwell, Gibbs and others — asked the ergodicity question. And they answered it: only under very restrictive conditions is the expected value also the time-average value. Boltzmann’s contribution to ergodicity economics is the invention of the ergodicity concept. He didn’t say much about economics, as far as I know.

But very much in the zeitgeist, just around the time when Boltzmann coined the word “ergodicity,” in 1870 the mathematician William Allen Whitworth wrote down a treatment of the general gambling problem, which is precisely in the spirit of ergodicity economics. Whitworth pointed out that a fair gamble when repeated multiplicatively is a losing proposition. Toss a coin many times for ±10% of your evolving wager, and you’re bound to lose about 0.5% per round on average over time.

Excerpt from Whitworth’s book “Choice and chance.” The book also contains a similar treatment of the St. Petersburg paradox.
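Whitworth’s observation can be checked directly (a minimal sketch using the ±10% stakes from above): the gamble is fair in expectation, but the growth factor experienced over time is the geometric mean of the up and down moves, which is less than 1.

```python
import math

up, down = 1.10, 0.90   # win or lose 10% of the evolving wager

expected_factor = 0.5 * up + 0.5 * down  # 1.0: fair in expectation
time_avg_factor = math.sqrt(up * down)   # geometric mean, about 0.99499
loss_per_round = 1 - time_avg_factor     # about 0.005, i.e. 0.5% per round
```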

In 1875, Galton and Watson published a paper which studies a different kind of ergodicity breaking, namely the problem of extinction. The Galton-Watson process has a non-zero probability of continuing forever, and also a non-zero probability of dying out. Expected values in this process mix universes where the process is dead forever with universes where it continues forever, whereas time averages depend on the particular universe one inhabits.

1900

In 1900 Louis Bachelier invented the random walk as a model of stock prices. If x(t) performs a random walk, then increments \Delta x:=x(t+\Delta t)-x(t) are ergodic, but x(t) itself is not. Bachelier’s model, developed in his Ph.D. thesis under supervision of Henri Poincaré, may be the first non-ergodic model in economics.
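The distinction can be illustrated with a small simulation (a sketch; the parameters and helper name are my own): time-averaging the increments of one long trajectory agrees with ensemble-averaging them, whereas the time average of the position x(t) is itself random and differs from path to path.

```python
import random

random.seed(42)

def walk(n_steps):
    """One random-walk path of +/-1 increments, starting from 0."""
    x, path = 0.0, []
    for _ in range(n_steps):
        x += random.choice((-1.0, 1.0))
        path.append(x)
    return path

T, N = 20_000, 100  # steps per path, number of paths
paths = [walk(T) for _ in range(N)]

# Increments are ergodic: the time average along one path and the
# ensemble average across paths both converge to the same value, 0.
time_avg_increment = paths[0][-1] / T
ensemble_avg_increment = sum(p[-1] for p in paths) / (N * T)

# The position x(t) is not ergodic: its time average along a single
# path is itself random and differs substantially between paths.
time_avg_positions = [sum(p) / T for p in paths[:3]]
```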

Also in 1900, David Hilbert presented a list of 23 unsolved problems in mathematics, for the 20th century. Under number 6 in his list he wished for “a rigorous and satisfactory development of the method of mean values in mathematical physics” — in other words: why can we sometimes use expected value in statistical mechanics? This goes to the heart of the ergodicity problem in physics.

Excerpt from David Hilbert’s lecture at the Mathematical Congress in Paris in 1900, calling for clarification of the meaning of expected value in physics.

1930

Ergodic theory was taking shape, with von Neumann and Birkhoff independently publishing clear mathematical definitions of ergodicity in 1932.

In the first half of the 20th century, Bachelier’s model was extended to the multiplicative random walk, and its continuous version, geometric Brownian motion (GBM), which became the standard baseline model for stock prices in the theory of finance. It is a generalization of Whitworth’s multiplicative fair coin toss, and it plays a critical role in ergodicity economics.
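For GBM the gap between the two averages is explicit (a minimal sketch with arbitrary illustrative parameters): the ensemble average E[x(t)] grows at the rate μ, while almost every individual trajectory grows at the smaller rate μ − σ²/2.

```python
import math
import random

random.seed(7)
mu, sigma = 0.05, 0.30    # drift and volatility (illustrative values)
dt, n_steps = 1.0, 100_000

# Exact log-space discretization of geometric Brownian motion.
log_x = 0.0
for _ in range(n_steps):
    noise = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    log_x += (mu - sigma**2 / 2) * dt + noise

time_avg_growth = log_x / (n_steps * dt)  # near mu - sigma**2/2 = 0.005
ensemble_growth = mu                      # growth rate of E[x(t)]: 0.05
```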

1940

In 1944 Kiyosi Itô published his paper “Stochastic Integral”, which introduced stochastic calculus: a toolkit for studying the non-linear effects of fluctuations, which the linear expected value cannot capture. Itô’s work is crucial for studying the consequences of ergodicity breaking in economic models.

Ergodicity, being an equivalence between the average over a large ensemble (the expected value) and the average over time, is really about the relationship between individuals and the collectives they form. Where that relationship is trivial, ergodicity prevails, and the collective behaves just like a (big) individual. But where that relationship is interesting, the collective is systematically different from the individual: the whole is more than the sum of its parts, social dilemmas arise, and systematic conflict can exist between different levels of social organization.

Erwin Schrödinger broached this subject in his book “What is life?”: why does a collective of many lifeless things, like molecules, differ systematically from those lifeless things and display life? What would become complexity science is deeply intertwined with the ergodicity problem.

1950

In 1956 John Kelly published his work on gambling, where he proposed a behavioral protocol which optimizes the time-average growth rate of repeated multiplicative (i.e. stock-market-like) investments. Like Whitworth before him, he focused on long-term average performance, not on expected value. The broken ergodicity of geometric Brownian motion makes these two averages different and explains the deviation of people’s real behavior from optimizing expected value. Kelly did not use the word “ergodicity” but his thinking clearly follows this logic.
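Kelly’s prescription can be stated in a few lines (a sketch; the function names are my own): choose the betting fraction that maximizes the time-average growth rate, not the expected value.

```python
import math

# Bet a fraction f of wealth on a gamble paying b-to-1 with win
# probability p. The time-average growth rate per round is
#   g(f) = p*log(1 + f*b) + (1 - p)*log(1 - f),
# maximized by the Kelly fraction f* = p - (1 - p)/b.
def kelly_fraction(p, b):
    return p - (1 - p) / b

def growth_rate(f, p, b):
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                # 60% win chance at even odds
f_star = kelly_fraction(p, b)  # 0.2: bet 20% of wealth each round

# Betting more or less than f* grows wealth more slowly over time.
assert growth_rate(f_star, p, b) > growth_rate(f_star + 0.05, p, b)
assert growth_rate(f_star, p, b) > growth_rate(f_star - 0.05, p, b)
```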

William Poundstone’s book “Fortune’s Formula” discusses Kelly’s work and its reception in detail.

The Whitworth-Kelly result can be mapped to a basic form of expected-utility theory, assuming logarithmic utility. This mapping infuriates many economists. In their view, utility functions represent the idiosyncratic psychologies of people and are testament to our individuality. The claim that some utility functions are better than others for physical, mathematical reasons is, to them, a denial of human individuality. Of course, it’s just a mathematical fact, and the vehement opposition to it feels perfectly quixotic. Nonetheless, Samuelson called Kelly’s work “a complete swindle” (see p. 237, Fortune’s Formula).

Ed Thorp and Claude Shannon used the result to make money in blackjack card-counting schemes and stock-market investments.

In 1957 Aitchison and Brown published their book on the log-normal distribution, where they note (on p. 23) that if personal wealth follows geometric Brownian motion, wealth inequality will increase indefinitely. This is an early recognition of one of the consequences of ergodicity breaking in geometric Brownian motion.

Figure from Aitchison and Brown, showing a mechanical device which simulates 9 time steps of a discrete geometric Brownian motion.

1960

In 1968, Samuelson wrote that in economics an “assumption implicit and explicit in the classic mind [is] the ergodic hypothesis.” His polemics against Kelly’s mathematical result suggest that he is no fan of questioning the hypothesis. The term “ergodicity” is used explicitly here.

1970

In 1972 Phil Anderson wrote his classic paper “More is different” on properties — such as Schrödinger’s life — which emerge only in the collective, fundamental differences between the individual and the ensemble.

1980

Around 1980, Brian Arthur and two Ukrainian colleagues studied urns of Pólya type, fully aware of the ergodicity problem. Arthur proposed these systems as non-ergodic economic models for the adoption of competing technologies, and he highlighted their broken ergodicity as the key property which makes them realistic and relevant to real-life economic processes.

Also around 1980, Paul Davidson warned against assuming ergodicity in economics. As far as I know, he did not propose how to use and analyze non-ergodic economic models, but his work names and recognizes the ergodicity problem in the economic context.

In physics, meanwhile, complex systems theory had blossomed, with a deep understanding of ergodicity breaking, especially in the statistical mechanics of far-from-equilibrium systems. Spin glass theory is one field where much progress was made on understanding ergodicity breaking. In 1980, Bernard Derrida proposed the random energy model for spin glasses. This can be mapped to geometric Brownian motion, and much insight is gained from this mapping.

Murray Gell-Mann, Phil Anderson, Ken Arrow, George Cowan, David Pines and others pushed for the establishment of an institution dedicated to the study of complex systems, where ergodicity is broken as a rule and where collective behavior is not a trivial extrapolation of individual behavior. This led to the founding of the Santa Fe Institute.

Stuart Kauffman elaborated on the role of broken ergodicity in complex systems: when there isn’t enough time to explore all possible states, for instance evolutionary time to explore all possible organismal designs, then optimization of any objective will be imperfect and actual realized designs will be path dependent. Much like in Arthur’s Pólya urns, running evolution again will not lead to the same outcome.

1990

In 2000 Jean-Philippe Bouchaud and Marc Mézard published their paper on wealth condensation, in a model which goes beyond Aitchison and Brown’s geometric Brownian motion in that it includes interaction among agents. Other groups discovered and studied the same model independently, for instance Marsili, Maslov, and Zhang in 1998, Liu and Serota in 2017, and my own group. We later point out that a key parameter in this model controls its ergodic properties, switching between mean-reverting wealth shares and a super-charged rich-get-richer phase.

2010

In 2011 I published a solution of the St. Petersburg paradox, arguing that because of the broken ergodicity in multiplicative random growth the optimal behavior of an individual gambler would not optimize expected wealth but the time-average growth rate. As in Kelly’s and Whitworth’s cases, the result can be mapped to using expected logarithmic utility. I’m explicit about the broken ergodicity and its consequences: we don’t need utility to solve the St. Petersburg paradox; properly accounting for broken ergodicity is enough.

Ergodicity becomes a theme on the fringes of economics, and with great interest from finance theory and practice. Many papers on it are published, and the topic is discussed in popular books and in the press. In 2016 I publish Evaluating gambles using dynamics with Murray Gell-Mann. This includes the infamous coin toss, a simple discrete form of geometric Brownian motion, which I introduced in a public lecture in 2011. Alex Adamou and I generalize the problem to find optimal utility functions for arbitrary Itô wealth processes. Peter Carr and Umberto Cherubini discover the same generalization independently in a discrete setup.

With my group at the London Mathematical Laboratory, we begin to feel that we need a name to refer to the diffuse body of knowledge which is crystalizing around us. We settle on “ergodicity economics.” We mean by this a way of doing economics which puts the ergodicity problem center-stage.

In 2019 I publish “The ergodicity problem in economics” in Nature Physics. The editor later informs me that it has become the most-downloaded article ever in the journal. The field is now wide open, any problem in economics can be re-examined: can we make progress by carefully investigating the ergodic properties — ergodicity, ergodicity breaking, relevant time scales, ensemble sizes etc. — of the system under study?

2020

In 2021 Meder et al. publish results from laboratory experiments, exploring the range of validity — the range of superiority over traditional decision-making models — of some simple ergodicity-economics models. A program for studying the problem is set up at the Danish Research Center for Magnetic Resonance. Machine learning and neuroscience become new areas of application of the ideas. Experimental work also begins at Vrije Universiteit Brussel.

An image used as a stimulus in the Copenhagen experiments on ergodicity economics.

By “ergodicity economics” we mean the perspective that the individual may over time experience something systematically different from the average over the statistical ensemble. Over the past 4 centuries, numerous researchers have stumbled across this problem and have used it to critique mainstream economics, or constructively to contribute to our alternative understanding of behavior. In the past, many of these researchers acted as lone wolves because the problem had not yet become part of mainstream economic knowledge. Most of these researchers were not economists and only wrote a single paper or book chapter on the subject. Like all of them, I knew nothing about this strange community, dotted over the centuries and continents, when I started studying the problem. I have made some original contributions to the field, and those discoveries were exciting, notwithstanding the fact that others would have sooner or later made the same discoveries. It gives me great joy that beyond this I have contributed to synthesis: the individual researchers are all connected in their thinking, and the result is a coherent body of work.


Curious about Ergodicity Economics?
Join our mailing list.

Seriously curious?
Get the textbook.



7 thoughts on “Ergodicity economics — a history”

  1. Thanks for posting this, it’s a very helpful map to the development of the subject, as well as a great reading list.
    I recently read Fortune’s Formula in anticipation of the conference, and Samuelson’s criticism of Kelly’s work was something I was struggling to understand. It seems to me that a significant basis for the disagreement was whether one’s objective was prediction versus prescription. For most economists (including Samuelson it seems), the goal is to predict economic outcomes for a given population/ensemble, implicitly or explicitly in service of a moral/ethical supra-goal of delivering insights that might enable improving its total utility. But for investors and gamblers, the objective is to make money within a given risky opportunity space. Different problems, different solutions. Seems this might explain why Samuelson was not satisfied with Kelly’s approach. It did not necessarily align to serving the greater good by taking into account possible non-monetary costs and benefits.

  2. As a longtime amateur probabilist (discovering Feller’s book in college in 1966 was the start) I just (Nov ’24) stumbled across your articles and this blog. To say “very interesting” is vastly understated. Have started reading Piketty. Now wondering if he will make reference to anything like your comment regarding the Aitchison and Brown book “… if personal wealth follows geometric Brownian motion, wealth inequality will increase indefinitely.” Thank you for the wonderful, detailed, thoughtful and well written work.
