Ergodicity, we all know, is a formal mathematical concept which applies to mathematical models. In order to think about cooperation using the ergodicity concept we therefore have to write down a formal model where cooperation may or may not take place.

Our starting point is the infamous coin toss I invented many years ago, and we will generalize it. If you’ve read the blog post on it, you can skip this section and go straight to “They give that they may live”.

The coin toss is a simple example of random multiplicative growth: on heads you increase your current wealth by 50%; on tails you lose 40% of it (for ergodicity economics regulars, this is a discretized form of geometric Brownian motion). This represents the situation where there’s a quantity x of some stuff. This stuff has the multiplicative property of self-reproduction, which is common to all living things: it grows and shrinks by an amount in proportion to how much of it is currently there. The growth has a systematic trend, and also a lot of randomness. Examples of real systems with such growth include the biomass of bacteria in a sugar-filled Petri dish, or dollar-wealth in an investment account, or the population of rabbits in a field.

The coin toss has become a go-to model for explaining ergodicity and ergodicity-breaking. In stochastic processes, ergodicity expresses a type of stationarity. It implies that the expected value of a randomly changing quantity is the same as the time average of that quantity. In the coin toss, ergodicity is broken. The expected value after one round of playing is 5% more wealth (of course: there’s an equal chance of 50% up and 40% down). Since the rules of the game don’t change, expected wealth grows exponentially at 5% per round for all eternity. This suggests that it’s a game worth playing, but the ergodicity problem tells us that what happens with certainty to the expected value may not be what happens with certainty to individual wealth over time, and the coin toss is a case in point.

Over time, an individual playing the game experiences 50% gains and 40% losses in sequence, and in the long run, they will see just as many ups as downs (because the coin is fair). The result of a 50% up *followed by* a 40% down is not two 5% gains but a 10% loss, going from $100, say, first to $150 (the 50% up) and then down to $90 (the 40% down). The way the mathematics works out, we thus have the curious situation where the expected value of our wealth increases by 5% per round, but over time our particular personal wealth is guaranteed to decrease by about 5% per round (10% every 2 rounds).
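
This arithmetic can be checked in a couple of lines. A minimal sketch — the factors 1.5 and 0.6 encode the 50% gain and the 40% loss:

```python
import math

UP, DOWN = 1.5, 0.6   # growth factors: +50% on heads, -40% on tails

# Expected growth factor per round: the arithmetic mean of the two outcomes
expected_factor = 0.5 * UP + 0.5 * DOWN   # 1.05, i.e. +5% per round

# Time-average growth factor: equally many ups and downs compound in
# sequence, so the per-round factor is the geometric mean of the outcomes
time_avg_factor = math.sqrt(UP * DOWN)    # about 0.949, i.e. about -5% per round

print(expected_factor, time_avg_factor)
```

The geometric mean squared is 1.5 × 0.6 = 0.9 — exactly the 10% loss every two rounds described above.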

This innocuous-looking gamble is a powerful tool, a window affording us a rather different view of economics, ecology, evolution, and complexity science. Hence all the excitement about it. To get an idea of how it behaves, choose how many rounds of the gamble you want to see and hit “Simulate” in the app below. The green straight line is the analytical result for what happens in the long run to an individual player; the red straight line is the expected value, and the wriggly random blue line is what happened to your particular wealth in this simulation. Simulate as often as you like to get an idea for how variable the results are.

But in this blog post we are going one step further: it’s not about a lonesome coin but about cooperation. This coin toss is taunting us — it has that wonderful expected value, increasing exponentially, and yet when we play it, we’re bound to lose. Isn’t there some trick we can apply? Some way of harvesting something of those great expectations, carrying over the promise from the statistical ensemble into the individual trajectory?

The answer is yes — and that’s one reason why ergodicity economics has become such a hot topic. There’s a very simple cooperation protocol which allows us to benefit from the coin toss. Here it is: find a partner, independently play one round each, then pool your wealth and split it evenly. Then play the next round independently, as illustrated in the figure below.

With the parameters of the gamble, pairing up in this way leads to a time-average growth rate of the wealth of the cooperating pair of -0.2% per round, compared to -5% per round for the individual player — the cooperators outperform the non-cooperators exponentially and almost break even. With one extra cooperator applying the same protocol — play independently, pool wealth, share equally — the gambling gang moves into positive territory. The time-average growth rate of the cooperating triumvirate is +1.5% per round. In this simple game, Gibran is very literally — mathematically — right: the solitary entity decays, dies with certainty. A few entities who have learned to give, on the other hand, may live.

Try it out by setting the number of cooperators to something bigger than one in the app below.
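
Away from the app, the growth rates quoted above can be checked by exact enumeration over all coin-toss outcomes of a group of n cooperators (a sketch, not the code behind the app):

```python
from math import comb, log

UP, DOWN = 1.5, 0.6   # +50% on heads, -40% on tails

def time_avg_growth(n):
    """Time-average growth rate per round for n cooperators who toss
    independently, then pool their wealth and share it equally."""
    # k heads occur with probability C(n, k) / 2^n; the pooled wealth is
    # then multiplied by the average factor (k*UP + (n-k)*DOWN) / n
    return sum(
        comb(n, k) * 0.5**n * log((k * UP + (n - k) * DOWN) / n)
        for k in range(n + 1)
    )

for n in (1, 2, 3):
    print(n, time_avg_growth(n))
```

The three values come out close to the −5%, −0.2%, and +1.5% per round quoted above, and as n grows the rate approaches ln(1.05), the growth rate of the expected value.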

The cooperating gamblers are not doing anything new, in a sense. They’re still just gambling, they haven’t developed any special skills, they can’t predict how the coin will land. All they’ve learned is to share, and that alone allows them to outperform their non-cooperating peers (or former selves) exponentially.

We can keep growing the group, and in the limit of infinitely many cooperators, wealth grows at the growth rate of the expected value.

Naturally, this is quite a change in perspective for researchers who are used to optimizing expected wealth (that’s most economists, for instance). Such researchers see no value in cooperation unless new function emerges from the interaction. I can lift you up on my shoulders, and together we’re tall enough to reach an apple on a tree. That sort of thing is understood, where my shoulders acquire the new function of lifting someone up, which they cannot have while I’m alone. But the value of simply agreeing to share my apples with you is not appreciated.

Here is the reason why economics undervalues cooperation, and it’s oddly convoluted so I recommend reading the next two sentences carefully. By focusing on expected value, mainstream economics focuses on an object which grows as fast as the wealth of an infinite cooperative. Adding cooperation in this situation, where it is inappropriately assumed that perfect cooperation already exists, naturally seems pointless. Hence the impression of cold-heartedness we get from mainstream economic theory? I think so. I think this is one important reason, at least.

For full details, including the effect of differences in skills and of correlations among coin tosses of individual gamblers, read our paper in the Royal Society’s Philosophical Transactions (open access). Using time-average growth rates as the single criterion, there are limits for who should join, and who should be accepted into, a cooperating group. But the limits look different from what linear expected-value thinking suggests. The most skilled can still do better by joining a less skilled collective, and we see mathematically how important it is to maintain diversity and avoid loss of identity in a cooperating group. In a stability analysis published in Physical Review E (free version here), Lorenzo Fant et al. show that cooperation can be worth it even if your partner is less cooperative than you.

We are beginning to relate the mathematics to real social observations. For example, Athena Aktipis and her coworkers have studied attitudes and moral codes concerning cooperation in different societies, for instance among Maasai pastoralists in East Africa. Their work indicates that where survival is key, more generous systems of mutual aid emerge. Generosity may be thought of as a spectrum reaching from the individual coin toss, where no aid is ever received or provided, to cooperative coin tossing where “aid” is provided at every step whether it’s needed or not. In between lie different forms, where records may be kept to ensure future repayment of aid, or where the severity of need determines the degree of aid provided, with no expectation of repayment.

The coin toss says: where unintended consequences can be avoided, the more sharing takes place the faster we will make progress. The optimal level of cooperation appears not as the minimum required to avoid disaster. It is instead the maximum we can get away with without triggering unintended consequences.

We will discuss the fundamental insurance puzzle – why do insurance contracts exist? Because this is such a basic question, we don’t need to go into great actuarial detail, but we do need to sketch what we mean by insurance. In essence, the story is this: you’re exposed to some risk, something that might happen in the future (a fire, a car crash, etc.). A bad event beyond your control with bad consequences. Insurance contracts allow you to mitigate the financial aspect of those consequences: you can buy an insurance policy for a fee, and you will receive a cash payout if the bad thing happens.

We formalise this in purely financial terms as follows. You currently have some wealth x, and you’re exposed to the risk of losing an amount L, which happens with probability p over a future time period \Delta t. You can insure yourself against this loss by paying a fee F.

Intuitively, we all know that we will want to buy such contracts under certain circumstances. We may want to insure our house against fire, or our car against theft, provided the fee isn’t too high. So what’s the puzzle?

The puzzle arises when we consider that it takes two parties to set up an insurance contract – someone, some entity, has to be on the other side of the deal. Someone has to be willing to sell you the insurance at a fee which you are willing to pay. Clearly, the seller wants to charge as much as possible, and you want to pay as little as possible. So a fundamental question arises: is there any price at which both the seller and the buyer will be happy to sign the deal?

If the fee is such that you’re happy to pay it to get rid of the risk, then clearly, your assessment is that that’s better than keeping the fee and the risk. But why, then, would your counterparty (usually an insurance company) come to the different assessment that it’s better to receive the fee and the risk? If this difference in opinion doesn’t exist, then no contract will be signed — for a contract to emerge, both parties must feel they’re improving their position by signing.

Colloquially, we have established that there could be a problem. We will now show that economic theory formally encounters this problem when it analyses insurance contracts through the lens of expected value. Let’s write down the change in your expected wealth brought about by signing the insurance contract. Compared to your wealth before signing, you lose the fee and gain the expected loss (because you’re no longer exposed to it). In other words, your expected future wealth changes by the amount

Eq.1 \Delta E(x_{\text{buyer}}) = + E(L) - F

You get rid of the expected loss, which is a positive contribution to your future expected wealth, and you pay the fee, which is a negative contribution. However, similarly computing the change in expected wealth for the seller of insurance yields

Eq.2 \Delta E(x_{\text{seller}}) = - E(L) + F .

Compared with Eq.1, this is just the negative of the result for yourself. In general, we have the symmetry

Eq.3 \Delta E(x_{\text{buyer}}) = - \Delta E(x_{\text{seller}}) ,

which makes insurance a zero-sum game in terms of expected wealth. That’s the problem: whatever the seller gains in expectation is what the buyer loses. Whatever fee we choose, one of the parties always loses in expectation and therefore has no reason to sign the contract.
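
The zero-sum bookkeeping of Eqs.1–3 is trivial to verify with any numbers; the ones below are hypothetical, chosen only for illustration:

```python
p, L, F = 0.05, 95.0, 10.0     # hypothetical numbers, not from the text

expected_loss = p * L          # E(L) in Eqs. 1-3
d_buyer = +expected_loss - F   # Eq.1: buyer sheds the expected loss, pays the fee
d_seller = -expected_loss + F  # Eq.2: seller takes on the loss, receives the fee

print(d_buyer, d_seller)       # one party's gain is exactly the other's loss (Eq.3)
```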

**The insurance puzzle:** *according to expected-wealth theory, insurance contracts should not exist.*

At this level of analysis, insurance is a murky business: it seems that the game is for one party to outsmart the other, misrepresent risks or their probabilities, or look for unsophisticated or outright irrational buyers or sellers to exploit.

Classical economics doesn’t have a good answer to this problem, or it agrees with the murky-business interpretation. The classic answers are “asymmetric information” (I know something you don’t) or “asymmetric risk preferences” (I’m more or less comfortable with risks than you are). The first answer boils down to that murky business, and the second is really just a restatement of what we’re trying to explain — the real question, though, is: why do people have different risk preferences?

In the case of insurance, the immediately important quantity to focus on is wealth: how does personal wealth behave over time? How might we model it? As so often, it’s helpful to use the model of multiplicative growth, where equal percentage changes in your wealth have fixed probabilities over time. This reflects the fact that if you have a lot of wealth, you can invest a lot and stand to gain or lose large dollar amounts, and if you have little wealth, well, then you can’t invest much. The returns on investments you can make – whether financial investments in the stock market or investments in your health, education, or living conditions – scale to some extent with the level of your wealth.

In multiplicative growth, the expected value is no good guide to the evolution of the wealth of an individual, as canonically exemplified by the infamous coin toss or explained in this video. Instead of expected wealth – because of the ergodicity breaking in multiplicative growth – time-average growth rates of wealth better capture what happens to the actual wealth of an individual over time.

Considering the insurance problem in terms of time-average growth rates solves the insurance puzzle, as detailed in arXiv1507.04655 (2015). While the symmetry in Eq.3 holds for expected wealth, it does not hold for time-average growth rates: both the buyer and the seller of insurance can gain over time by signing insurance contracts, even though this is not possible in the expected-wealth picture favoured by mainstream economic theory. Using the temporal perspective of ergodicity economics breaks the symmetry in Eq.3. Not because of asymmetric information, not because of irrational perspectives on risk, but simply because the agents may have different wealth situations.

We can now revisit the insurance puzzle in terms of changes in time-average growth rates. For the buyer, we consider the time-average growth rate of the buyer’s wealth, both with insurance,

Eq.4 g_{\text{buyer}}^{\text{with}} = \frac{1}{\Delta t} \ln\left(\frac{x_{\text{buyer}}(t) - F}{x_{\text{buyer}}(t)}\right)

and without insurance,

Eq.5 g_{\text{buyer}}^{\text{without}} = \frac{1}{\Delta t} p \ln\left(\frac{x_{\text{buyer}}(t)-L}{x_{\text{buyer}}(t)}\right).

Both growth rates are always negative in our setup because the buyer’s wealth is guaranteed to drop, or stay unchanged in the best case. We can also see that the growth rates diverge negatively as either the fee or the potential loss approaches the wealth of the buyer. This reflects the fact that the multiplicative dynamic we are investigating has a natural boundary at zero: losing everything in a multiplicative game means we cannot recover, because even if we multiply zero by an arbitrarily large gain we will still stay at zero.

Next, we use these growth rates to compute the maximum fee at which it is still beneficial to buy insurance, F_{\text{buyer}}^{\text{max}}. This is the value where the time-average growth rate without insurance, g_{\text{buyer}}^{\text{without}}, equals the time-average growth rate with insurance, g_{\text{buyer}}^{\text{with}},

Eq.6 \underbrace{\frac{1}{\Delta t}\ln\left(\frac{x_{\text{buyer}}(t)-F_{\text{buyer}}^{\text{max}}}{x_{\text{buyer}}(t)}\right)}_{g_{\text{buyer}}^{\text{with}}}= \underbrace{\frac{1}{\Delta t}p \ln\left(\frac{x_{\text{buyer}}(t)-L}{x_{\text{buyer}}(t)}\right)}_{g_{\text{buyer}}^{\text{without}}}.

Rearranging and substituting from Eq.5, we find

Eq.7 F_{\text{buyer}}^{\text{max}}=x_{\text{buyer}}(t)\left[1-\exp(g_{\text{buyer}}^{\text{without}}\Delta t )\right].

Staring at this equation, let’s remember that g_{\text{buyer}}^{\text{without}} diverges negatively as the potential loss approaches the wealth of the insurance buyer. Eq.7 tells us that someone facing the potential loss of everything he owns is well advised to pay almost his entire wealth as an insurance fee, even if the loss will only happen with a small probability. A fee far greater than the expected loss can be extracted from an individual facing an existential risk, and an individual in such dire straits is, in a sense, well advised to pay such a fee.
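
Eq.7 is easy to evaluate. The sketch below uses hypothetical numbers — wealth 100, a possible loss of 95 with probability 0.05 per period — to show that the maximum acceptable fee can far exceed the expected loss:

```python
import math

def f_buyer_max(x, L, p, dt=1.0):
    # Eq.5: the buyer's time-average growth rate without insurance
    g_without = (p / dt) * math.log((x - L) / x)
    # Eq.7: the largest fee at which insuring still beats going uninsured
    return x * (1.0 - math.exp(g_without * dt))

x, L, p = 100.0, 95.0, 0.05   # hypothetical numbers, not from the text
print(f_buyer_max(x, L, p))   # maximum acceptable fee
print(p * L)                  # expected loss, for comparison
```

With these numbers the maximum acceptable fee is about 13.9, nearly three times the expected loss of 4.75, and it approaches the buyer’s entire wealth as L approaches x.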

Next, we follow a similar argument to arrive at the minimum fee at which it is long-term beneficial to sell insurance, F_{\text{seller}}^{\text{min}}. Again, we write down the time-average growth rates with and without insurance, this time for the wealth of the seller

Eq.8 g_{\text{seller}}^{\text{with}}=(1-p)\frac{1}{\Delta t}\ln\left(\frac{x_{\text{seller}}(t)+F}{x_{\text{seller}}(t)}\right)+p \frac{1}{\Delta t}\ln\left(\frac{x_{\text{seller}}(t)+F-L}{x_{\text{seller}}(t)}\right)

and

Eq.9 g_{\text{seller}}^{\text{without}}=0 .

Equating Eq.8 and Eq.9, we again find the fee at which signing the contract makes no difference to the time-average growth rate and arrive at the expression

Eq.10 \left(\frac{x_{\text{seller}}(t)+F_{\text{seller}}^{\text{min}}}{x_{\text{seller}}(t)}\right)^{(1-p)}=\left(\frac{x_{\text{seller}}(t)+F_{\text{seller}}^{\text{min}}-L}{x_{\text{seller}}(t)}\right)^{(-p)}.

Eq.10 does not have a neat closed-form solution but is easily solvable numerically.
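
One way to solve it is bisection on the seller’s growth rate, Eq.8, whose sign change brackets the root. A sketch with hypothetical numbers — a seller with wealth 1000 facing the same 95-with-probability-0.05 loss as before:

```python
import math

def f_seller_min(x, L, p, dt=1.0, iters=100):
    """Smallest fee at which selling the contract does not reduce the
    seller's time-average growth rate (Eq.10), found by bisection."""
    def g(F):  # Eq.8; the benchmark without insurance is g = 0 (Eq.9)
        return ((1 - p) * math.log((x + F) / x)
                + p * math.log((x + F - L) / x)) / dt
    lo = max(0.0, L - x) + 1e-12 * max(x, L)  # keeps x + F - L > 0, g(lo) < 0
    hi = L                                    # g(L) > 0, so a root is bracketed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return hi

print(f_seller_min(1000.0, 95.0, 0.05))   # hypothetical numbers, not from the text
```

With these hypothetical numbers the minimum fee comes out just under 5: above the expected loss of 4.75, but well below the roughly 13.9 maximum that Eq.7 gives a buyer with wealth 100 — so a mutually agreeable fee range exists.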

From the perspective of ergodicity economics, a mutually beneficial range of insurance fees exists, and agents optimizing time-average growth rates will sign insurance contracts, whenever

Eq.11 F_{\text{seller}}^{\text{min}} <F_{\text{buyer}}^{\text{max}}.

So does this actually work as an explanation for the emergence of insurance? In other words, do agents who use the time-average growth rate to decide whether to buy or sell insurance outperform agents who are trapped in expected-value thinking and only sign contracts which increase their expected wealth?

This question was recently addressed numerically in Annals of Actuarial Science (2023), 17, 215–218, and we reproduce the findings here. Agents are split into two groups with different responses to risk. Agents in group A sell and buy insurance contracts from each other when this increases their time-average growth rate. Agents in group B are fully informed rational expected-wealth optimizers and don’t sign insurance contracts as a result of the insurance puzzle. Group A acts according to the ergodicity-economics model, group B acts according to expected-wealth theory. For maximum simplicity, we have just 2 agents in each of the two groups, all starting with the same wealth, x(0) = 1 , and in each time step, we randomly choose one agent in each group to face a risk. To keep the comparison between the two groups as fair as possible, we make sure that if, for example, agent 1 of group A faces a risk, then it’s also agent 1 in group B who faces a risk. In the simulation below, you can set the risk that the agents face by changing the loss fraction — that’s the potential loss as a proportion of current wealth — as well as the loss probability. By default, the loss fraction is 0.95, so that L=0.95 x_i(t), and the loss probability is p=0.05, but feel free to play around with the parameters. Again, we keep the comparison between the groups fair: if agents 1 are facing a risk, then either agents 1 in both groups suffer a loss or neither does.

Results are shown in the app below. Hit “Simulate” a few times to get a feeling for the typical performance of agents in the two groups.

The first thing we notice is that over time, the expected wealth (red dashed line) is an unachievable fiction for all agents. Next, we see the main result, namely that when the risk is high, in the long run, the 2 agents (blue) who judge insurance by time-average growth and often set up contracts exponentially outperform the 2 agents (orange) who judge insurance by expected wealth and always reject it, despite the identical loss fraction and loss probability in both groups.
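
A minimal sketch of this kind of simulation can be put together from Eq.7 and a bisection solve of Eq.10. Everything below is a reconstruction, not the published code: in particular, I assume the contract fee is set halfway between the seller’s minimum and the buyer’s maximum whenever that window exists, and I take \Delta t = 1.

```python
import math
import random

P, LOSS_FRAC, ROUNDS = 0.05, 0.95, 5000  # loss probability, loss fraction, rounds

def f_buyer_max(x, L, p):
    # Eq.7: the largest fee the buyer should accept
    return x * (1.0 - ((x - L) / x) ** p)

def f_seller_min(x, L, p, iters=100):
    # Eq.10 by bisection: the smallest fee at which selling breaks even
    def g(F):  # Eq.8; the benchmark without insurance is g = 0 (Eq.9)
        return (1 - p) * math.log((x + F) / x) + p * math.log((x + F - L) / x)
    lo = max(0.0, L - x) + 1e-12 * max(x, L)  # keeps x + F - L > 0, g(lo) < 0
    hi = L                                    # g(L) > 0, so a root is bracketed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return hi

random.seed(0)
wealth_A = [1.0, 1.0]  # time-average optimizers: insure when both sides gain
wealth_B = [1.0, 1.0]  # expected-wealth optimizers: never insure
for _ in range(ROUNDS):
    i = random.randrange(2)     # the exposed agent (same index in both groups)
    j = 1 - i                   # the potential insurer in group A
    hit = random.random() < P   # the same loss event strikes both groups
    if hit:                     # group B always carries the risk itself
        wealth_B[i] *= 1.0 - LOSS_FRAC
    L = LOSS_FRAC * wealth_A[i]  # group A: look for a mutually beneficial fee
    fmax = f_buyer_max(wealth_A[i], L, P)
    fmin = f_seller_min(wealth_A[j], L, P)
    if fmin < fmax:             # a contract window exists: split the difference
        fee = 0.5 * (fmin + fmax)
        wealth_A[i] -= fee
        wealth_A[j] += fee
        if hit:
            wealth_A[j] -= L    # the seller covers the loss
    elif hit:
        wealth_A[i] -= L        # no viable contract: uninsured loss

print(sum(wealth_A), sum(wealth_B))
```

Both groups shrink in this pure-loss setup, but group A, where losses are repeatedly absorbed by the insurer instead of compounding on one agent, ends up exponentially ahead of group B.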

Looking at these results systemically, some interesting questions emerge, which are the subject of our currently ongoing research. For example, rich entities tend to insure the poor ones, creating a flow of wealth from poor to rich via the fees that are being paid. This is already interesting: without coercion, purely because it is long-term optimal for both parties, the poor pay the rich. This raises the question of what kind of ecology of rich and poor emerges. What sort of wealth distribution do these dynamics give rise to? When we look at the simulations above, it seems that Group A, where insurance contracts are signed, has less inequality than Group B, despite the flow of wealth from poor to rich. One effect we’ve observed is that while the richest agent can dominate for a long time (depending on the parameters of the simulation), it can never fully break away from the herd. As the richest agent begins to dominate systemically, it eventually runs out of willing insurers, and abrupt shifts in the wealth hierarchy can occur. This interplay between poor and rich is an interesting question relating to all kinds of real-world happenings, from financial crises to political revolutions. We will share what we find out about these questions here on the blog in the future, and of course, you are invited to run your own simulations and join us in our explorations.

References:

O. Peters, Insurance as an ergodicity problem. Annals of Actuarial Science (2023), 17, 215–218.

O. Peters and A. Adamou, Insurance makes wealth grow faster. arXiv1507.04655 (2015).

The code for the simulations in the AAS article can be downloaded here.

The gamble I’m about to describe, now sometimes called “the Peters coin toss,” is discussed in detail in my 2016 paper with Murray Gell-Mann, there’s a YouTube video about it on the ergodicity.tv channel, and my public talk from 2011 is also available online. In this post, I will present the basic setup, hint at its significance, and then mention a few generalizations.

Imagine I offer you the following gamble. I toss a fair coin, and if it comes up heads I’ll add 50% to your current wealth; if it comes up tails I will take away 40% of your current wealth. A fun thing to do in a lecture on the topic is to pause at this point and ask the audience if they’d like to take the gamble. Some will say yes, others no, and usually an interesting discussion of people’s motivations emerges. Often, the question comes up whether we’re allowed to repeat the gamble, and we will see that this leads naturally to the ergodicity problem.

The ergodicity problem, at least the part of it that is important to us, boils down to asking whether we get the same number when we average a fluctuating quantity over many different systems and when we average it over time. If we try this for the fluctuating wealth in the Peters coin toss the answer is no, and this has far-reaching consequences for economic theory.

Let’s start with averaging wealth, x_i(t), over an ensemble of many different systems. In our case this corresponds to N different players, each starting with x_i=\$100, say, and each tossing a coin independently. After the coins have been tossed, about half of the people will have thrown heads, and the other half tails. As the number of players goes to infinity, N\to\infty, the proportions of heads and tails will approach 1/2 exactly, and half the players will have $150, the other half $60. In this limit, we know what the ensemble average will be, namely \langle x(1)\rangle = 1/2\left(\$150 +\$60\right) = \$105. For historical reasons, this average is also called the expected value, and for the Peters coin toss, it grows by 5% in every round of the gamble so that

\langle x(t)\rangle = \$100\times 1.05^t .

We can recover this average numerically by setting up a Monte Carlo simulation: let a large number, N, of agents play the game for T rounds, then average over N. You can see what happens for T=10 as you increase N in the app below.

Choose the number of players and hit “Simulate.” The red straight line shows the expected value for the first 10 rounds (starting at 1). The light grey lines are the wealth trajectories of individual players, and the solid blue line is the average over the ensemble.
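
The same ensemble average can be recovered in a few lines of Monte Carlo (a sketch; the player count and seed are arbitrary choices):

```python
import random
import statistics

random.seed(1)
N, T = 100_000, 10    # number of players and rounds (illustrative choices)
UP, DOWN = 1.5, 0.6   # +50% on heads, -40% on tails

def play(rounds):
    """Wealth after `rounds` independent tosses, starting from 1."""
    x = 1.0
    for _ in range(rounds):
        x *= UP if random.random() < 0.5 else DOWN
    return x

ensemble_avg = statistics.fmean(play(T) for _ in range(N))
print(ensemble_avg)   # close to the expected value 1.05**10 for large N
```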

To see that the gamble is not ergodic, let’s now find the average value of wealth in a single trajectory in the long-time limit (not in the large-ensemble limit). Here, as T grows, again the proportions of heads and tails converge to 1/2. But, crucially, a head and a tail experienced sequentially is different from two different agents experiencing them. Starting at x_1(0)=\$100, heads takes us to x_1(1)=\$150, and following this sequentially with tails, a 40% loss, takes us down to x_1(2)=\$90 — we have lost 10% over two rounds, or approximately 5% per round. Since we lose 5% per round, averaged over time, individual wealth is guaranteed to approach zero (or negative infinity on logarithmic scales) in the long-time limit T\to\infty. You can try this out too, in the app below, by simulating the game for different numbers of repetitions.

Choose the number of rounds to play and hit “Simulate.” The red straight line shows the expected value, the green straight line decays at the time-average growth rate. The light grey lines are the wealth trajectories of individual players, and the solid blue line is the wealth of the one individual we’re simulating (all starting at 1 as before).
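
The time average can be checked the same way with one long trajectory instead of a large ensemble (a sketch; the number of rounds and seed are arbitrary choices):

```python
import math
import random

random.seed(2)
T = 100_000              # rounds in the single trajectory
log_wealth = 0.0         # work in log space to avoid numerical under/overflow
for _ in range(T):
    log_wealth += math.log(1.5) if random.random() < 0.5 else math.log(0.6)

g_time = log_wealth / T  # per-round time-average growth rate
print(g_time)            # close to (ln 1.5 + ln 0.6)/2, about -0.053
```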

We have thus arrived at the intriguing result that wealth averaged over many systems grows at 5% per round, but wealth averaged in one system over a long time shrinks at about 5% per round. Plotted on logarithmic vertical scales, this gives one of the iconic images of ergodicity economics, the featured image of this post (taken from my 2019 Nature Physics paper).

The significance of this ergodicity breaking cannot be overstated. First, all living processes, including economic growth processes, are similar to the coin toss in the sense that they rely on self-reproduction. The number of rabbits, or viruses, or dollars in your trading account grows in a self-reproducing noisy multiplicative way (until some carrying capacity is reached), just like wealth in the Peters coin toss. Second, most mainstream economic decision theories are based on the concept of expected value, and all of that has to be questioned in the presence of ergodicity breaking. Third, one core problem of economics and politics is to address conflicts between an individual, for example a citizen, and a collective, for example a state. This is the question of societal organization, institutions, Rousseau’s social contract and so on. This problem can seem puzzling, and it often attracts naive answers, because the collective consists of individuals. How, then, can the interests of the individual be misaligned with those of the collective? One important answer is ergodicity breaking.

The coin toss is often the starting point for more detailed investigations. We may allow players to withhold some of their wealth and only subject a fraction of it to the coin toss dynamics. This gives us Kelly betting and optimal leverage. We can allow players to pool their resources, which leads to the ergodicity solution of the cooperation puzzle, and to the emergence of complexity. Or players may pool a proportion of their wealth, leading to reallocating geometric Brownian motion and an intriguing perspective on the dynamics of wealth inequality.

I’ll end here and encourage you to gamble away and meditate on the many curious stories this simple little coin toss has to tell us.
