Ergodicity economics is an umbrella term for approaches that address issues in economics research by carefully considering the ergodicity problem — the problem that the expected value of a quantity may differ from its time average. One key finding which illustrates the field’s significance within the history of economics is the mapping between utility functions and ergodicity transformations. This mapping provides a physical (as opposed to psychological) interpretation of where utility functions really come from. It also highlights a weakness of expected-utility theory: agents who follow the prescriptions of expected-utility theory do not maximize utility in the long run; agents who follow the prescriptions of ergodicity economics, on the other hand, do.
Perhaps the simplest way to explain the problem is to consider the infamous coin toss. Here, expected wealth goes up as time passes, while wealth itself is guaranteed to go down in the long run. In terms of expected-utility theory this is problematic. Imagine a linear-utility agent, meaning a so-called risk-neutral agent, who just cares about dollars, not some non-linear function of dollars. Because wealth is then the same as utility, expected utility goes up in the infamous coin toss, but utility itself is guaranteed to go down. We conclude: expected utility is a bad thing to maximize if you want to maximize utility.
This is worth saying again: if you want utility, don’t optimize expected utility because you’re bound to lose actual utility. How is this possible? Well, broken ergodicity. Expected utility is an average over the statistical ensemble, and that looks rosy. But you don’t have access to the average over the statistical ensemble, only to one realization. And in each individual realization you’re guaranteed to lose as time goes on. If this is puzzling, then ergodicity economics is for you. It’s all about understanding this deep puzzle and using our understanding of it to make sense of the real world via modern mathematical models.
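To make the coin toss concrete, here is a minimal simulation sketch in Python (NumPy assumed available). The payout numbers, +50% of current wealth on heads and −40% on tails, are the canonical ones from the ergodicity-economics literature; the expected wealth multiplier per round is then 1.05, while the time-average growth factor is \sqrt{1.5 \times 0.6} \approx 0.95.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_rounds = 10_000, 1_000

# Multiplicative coin toss: heads multiplies wealth by 1.5, tails by 0.6,
# each with probability 1/2 (the canonical example from the EE literature).
factors = rng.choice([1.5, 0.6], size=(n_agents, n_rounds))
terminal_wealth = factors.prod(axis=1)

print(0.5 * 1.5 + 0.5 * 0.6)        # 1.05: expected wealth grows each round...
print(np.sqrt(1.5 * 0.6))           # ~0.949: ...but each trajectory decays
print(np.median(terminal_wealth))   # effectively zero after 1,000 rounds
```

The ensemble average grows like 1.05^t, yet the median agent is ruined — that is the broken ergodicity described above.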
Mini history of decision theory
Expected-value theory
Historically, the first fully quantitative model (1650s) of human behavior under risk was expected-value theory (EVT). It was invented in the context of simple gambling tasks, like tossing a coin to win or lose some amount of money. Let’s say someone offers me such a gamble, and I want to apply the EVT framework. I would follow these steps (a code sketch follows the list):
1. Model the gamble on offer as a random variable whose possible values are the possible net changes in dollar wealth (like net-win $5 or net-lose $3), with sensible probabilities (like 50/50 for a fair coin toss).
2. Compute the expected value of the random variable.
3. If this is positive, accept the gamble; otherwise reject.
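In code, the three EVT steps amount to very little. A minimal sketch in Python, using the $5/−$3 fair coin toss from step 1:

```python
# Expected-value theory: accept a gamble iff its expected dollar change is positive.
def evt_accepts(outcomes, probabilities):
    expected_value = sum(p * x for p, x in zip(probabilities, outcomes))
    return expected_value > 0

# Fair coin: net-win $5 or net-lose $3, each with probability 1/2.
print(evt_accepts([5, -3], [0.5, 0.5]))  # True: the expected value is +$1
```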
Expected-utility theory
It was soon noticed that this model was not a good description of what real people do. Often, we reject gambles with positive expected value. This realization led to the development of expected-utility theory (EUT, 1730s). In this framework, it is posited that people don’t value money linearly. To reflect this, a “utility function” u(x) is defined. It increases with wealth, x, and is usually non-linear to account for the failure of expected-value theory. For instance, if the utility function is concave, as is typically assumed, then a given dollar loss decreases utility more than a dollar gain of the same size increases it. Two hundred and fifty years later, concavity was repackaged as the alliterative phrase “losses loom larger than gains.”
The steps of expected-utility theory are very similar to those of expected-value theory (again, a code sketch follows the list).
1. Model the changes in utility as a random variable (given by the transformation u(x)).
2. Compute its expected value.
3. Accept the gamble if the expected utility change is positive.
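Here is the corresponding sketch for EUT. The concave utility u(x) = \ln(x) is an illustrative assumption (it is the \eta = 1 member of the isoelastic family used later in this post); note that the decision now depends on current wealth:

```python
import math

# Expected-utility theory: accept iff the expected change in utility is positive.
# u(x) = ln(x) is an illustrative concave utility (the eta = 1 isoelastic case).
def eut_accepts(wealth, outcomes, probabilities, u=math.log):
    expected_du = sum(p * (u(wealth + x) - u(wealth))
                      for p, x in zip(probabilities, outcomes))
    return expected_du > 0

# The same fair $5/-$3 coin toss as above:
print(eut_accepts(100, [5, -3], [0.5, 0.5]))  # True: accepted when wealthy
print(eut_accepts(4,   [5, -3], [0.5, 0.5]))  # False: rejected when poor
```

A positive-expected-value gamble is thus rejected at low wealth, which is exactly the behavior EVT could not describe.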
The message from expected-utility theory is this: you don’t really care about dollars, you care about the usefulness of dollars.
The promise of expected-utility theory is, of course: specify your utility function, stick it into the formalism, follow what the formalism tells you to do, and you will maximize utility (though possibly not wealth).
But EUT does not keep its promise. You can see this immediately: utility is an increasing function of wealth. It’s non-linear, but we still prefer more wealth to less wealth. Therefore, more wealth always means more utility. If the expected-utility framework does not maximize wealth, then it also doesn’t maximize utility.
So what is going on? In a second, we will go through an explicit example calculation, but before we do that, here is the problem: expected-utility theory maximizes expected utility. But expected anything is something very strange — physically (if that’s the right word), it’s an average over parallel universes. Perhaps it’s better to say that it has no immediate physical real-world meaning. Consequently, maximizing it doesn’t guarantee anything about the real world; only about a mathematical ensemble of hypothetically possible worlds. Sometimes such an imagined ensemble has real-world significance, but if that is the case, it has to be carefully established, not simply assumed.
The leverage problem
Since this is all a bit abstract, let’s move on to a concrete computation. The example is a standard problem every ergodicity economist, and indeed every finance student, encounters early on in their education — leveraged geometric Brownian motion.
Here, an agent gets to decide how much risk to take by choosing its exposure to a risky asset whose value, y(t), follows geometric Brownian motion,
(Eq. 1) dy=y(\mu dt + \sigma dW),
with the usual nomenclature and notation: \mu is called the drift, \sigma the volatility, t is time, and dW is the increment of a Wiener process.
The agent chooses how much of its wealth to invest in this asset — that’s the so-called leverage problem. In other words, the agent chooses the leverage, l, in the wealth process
(Eq. 2) dx=lx(\mu dt + \sigma dW).
For simplicity, we have assumed here that the risk-free interest rate is zero — the agent can borrow for free and receives nothing for deposits. As a consequence, if the agent chooses to invest nothing in the risky asset, leverage l=0, it will gain nothing, and wealth is constant, dx=0. With leverage l=1, everything is invested in the risky asset and Eq. 2 looks just like Eq. 1; with leverage l=2, both the drift and the volatility of the agent’s wealth are twice as large as those of the risky asset, and so on.
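For readers who want to play with the dynamic, here is a sketch of Eq. 2 using an Euler–Maruyama discretization (Python; the parameter values and step size are illustrative assumptions):

```python
import numpy as np

def simulate_wealth(l, mu=0.05, sigma=0.2, x0=1.0, T=100.0, dt=0.01, seed=0):
    """Euler-Maruyama discretization of Eq. 2: dx = l*x*(mu*dt + sigma*dW)."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), size=int(T / dt))  # Wiener increments
    return x0 * np.cumprod(1.0 + l * (mu * dt + sigma * dW))

# l = 0: wealth stays constant; l = 1: wealth tracks the asset exactly;
# l = 2: the agent's drift and volatility are double those of the asset.
for l in (0.0, 1.0, 2.0):
    print(l, simulate_wealth(l)[-1])
```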
Ergodicity economics solution
Ergodicity economics tells us how to solve the leverage problem [1] by choosing the leverage which guarantees the greatest wealth in the long run. After a little Ito calculus (spelled out below Eq. 3) we arrive at the time-optimal leverage, which is
(Eq. 3) l_{\text{opt}}^{\text{EE}}=\frac{\mu}{\sigma^2}.
At this leverage, our wealth x(t) grows faster than at any other leverage as time passes.
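For readers who want the intermediate step spelled out: applying Ito’s formula to \ln x under the dynamic Eq. 2 gives

d(\ln x) = \left(l\mu - \frac{1}{2}l^2\sigma^2\right) dt + l\sigma dW.

Along a single trajectory, the dW term averages out, so the time-average growth rate is g(l) = l\mu - \frac{1}{2}l^2\sigma^2. Setting \frac{dg}{dl} = \mu - l\sigma^2 = 0 yields Eq. 3.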
Expected-utility theory solution
Expected-utility theory is conceptually different: it finds the leverage which maximizes expected utility, not the leverage that makes wealth grow fastest. Because utility is monotonic in wealth, agents who follow the recommendations of ergodicity economics are guaranteed to obtain more utility in the long run than expected-utility agents.
That’s a bit crazy — expected-utility theory prides itself on helping people obtain utility, and ergodicity economics is often criticized for focusing on wealth and not utility. Nonetheless, ergodicity economics guarantees not only greater wealth but also greater utility in the long run.
We illustrate this by computing the optimal leverage according to EUT (which maximizes expected utility) for a popular class of utility functions, called isoelastic utilities, which have the following form
(Eq. 4) u(x; \eta)= \frac{x^{1-\eta}-1}{1-\eta},
where \eta is usually called the risk-aversion parameter — the greater it is, the more an EUT agent will shy away from risk. We now compute the expected change in this utility function, given the wealth dynamic, Eq. 2, and then find the leverage for which it is maximized. Using the utility function in Eq. 4, we apply Ito’s formula to find the changes in utility,
(Eq. 5) du = \frac{\partial u}{\partial x} dx + \frac{\partial u}{\partial t} dt + \frac{\partial^2 u}{\partial x \partial t} dx\,dt + \frac{1}{2}\frac{\partial^2 u}{\partial t^2} dt^2 + \frac{1}{2}\frac{\partial^2 u}{\partial x^2} (dx)^2 + \text{h.o.t.}
The terms containing dx\,dt and dt^2 are of higher order than dt and vanish (and u has no explicit time dependence in any case), while in Ito calculus dW^2 = dt, so that
du = \frac{\partial u}{\partial x} dx + \frac{1}{2}\frac{\partial^2 u}{\partial x^2} (dx)^2 + \text{h.o.t.} = \frac{\partial u}{\partial x}\, x(l\mu\, dt + l\sigma\, dW) + \frac{1}{2}\frac{\partial^2 u}{\partial x^2}\, x^2 l^2 \sigma^2\, dt + \text{h.o.t.}
Next, we drop the higher-order terms (h.o.t.) and substitute the first and second derivatives of the utility function. They are \frac{\partial u}{\partial x}=x^{-\eta} and \frac{\partial^2 u}{\partial x^2}=-\eta x^{-\eta-1}.
Substituting into Eq. 5, we find that the changes in utility are
(Eq. 6) du = x^{1-\eta} \left[\left(l\mu-\frac{\eta}{2}l^2\sigma^2\right)dt + l \sigma dW\right].
The expected value of the utility change is now trivially calculated (since E(dW)=0, it just amounts to dropping the term with dW) as
(Eq. 7) E(du)=x^{1-\eta} (l\mu-\frac{\eta}{2}l^2\sigma^2)dt.
It only remains to maximize with respect to the leverage,
(Eq. 8) \frac{d\,E(du)}{dl}=x^{1-\eta}(\mu-\eta\sigma^2 l)\, dt,
which is zero at
(Eq. 9) \mu=\eta\sigma^2 l,
implying the EUT-optimal leverage
(Eq. 10) l_{\text{opt}}^{\text{EUT}}=\frac{\mu}{\eta \sigma^2}.
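As a quick sanity check on the maximization in Eqs. 7–10, here is a symbolic computation sketch (Python with SymPy assumed available):

```python
import sympy as sp

x, l, mu, sigma, eta, dt = sp.symbols('x l mu sigma eta dt', positive=True)

# Eq. 7: E(du) = x**(1 - eta) * (l*mu - (eta/2) * l**2 * sigma**2) * dt
E_du = x**(1 - eta) * (l * mu - sp.Rational(1, 2) * eta * l**2 * sigma**2) * dt

# Differentiate with respect to l and solve for the optimum (Eqs. 8-10):
l_opt = sp.solve(sp.diff(E_du, l), l)
print(l_opt)  # [mu/(eta*sigma**2)]
```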
The value of this EUT-optimal leverage is entirely set by the utility function of the agent, here represented by its risk-aversion parameter \eta. Depending on the agent’s psychology and intrinsic preferences, EUT-optimal leverage can take any value from negative to positive infinity. Therefore, outside the small range of values 0<l_{\text{opt}}^{\text{EUT}}<\frac{2\mu}{\sigma^2} (the range of leverages at which the time-average growth rate is positive), an agent behaving according to EUT will destroy its wealth because it will choose a leverage at which wealth is systematically gambled away exponentially fast. This is an embarrassing feature of EUT: it destroys wealth, and therefore the utility it aims to maximize.
Two concepts of optimal
We now have two optimal solutions: the EE solution guarantees maximum actual wealth and maximum actual utility in the long run; the EUT solution guarantees maximum expected utility but this is unrelated to actual wealth and actual utility.
To let you explore what these equations mean, we’ve written App 1 below, where you can choose a utility function by setting the risk-aversion parameter \eta and the time scale over which you want to simulate. Hit “Plot utility function” to see what u(x) looks like. Next, hit “Simulate agents”: the wealths and utilities of two agents will be displayed as the agents leverage according to their optimality criteria (the risky asset follows Eq. 1 with \mu=0.05 and \sigma=0.2). One agent chooses its leverage according to EE (blue lines) and the other according to EUT (orange lines). Of course, in a simulation, which is always finite, there is a chance that the EUT agent ends up with more utility than the EE agent. In the long-time limit this is impossible, unless the EUT agent uses as its utility function something we call the ergodicity transformation (which is determined by the dynamic, not by intrinsic preferences). In that special case, EUT agents are guaranteed to behave exactly as EE agents. So we arrive at the curious situation where either EUT leads to less utility than EE, or else EUT is restricted to be equivalent to EE, in which case the EUT agents obtain the same utility as the EE agents.
App 1: Set the risk-aversion parameter \eta and the time over which you want to simulate.
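If you would rather reproduce App 1 offline, here is a minimal Python sketch under the stated asset parameters (\mu=0.05, \sigma=0.2). The choice \eta=0.3 and the time horizon are illustrative assumptions; with this \eta the EUT agent over-leverages (l \approx 4.17 > 2\mu/\sigma^2 = 2.5) and gambles its wealth away.

```python
import numpy as np

mu, sigma, eta = 0.05, 0.2, 0.3   # eta = 0.3 is an illustrative choice
T, dt = 200.0, 0.01

l_ee  = mu / sigma**2             # Eq. 3:  time-optimal leverage (= 1.25)
l_eut = mu / (eta * sigma**2)     # Eq. 10: EUT-optimal leverage  (~ 4.17)

rng = np.random.default_rng(1)
dW = rng.normal(0.0, np.sqrt(dt), size=int(T / dt))  # one shared asset realization

def wealth(l):                    # Euler-Maruyama discretization of Eq. 2
    return np.cumprod(1.0 + l * (mu * dt + sigma * dW))

def utility(x):                   # Eq. 4, isoelastic utility
    return (x**(1 - eta) - 1) / (1 - eta)

x_ee, x_eut = wealth(l_ee), wealth(l_eut)
print("terminal wealth,  EE vs EUT:", x_ee[-1], x_eut[-1])
print("terminal utility, EE vs EUT:", utility(x_ee[-1]), utility(x_eut[-1]))
```

Over a long horizon the EE agent ends up with more wealth and, because utility is increasing in wealth, more utility, in essentially every realization you draw.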
This raises the question of what EUT has to offer beyond EE. EUT is by its nature a curve-fitting exercise: there’s nothing to predict the utility function, and nothing to explain why people have the utility functions they have. It has also been found to do very poorly predictively, with a major study, reviewing seven decades of empirical work, concluding that its “power to predict out-of-sample is in the poor to non-existent range.”[2]
As we have shown here, EUT is unable to do what it set out to do: maximize utility. With this in mind, the empirical failure of EUT may be less surprising.
EE is different. The ergodicity transformation is determined by the dynamics; it is not a free parameter, and at this level there is no fitting involved. EE has been found to do well in predicting behavior.
Perhaps EE clarifies why EUT fails, predictively speaking: EUT does not take the dynamics into account but they are an important reason for people to behave the way they do. Inferring the utility function from observed behavior (called “revealed preferences” in economics) is not terribly meaningful in a predictive sense because the dynamic circumstances of an observed individual may change from the time its utility function was fitted to the time that utility function is used to make predictions.
The beauty of EE is that we know precisely why it does what it does, what it explains, and what predictions we can expect from it. The perspective offered by EE leaves us with a stark choice when it comes to EUT: either EUT fails to maximize utility over time, challenging its raison d’être; or EUT becomes precisely ergodicity economics, in which case it adds nothing.
Curious about Ergodicity Economics? We recommend the textbook An Introduction to Ergodicity Economics.
You may also want to join our mailing list for all things ergodicity economics.
References:
[1] O. Peters and A. Adamou, An Introduction to Ergodicity Economics (2025), Chapter 10.

[2] D. Friedman, R. M. Isaac, D. James, and S. Sunder, Risky Curves: On the Empirical Failure of Expected Utility (2014), p. 3.


Comments

Excellent article. One question.
In the EUT section, the article mentions that concavity was repackaged as “losses loom larger than gains” 250 years later.
To me, that phrase suggests Loss Aversion a la Kahneman. If so, my understanding is Loss Aversion is not merely repackaging concavity, but rather positing that utility functions are concave for gains but (somehow!) convex for losses.
Said differently: To me, the phrase “losses loom larger than gains” means more than what it literally says. In particular, my head substitutes the phrase “losses loom larger than gains” with the phrase “Loss Aversion”. My head does this because I’ve never seen this phrase used outside the context of Loss Aversion.
So *if* this mental substitution is valid, the phrase means more than utility functions are concave and I don’t see it as a mere re-packaging (as it does add something new). In particular, it means concave for gains and convex for losses. Or, “risk seeking for losses”, if preferred.
To be clear I’m not making any statement about the validity of the theory of Loss Aversion itself. I am just noting that I view a fair expression of it as more than just concavity repackaged.
Am I off the mark here?
I think you are right: the loss-aversion parameter within prospect theory is not simply repackaging concavity, because it is a separate parameter that allows losses to loom larger despite the fact that the curvature changes from concave for gains to convex for losses. I think the general point we were trying to make is that concavity implies loss aversion in the specific sense that the loss of utility from losing 1 unit is larger than the gain of utility from gaining 1 unit. So yes, I take your point.
You’re right.
He doesn’t seem to understand the difference between risk aversion and loss aversion.
https://x.com/edwin_teejay/status/1895098440352223563
I tried as an exercise to compute the result in Eq. 3 using Ito’s lemma, but I managed to get it only if I assume that we should maximise log wealth. If we try to maximise wealth directly we don’t get any useful result: the function is linear in x, and after taking the expectation and differentiating with respect to l, no l remains to solve for.
So although maximising wealth and maximising log wealth might seem conceptually equivalent, they’re not: we need a function with some curvature. This is similar to optimal control, where the Hamiltonian is linear in the controls and we get either bang-bang solutions or singular arcs (in this case l would be the “static” control variable). Any deeper insights about this?
On a side note, there’s a very simple way to get to the same result. We’d like to maximise the geometric mean return, which can be approximated as \mu - \frac{1}{2}\sigma^2 + h.o.t. (skew, kurtosis, etc.).
But \mu = SR \cdot \sigma, where SR is the Sharpe ratio.
So maximising SR \cdot \sigma - \frac{1}{2}\sigma^2 with respect to \sigma gives the maximum geomean at \sigma_{\text{optimal}} = SR.
To get the leverage, we simply take the target volatility over the asset volatility, from which we get optimal leverage = SR/\sigma = \mu/\sigma^2.
Thanks for sharing this really valuable work.
Could somebody explain if/how this is related to Kelly? (i.e. maximization of the logarithmic growth rate of capital)