What’s a growth rate, really?

Growth rates are at the heart of ergodicity economics, and economic news is full of them, too — “GDP grew by 3% last year,” something like that. Sometimes we also hear “national debt grew by $1,271,000,000,000 over the last year” (which is dimensionally different from 3% per year). Since growth rates come in such different forms, what are they, really?

1 Growth rates are key

As we continue to re-develop economic theory by asking the ergodicity question, things are becoming simpler and clearer, and growth rates have emerged as key mathematical objects. Computing the right growth rate in a setting without uncertainty, it turns out, produces discounting. People who optimize the right growth rate behave as if they were discounting payments exponentially, or hyperbolically, or whichever way the dynamic dictates.

In the setting of decisions under uncertainty, optimizing appropriate growth rates can be mapped to Laplace’s expected utility theory (which is worked out in Peters & Gell-Mann (2016) and inspired the Copenhagen experiment). The growth rate contains a function that we can identify with the utility function in Laplace’s theory (not in Bernoulli’s expected utility theory, which is inconsistent).

In other words: ergodicity economics unifies different branches of decision theory (including intertemporal discounting and expected utility theory) into one concept: growth rate maximization.

Hence the question:

How do we determine the appropriate growth rate for a given dynamic?

We all know at least two types of growth rates, probably more. Below, we’ll develop what a growth rate is, really, by going through two well-known examples, spotting similarities, and then generalizing.

Our first example of a growth rate is the simple rate of change,

(1) g_a=\frac{\Delta x}{\Delta t},

and our second example will be the exponential growth rate

(2) g_e=\frac{\Delta \ln x}{\Delta t}.

2 The growth rate for linear (additive) growth

We use Eq.(1) when something grows linearly, according to

(3) x(t)=x(0)+\gamma t.

The rate of change, Eq.(1), is then a good growth rate whose value is \gamma. But how do we know that? What is it about the dynamic, Eq.(3), that makes Eq.(1) the “right” growth rate? Or put differently: why not state the exponential growth rate, Eq.(2), when someone asks how fast x is growing?

Answer: for the additive dynamic, Eq.(3), the growth rate g_a in Eq.(1) has a special and very useful property: it is independent of time — no matter at what t I measure Eq.(1), I always get g_a=\gamma. Because of this time-independence, evaluating Eq.(1) is a useful way to say how fast the process is.

Fig. 1: For an additive dynamic, the growth rate is constant in time, in our case its value is \gamma. Whether I measure it a bit earlier or later — I get the same result.
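
As a quick sanity check, here’s a minimal Python sketch (the value \gamma=0.05 and the measurement times are arbitrary choices) that evaluates Eq.(1) at different times along a linear trajectory:

    # The rate of change Delta x / Delta t is the same no matter when we
    # measure it, provided x(t) grows linearly.
    gamma = 0.05   # arbitrary growth rate
    x0 = 1.0       # arbitrary starting level

    def x(t):
        return x0 + gamma * t                 # additive dynamic, Eq.(3)

    def g_a(t, dt):
        return (x(t + dt) - x(t)) / dt        # Eq.(1)

    print(g_a(0.0, 1.0), g_a(10.0, 1.0), g_a(100.0, 0.1))   # all equal gamma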

Actually, let’s write the dynamic, Eq.(3), as a differential equation.

(4) dx = \gamma dt

or equivalently, because \gamma depends neither on t nor on x,

(5) dx = d (\gamma t)

This second way of writing tells us that the growth rate \gamma is really a sort of clock speed. There’s no difference between rescaling t and rescaling \gamma (by the same factor). Intriguing.

We make a mental note: the growth rate is a clock speed.

Let’s dig a little deeper here. What kind of clock speed are we talking about? What’s a clock speed anyway?

Or: what’s a clock? A clock is a process that we believe does something repeatedly at regular intervals. It lets us measure time by counting the repetitions. By convention, after 9,192,631,770 cycles of the radiation produced by the transition between two levels of the caesium 133 atom we say “one second has elapsed.” That’s just something we’ve agreed on. But any other thing that does something regularly would work as a clock – the Earth spinning around its axis etc.

When we say “the growth rate of the process is \gamma,” we mean that it advances \gamma units on the process-scale (here x) in one standard time unit (in finance we often choose one year as the unit, Earth going round the Sun). So it’s a conversion factor between the time scales of a standard clock and the process clock.

Of course, a clock is no good if it systematically speeds up or slows down. For processes other than additive growth we have to be quite careful before we can use them as clocks, i.e. before we can state their growth rates.

3 The growth rate for exponential (multiplicative) growth

Now what about the exponential growth rate, Eq.(2)? Let’s use what we’ve just learned and derive the process for which Eq.(2) is a good growth rate. We expect to find exponential growth.

We require that Eq.(2) yield a constant, let’s call that \gamma again, irrespective of when we measure it.

(6) g_e=\frac{\Delta \ln x}{\Delta t}=\gamma,

or

(7) \Delta \ln x = \gamma\Delta t,

or indeed, in differential form, and revealing that again the growth rate is a clock speed: \gamma plays the same role as t,

(8) d\ln x = d(\gamma t).

This differential equation can be directly integrated and has the solution

(9) \ln x(t)- \ln x(0)=\gamma t.

We solve for the dynamic x(t) by writing the log difference as a fraction

(10) \ln \left[\frac{x(t)}{x(0)}\right] = \gamma t ,

and exponentiating

(11) x(t)=x(0)\exp(\gamma t)

As expected, we find that the exponential growth rate, Eq.(2), is the appropriate growth rate (meaning time-independent) for exponential growth.

In terms of clocks, what just happened is this: we insisted that Eq.(2) be a good definition of a clock speed. That requires it to be constant, meaning that the process has to advance on the logarithmic scale, specified in Eq.(2), by the same amount in every time interval (measured on the standard clock, of course — Earth or caesium).

Fig. 2: For multiplicative (exponential) dynamics, the growth rate g_e:=\frac{\Delta \ln x}{\Delta t} is independent of time. It’s not the slope of x itself that’s constant but that of a non-linear function of x (the logarithm).
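
Again a sanity check, as a minimal Python sketch with arbitrary numbers: along an exponential trajectory, Eq.(2) is constant in time while the naive rate of change, Eq.(1), is not:

    import math

    gamma = 0.05   # arbitrary growth rate
    x0 = 1.0       # arbitrary starting level

    def x(t):
        return x0 * math.exp(gamma * t)                       # Eq.(11)

    def g_e(t, dt):
        return (math.log(x(t + dt)) - math.log(x(t))) / dt    # Eq.(2)

    def g_a(t, dt):
        return (x(t + dt) - x(t)) / dt                        # Eq.(1): the wrong clock here

    print(g_e(0.0, 1.0), g_e(50.0, 1.0))   # both equal gamma
    print(g_a(0.0, 1.0), g_a(50.0, 1.0))   # these differ: this "clock" speeds up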

4 The growth rate for general growth

Now let’s raise the stakes and assume less, namely only that x(t) grows according to a dynamic that can be written down as a separable differential equation. We could be even more general, but this is bad enough.

How do we define a growth rate now?

Well, we insist that the thing we’re measuring will be a time-independent rescaling of time, as before. We enforce this by writing down the dynamic in differential form, containing the growth rate as a time rescaling factor. Then we’ll work backwards and solve for g:

(12) dx=f(x) d(g t)

(for linear growth f(x) would just be f(x)=1, and for exponential growth it would be f(x)=x, but we’re leaving it general). We separate variables in Eq.(12) and integrate the differential equation

(13) \int_{x(t)}^{x(t+\Delta t)}\frac{1}{f(x)}dx= g \Delta t,

and we’ve got what we want, namely the functional form of g:

(14) g= \frac{ \int_{x(t)}^{x(t+\Delta t)}\frac{1}{f(x)}dx}{\Delta t}.

To make the equation a bit simpler, let’s give the definite integral a name, the letter u, so that

(15) \Delta u=\int_{x(t)}^{x(t+\Delta t)}\frac{1}{f(x)}dx.

Then we have

(16) g= \frac{ \Delta u[x(t)]}{\Delta t}

or

(17) \Delta u[x(t)]=g\Delta t.

Fig.3: More generally, there may be some transformation u[x(t)] whose increments change at a constant rate, i.e. at a rate that doesn’t depend on when I measure it. That rate of change is then the appropriate growth rate (allowing us to use the process as a clock).
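
Here is a minimal Python sketch of Eq.(14) evaluated numerically; the choice f(x)=3x^{2/3}, which generates the cubic dynamic x(t)=(\gamma t)^3, is arbitrary, and the integral is done with a simple midpoint rule:

    gamma = 0.5    # arbitrary growth rate

    def f(x):
        return 3.0 * x ** (2.0 / 3.0)

    def x(t):
        return (gamma * t) ** 3               # trajectory generated by dx = f(x) d(gamma t)

    def integral_of_one_over_f(a, b, n=100_000):
        h = (b - a) / n                       # midpoint rule for the integral in Eq.(15)
        return sum(1.0 / f(a + (i + 0.5) * h) for i in range(n)) * h

    def g(t, dt):
        return integral_of_one_over_f(x(t), x(t + dt)) / dt   # Eq.(14)

    print(g(1.0, 0.5), g(10.0, 0.5))          # both close to gamma, independent of t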

5 The growth rate from the inverse of the dynamic

There’s a trick to find the growth rate for a given dynamic, without having to solve integrals.

Imagine the growth rate has the numerical value \gamma=1. To keep things clear we’ll introduce a subscript 1 for this rate-1 process:

(18) dx_1=f(x_1)dt.

The time that passes, in standard time units, between two levels of x_1 is then just \Delta u, i.e. we have \Delta u[x_1(t)]=\Delta t. That’s achieved if u(x_1) is the inverse of the dynamic, u(x_1)=x_1^{(-1)}[x_1(t)]=t.

We measure the growth rate by using the actual process as a clock (not the rate-1 process). We take the actual process, generated with whatever the growth rate actually is, dx=f(x)\, d(\gamma t); measure it at two different points in time, x(t) and x(t+\Delta t) (where time is defined by our standard clock, like that ^{133}\text{Cs} atom); invert it according to x_1^{(-1)}; and compare how much time has elapsed on the time scale of the process (which contains \gamma) to how much time has elapsed on our standard clock.

The result is the growth rate.

(19) g:=\frac{x_1^{(-1)}[x(t+\Delta t)]-x_1^{(-1)}[x(t)]}{\Delta t}=\gamma,

and we conclude that u(x)=x_1^{(-1)}[x(t)].

The required non-linear transformation is the inverse of the rate-1 process. Nice!

That makes total sense, of course: x(t) grows in some wild way, and we just want to know its clock speed \gamma. To find that, we have to get rid of the wild growth, i.e. we have to invert the growth — namely we have to undo how time would be transformed in the rate-1 case x_1(t).
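
In code, the trick looks like this (a minimal Python sketch, reusing the arbitrary cubic dynamic x(t)=(\gamma t)^3 from the sketch above): invert the rate-1 process, apply the inverse to two observations, and divide by the elapsed standard time:

    gamma = 0.5    # the "true" clock speed we want to recover

    def x(t):
        return (gamma * t) ** 3               # the actual, observed process

    def u(x_obs):
        return x_obs ** (1.0 / 3.0)           # inverse of the rate-1 process x_1(t) = t**3

    def g(t, dt):
        return (u(x(t + dt)) - u(x(t))) / dt  # Eq.(19)

    print(g(1.0, 1.0), g(20.0, 0.25))         # both equal gamma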

Let’s quickly check this for the additive and multiplicative dynamics, and then try out a different growth to see that everything works out.

5.1 Growth rate as an inverse for additive dynamics

For additive dynamics, Eq.(3), we have the linear function x(t)=\gamma t. The inverse of the rate-1 process is x_1^{(-1)}[x(t)]=x: we’re just inverting the identity transformation. So we expect u(x) to be the identity function, which it is: comparing Eq.(16) to Eq.(1), we have u(x)=x. We’ve dropped x(0) here because it makes no difference to \Delta u.

5.2 Growth rate as an inverse for multiplicative dynamics

For multiplicative dynamics — as ever — it’s more interesting. The inverse of the rate-1 process for Eq.(11) is u(x)=\ln \frac{x}{x(0)}. Again, it fits: Eq.(17) and Eq.(2) match if u(x)=\ln x (in the growth rate computation x(0) cancels out).

5.3 Growth rate as an inverse for square dynamics

Now a case that’s neither exponential (multiplicative) nor linear (additive).

(20) x(t)=(\gamma t)^2.

It’s a trivial example, but it shows the mechanics. Without any differential equations, we’ll just find the growth rate from the inverse function of the rate-1 process x_1(t). That’s

(21) u(x)=x_1^{(-1)}[x(t)]=x^{1/2}.

According to Eq.(16), g:=\frac{\Delta u(x)}{\Delta t},

which is

(22) \frac{x(t+\Delta t)^{1/2}-x(t)^{1/2}}{\Delta t}

=\frac{[\gamma^2(t+\Delta t)^2]^{1/2}-[\gamma^2 t^2]^{1/2}}{\Delta t}

=\frac{\gamma(t+\Delta t)-\gamma t}{\Delta t}

=\gamma.
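
The same check in a few lines of Python (arbitrary numbers):

    gamma = 0.5    # arbitrary growth rate

    def x(t):
        return (gamma * t) ** 2               # Eq.(20)

    def g(t, dt):
        return (x(t + dt) ** 0.5 - x(t) ** 0.5) / dt   # Eq.(22), i.e. u(x) = x**(1/2)

    print(g(1.0, 1.0), g(7.0, 0.1))           # both equal gamma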

(Nerdy aside: Section 5 shows that the restriction to dynamics of the form dx=f(x)dt in Section 4 amounts to assuming that x(t) has a differentiable inverse.)

6 Why the letter u, and what the hell is going on?

Beautiful. It all works out. But doesn’t this remind you of something? Of course, a null model of human behavior must be that people maximize the growth rate of their wealth. That means they do different things, depending on the dynamics. Let’s fix \Delta t for a moment. Under additive dynamics they’ll then optimize \Delta x, under multiplicative dynamics they’ll optimize \Delta \ln x, and under general dynamics \Delta u(x).

So people optimize the change in a generally non-linear function of wealth… that’s utility theory, and that’s why we called the non-linear transformation u. Turns out, this has less to do with your idiosyncratic psychology and more to do with the dynamic to which your wealth is subjected.

I’ll leave the extension of this treatment to a random environment as an exercise. Hint: in a deterministic environment, growth rates are constant in time. In a random environment they are ergodic (that’s why at the London Mathematical Laboratory we don’t say “utility function” but “ergodicity mapping”).
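
To get a feel for that exercise, here is a minimal Python sketch (the multiplicative shocks 1.5 and 0.6 with equal probability are purely illustrative, not taken from anything above): the time-average exponential growth rate of such a process settles down to a constant.

    import math, random

    random.seed(1)
    factors = [1.5, 0.6]                      # equal-probability multiplicative shocks (arbitrary)
    T = 100_000
    log_growth = sum(math.log(random.choice(factors)) for _ in range(T))

    print(log_growth / T)                     # time-average exponential growth rate
    print(0.5 * (math.log(1.5) + math.log(0.6)))   # its long-time limit, about -0.053

The time average converges to the expectation value of \Delta \ln x/\Delta t, which is what it means for the exponential growth rate to be the ergodic one under these dynamics.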

You can read more about this in this paper with Alex Adamou: The time interpretation of expected utility theory. [2019-05-02 addendum: a nice example of a growth process that’s neither linear nor exponential is the body mass of organisms]

I thank Yonatan Berman and Alex Adamou for headache-inducing discussions about inverse functions and the like.

11 responses to “What’s a growth rate, really?”

  1. Eric Briys

    Thanks Ole for your clarification work. Will read it carefully.

  2. Gabriel (@gabrielvc)

    Hi there.

    Do you have a typo in eq 19? wouldn’t it be

    g:=\frac{x_1^{(-1)}[x(t+\Delta t)]-x_1^{(-1)}[x(t)]}{\Delta t}

    ?

    1. Ole Peters

      You mean the subscript “1” was missing for the rate-1 process? Nice catch! Corrected that.

      1. Gabriel (@gabrielvc)

        😉 Thanks!

        Regarding 5.1, I am a little bit lost. I’d say that:

        x^{-1}(x) = x / \gamma
        x_1^{-1}(x) = x / \gamma

        Therefore

        x^{-1} (x_1 (t)) = x_1(t) / \gamma = t / \gamma

        Or am I missing something here?

  3. Gabriel (@gabrielvc)

    Sorry, I meant

    x^{-1}(x) = x / \gamma = t\gamma / \gamma = t
    x_1^{-1}(x) = x / 1 = t

  4. Andrew White

    Are you familiar with Tim Garrett’s work? I feel like there’s a connection with yours, I’m just not quite sure what it is yet…

    https://en.m.wikipedia.org/wiki/Garrett_relation#cite_ref-:7_18-0

  5. Array

    My impression is that you are discussing a field you do not really know well. Ergodicity is something most quantitative economists learn, usually to prove strong laws of large numbers in strictly stationary processes (albeit there are milder ergodic laws anyway, e.g. using spectral theory), and just occasionally it is used in deterministic dynamic models. But in general, this is more for PhD students than serious work, because most problems in economics are stochastic but not ergodic; actually they are never stationary. So, this is not how we build serious theory, and these arguments have little or, I dare to say, zero interest for theoretical economists or empirical ones, except perhaps for students. Ergodicity unifies nothing. And yes, growth rates and their derivatives are often used because they are weakly stationary (but not strictly stationary, so we do not assume ergodicity but alternative conditions – some of them ensure that you can use central limit theorems, etc.).
    In empirical contexts, econometrics has a highly specialized branch devoted to the analysis of stochastic processes for the study of time series data (a field shared with statistics and applied probability), and you need to devote some years to study it, as it has many branches.
    In the economic modeling subfields, dynamic or dynamic stochastic models can be defined in functional spaces, and some models use expected utilities to represent certain types of preferences over probability distributions defined in such spaces, but they do not need strict stationarity nor ergodicity to be well defined, to have a locally unique equilibrium, or to study their properties (including the asymptotic properties).

  6. Hansen Chen

    I am reading your paper Evaluating gambles using dynamics https://aip.scitation.org/doi/pdf/10.1063/1.4940236. You state under the section Multiplicative repetition that “Increments in the logarithm of x are now stationary and independent. The finite-time average of the rate of change in the logarithm of wealth, i.e., the exponential growth rate, converges to the expectation value of the rate of change in the logarithm with probability one.” Why? It is completely unclear whether this statement is an assumption or hypothesis rather than a logical conclusion derived from a more fundamental axiom. I suppose you pick this one as an axiom or assumption. One can just as well make the ergodicity assumption of any other functional form of the growth rate, including the Additive repetition. No one form is more special than the other unless you can claim an axiom that is proved by empirical studies.

    I would appreciate it if you would address this question.

    1. Ole Peters

      If I understand your comment correctly, then you’re exactly right. As this post illustrates, by assuming ergodicity for some functional form of a growth rate, you construct a given dynamic (or alternatively, by assuming a certain dynamic, you implicitly assume ergodicity of a certain functional form of the growth rate).

      Which functional form you would like to use to model your physical system is an empirical matter. Multiplicative dynamics are important where quantities grow in proportion to themselves, e.g. the value of a stock portfolio, or the biomass of lilies in the pond (before the pond gets too small). They’re not important where the quantity has no bearing on what happens next, for example the distance from the origin of a drifting Brownian particle is better modeled using additive dynamics.

  7. Mak

    I believe the link to the example involving the body mass of organisms is broken. Thanks a lot for writing this post!

  8. Mak

    I believe there is a minor typo on Sec. 5.1. In the 2nd sentence, the superscript “1” is associated with the wrong $x$—i.e., the inverse is from the rate-1 process not the actual process.

    I’m not entirely sure, but it seems that the trick provided on section 5 consists of rewriting the actual process as a rate-1 process by choosing the appropriate time scale. In particular, the examples on Sec.5 involve using a time $\tau$ such that $\tau = \gamma t$.
