Ergodicity Economics



  1. Simon Blöthner

    Depending on which economist you ask, the reason for the shape of the utility function doesn’t really matter. It could be, for all that matters, that a log transformation is similar to value theory, where we say that, generally, the value we assign to a good decreases with the amount that we have of it.

    1. Yes, I would imagine that that’s how it’s usually introduced. With EE, we take a narrower view: assuming the nonlinearity of utility is purely a consequence of the non-additivity of the dynamics, what can we predict? What constitutes optimal behavior, and so on. We’ve explored this theoretically quite a bit, and more recently people have started designing experiments to check in what regime the theory applies.
      There’s the Copenhagen experiment (that’s the behavioral part in Marleen’s talk):
      https://ergodicityeconomics.com/2024/02/12/3-1-maria-van-der-weij-testing-ergodic-decision-theory-in-the-human-brain/
      and also an experiment Arne Vanhoyweghen, Benjamin Skjold, and I have been putting together. A simpler one, cheaper and quicker to run (and with weaker signals). In that case we change the dynamic environment, and (at least in the pilot data) see a predicted change of an individual’s utility function towards more risk-averse or more risk-seeking behavior.
      Emilie shows the pilot data in her talk, https://ergodicityeconomics.com/2024/02/12/3-3-emilie-soysal-beyond-growth-rates/
      Early days for lab experiments, but it definitely looks like dynamics alone are an important determinant of utility functions. So the story is more structural, less individualist, than typically suggested or assumed.
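A minimal sketch of the ensemble-vs-time distinction underlying those experiments, using a hypothetical multiplicative coin-toss gamble (the numbers are illustrative, not taken from either study): the ensemble-average wealth grows every round, yet the time-average growth rate, i.e. the expected log factor, is negative, so the typical individual trajectory decays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gamble: wealth is multiplied by 1.5 on heads, 0.6 on tails.
up, down = 1.5, 0.6

# Ensemble (expected-value) growth factor per round: 0.5*(1.5 + 0.6) = 1.05 > 1
ensemble_growth = 0.5 * (up + down)

# Time-average growth rate is the expected log factor:
# 0.5*(ln 1.5 + ln 0.6) is about -0.053 < 0, so a single trajectory decays.
time_avg_growth = 0.5 * (np.log(up) + np.log(down))

# Simulate many individuals playing many rounds, each starting with wealth 1
n_people, n_rounds = 2_000, 1_000
factors = rng.choice([up, down], size=(n_people, n_rounds))
wealth = np.prod(factors, axis=1)  # final wealth of each individual

print(ensemble_growth)          # 1.05
print(time_avg_growth)          # about -0.053
print(np.median(wealth) < 1.0)  # True: the typical individual loses
```

The point of the sketch: which average is relevant depends on the dynamic, which is why changing the dynamic environment is predicted to shift observed risk preferences.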

      1. Simon Blöthner

        Well, that is how it is introduced in certain strands of the literature, not as much in Neoclassical Economics. It could be that an individual tends to react to the presence of a dynamic of this kind by adopting such a utility function. It could almost be as law-like as the “law” of decreasing marginal utility.

    2. Yes, I think that is a frustration I sometimes have: the reason for something existing matters. It is part of the explanation, and seeking explanation, I would argue, is at the heart of science. Someone saying they don’t care “why” always sounds very unscientific to me.

      1. Literally an ancient problem, and a good one. I think of it as Plato vs. Aristotle. Plato had this big-data approach to science. He’s supposed to have said that the job of astronomy is to find “the uniform and ordered motions by the assumption of which the apparent movements of the planets can be accounted for.” Aristotle was different. He wanted a physical mechanism that could drive the motions, something that would give him not just the “how” but also the “why” things up there move the way they do.

        Both got it wrong, which is probably a good lesson to keep in mind.

  2. Yiannis Koutelidakis

    Thank you for the talk, Professor Peters. Wrt individual vs. ensemble averages: is there any parallel or connection between this and Simpson’s Paradox in statistics? In Simpson’s Paradox the difference in average between the sub-groups and the total group is due to confounding variables and to how we split the group into sub-groups; in non-ergodic systems the difference is driven by the inherent dynamics. But is there any way to view non-ergodicity through the lens of Simpson’s Paradox, and to identify what the confounder would be, or what splitting decision leads to this outcome, in this particular case?

    1. Thank you for your question. Simpson’s paradox has come up a few times in discussions, but we haven’t thought about it deeply. Maybe the following makes sense.

      Usually, in Simpson’s paradox, as you say, you have one big group, with a measurement (x,y) of two variables for each member of the big group. If you correlate the measurements, you find some value cor(x,y)=a. Then you split the group into N sub-groups 1, 2, 3…N, and if you correlate the data again but one sub-group at a time, you find a systematically different value in the sub-groups than in the overall group. For instance, you could have the opposite correlation in the subgroups, cor(x1,y1)=cor(x2,y2)=…=-a.

      I think you can make the link to ergodicity breaking if each sub-group of measurements comes from one system (individual) over time. So you now have N individuals, and you measure each individual T times. The (time-average) correlation for each individual may be -a, whereas the ensemble-average correlation (pooling all measurements over all times and individuals) may be +a.

      In this case, each individual is idiosyncratically different, and in a sense only explores a sub-space of all possible states. It would be the sort of ergodicity breaking discussed in Aaron Fisher’s talk. Maybe we should pose the question to him.
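The construction described above can be sketched in a few lines. Everything here (the per-individual offsets, the slopes, the noise levels) is an illustrative assumption, chosen only so that each individual's correlation over time is negative while the pooled ensemble correlation is positive.

```python
import numpy as np

rng = np.random.default_rng(1)

# N individuals, each measured T times; each explores only a small x-range
# around an idiosyncratic offset (the "sub-space of all possible states").
N, T = 5, 200

x_all, y_all, within_corrs = [], [], []
for i in range(N):
    center = 10.0 * i                              # idiosyncratic offset
    x = center + rng.normal(size=T)                # individual's x over time
    y = 3.0 * center - x + 0.3 * rng.normal(size=T)  # within: y falls with x
    within_corrs.append(np.corrcoef(x, y)[0, 1])
    x_all.append(x)
    y_all.append(y)

# Pooling all measurements over all individuals and times (ensemble average)
pooled_corr = np.corrcoef(np.concatenate(x_all), np.concatenate(y_all))[0, 1]

print(all(c < 0 for c in within_corrs))  # True: each individual, over time
print(pooled_corr > 0)                   # True: pooled across the ensemble
```

Because the individual baselines rise with the offset faster than the within-individual slope falls, the pooled cloud tilts the opposite way from every individual trajectory, which is exactly the sign flip between the time-average and ensemble-average correlations described above.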
