The notes below are our attempt to re-develop economic theory from scratch, starting with the axiom that individuals optimize what happens to them over time, not what happens to them on average across a collection of parallel worlds. The latter, surprisingly, is the starting point of the currently dominant form of economic theory.

The new formalism is being developed at the London Mathematical Laboratory. The lectures were first presented at a winter school on complex systems in Chandigarh, India, as part of a collaboration between the Indian Institute of Science Education and Research Mohali and the Santa Fe Institute. The lectures were given to a group of about 50 students and have since been downloaded thousands of times. We intend to publish them as a book when they have fully matured.

Date | Size | Download | Comment

2018-06-30 | 6.6MB (136 pages) | Download | Ch.1: discussion of random variables, stochastic processes, ergodicity. Ch.2: general mapping between dynamics and utility functions (beyond Kelly; includes the historical example of square-root Cramer utility). Ch.3: log-normals vs. power laws, sums of log-normals, random-energy model. Ch.4: new chapter on Reallocating Geometric Brownian Motion (RGBM); analytic solution of RGBM; analysis of US wealth data. Ch.5: applications of stochastic market efficiency: solution of the equity premium puzzle, central-bank interest-rate setting, fraud detection, a theory of noise. New data analysis, including tests of predictions for S&P500 total return, DAX, bitcoin, and Bernie Madoff’s Ponzi scheme.

2017-08-24 | 4.5MB (96 pages) | Download | Corrected bullet point 4, p.26. |

2017-07-12 | 4.5MB (96 pages) | Download | Harmonised notation. |

2017-04-18 | 4.5MB (96 pages) | Download | Fixed some references and typos. |

2017-03-04 | 4.5MB (96 pages) | Download |

Hi, is knowledge of elementary calculus-based probability (as in Bertsekas and Tsitsiklis, or the elementary Ross book) sufficient to grasp the notes?

Francis, thank you for your question, that’s very helpful. Next time we go through the notes we will make a list of any prerequisites.

We’re trying to keep the lecture notes fairly self-contained, and I will cautiously say yes, the books you mention should be sufficient. My sense is that too much prior knowledge is more likely to be problematic than too little. But please let us know how you’re getting on, especially if you get stuck somewhere and feel that something is missing.

Hello, I am working through these lecture notes and love them. I do not come from a math or physics background, and I find this to be a great resource for understanding your published papers. I have a pedantic comment to make: in eq. (13) you introduce tau as a ‘dummy variable indicating a specific round of a gamble’, but it is not commented on until eq. (30). Trivial, but it would be easier to understand if it were noted in the text when first introduced.

Hugh, many thanks! We’ve made a note and will correct this in the next update.

I read the first lecture… amazing.

I mean, the first chapter… ;)

This research has a distinctive cybernetic flavour to it, for lots of different reasons. What are your thoughts?

Small error on pg. 41, Fig. 10: “The distribution denoted by the blue line has a higher mean and a higher variance than the one in red” should say “lower”, or the red and blue should be switched.

I just finished reading your lecture notes and I really enjoyed every second of it. Your insight is very refreshing and makes far more sense than the classical treatment of decision theory via utility functions. It is also very well written; I believe that every economist who reads this with an open mind will at least rethink a thing or two about the state of our discipline.

That being said, there is a typo on page 67 (it says “sigend” instead of “signed”). Also, on page 120 the excess drift is defined as riskless minus risky instead of risky minus riskless.

Thanks for making your notes available.

I have an absolute beginner question (for some reason my brain never got to grips with probabilities…) about the coin-tossing game at the start.

I got the time-average thing, i.e. a single player’s wealth is eventually wiped out (I actually found that “intuitive”).

Now, with regard to the finite-ensemble average in Section 1.1.1 and the corresponding Fig. 2: when you talk about how the average wealth across the group evolves in “time”, i.e. over repeated rounds, you must be assuming that the initial conditions are reset each time the game is repeated.

Each trajectory starts at the same level at time zero. After that the same dynamic is applied as in Fig.3, and the trajectories are never reset again.

If you find that in any way surprising: great! That the ensemble average (expectation value) can gain while each trajectory loses is the key insight that’s missing from formal economics. It’s a simple mathematical fact — you see it in a trivial coin toss — but many find it counter-intuitive. You may find this video helpful https://youtu.be/LGqOH3sYmQA
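For the coin-toss example this can be checked in a few lines. A minimal sketch, assuming the multipliers used in the notes (wealth multiplied by 1.5 on heads and 0.6 on tails):

```python
# Multiplicative coin-toss game; the 1.5/0.6 multipliers are assumed
# here to match the example in the lecture notes.
up, down = 1.5, 0.6

# Ensemble perspective: expected growth factor per round.
expected_factor = 0.5 * up + 0.5 * down   # 1.05 > 1: the ensemble average grows

# Time perspective: per-round growth factor along one long trajectory,
# which is the geometric mean of the two outcomes.
time_factor = (up * down) ** 0.5          # sqrt(0.9) < 1: each player decays

print(expected_factor, time_factor)
```

The gap between those two numbers, one above 1 and one below, is precisely the ergodicity problem the notes are built around.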

Thanks for your time.

It seems at least my understanding of the rules of the game was correct.

I’ll rewrite my Python script; I must have done something badly wrong! 🙂

So, if I set up N (say 10,000) trajectories, i.e. parallel games, and after each toss I update the “wealth” of each player accordingly, then the finite-ensemble average at each time is simply the average of the players’ wealth at that time, right?

And this, as a function of t, should slope upward.
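In code, the setup described above might look like the following sketch (again assuming the notes’ multipliers of 1.5 on heads and 0.6 on tails):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed setup, matching the coin-toss game in the notes: N parallel
# players, each starting with wealth 1; heads multiplies wealth by 1.5,
# tails by 0.6.
N, T = 10_000, 30
wealth = np.ones(N)
ensemble_avg = [wealth.mean()]

for _ in range(T):
    heads = rng.random(N) < 0.5                  # one fair coin toss per player
    wealth = wealth * np.where(heads, 1.5, 0.6)  # update every player's wealth
    ensemble_avg.append(wealth.mean())           # finite-ensemble average now

# With N this large the ensemble average tracks the expectation 1.05**t
# and slopes upward, even though the typical player is losing.
print(ensemble_avg[-1])
```

(For small N or very long games the finite-ensemble average eventually turns over and follows the time average, which is the effect the notes discuss for finite ensembles.)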

Gotcha!

All good now (only God knows what I was coding today while multitasking at work…).

Results as expected.

It is very insightful, though, to also extract the max and min wealth across all realisations, as that clarifies everything (something you hinted at in the video): since the wealth of even the most unlucky player cannot fall below zero, the more players you have, the more “probable” it becomes to have “lucky bastards” who get lots of heads!
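That observation can be reproduced with a short sketch (same assumed game as in the notes: wealth multiplied by 1.5 on heads, 0.6 on tails):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed game: N players, fair coin, wealth x1.5 on heads, x0.6 on tails.
N, T = 10_000, 50
wealth = np.ones(N)
for _ in range(T):
    wealth *= np.where(rng.random(N) < 0.5, 1.5, 0.6)

# The minimum is bounded below by zero, the typical (median) player has
# lost almost everything, and a few lucky outliers dominate the mean.
print(wealth.min(), np.median(wealth), wealth.mean(), wealth.max())
```

The ordering min < median < mean < max, with the median far below 1, shows how the ensemble average is carried by the lucky tail.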

Fascinating!

Those are some good lecture notes, though I have difficulty getting through all 136 pages; so far only the first part.

https://sciencehouse.wordpress.com also discusses Jensen’s inequality and some economics, but in a less detailed fashion.

I did ‘intuitively’ select the ‘wrong answer’ (take the gamble) for the coin-flipping (first) example. But maybe you can exercise choice, be wise, and choose your trajectory. Be the ‘one in a million’.

While you disavow discussions of any moral or other perspectives on things like wealth or income inequality, and counterpose ‘time averages’ versus ‘ensemble averages’, I could take a different perspective, similar to the Wheeler–Feynman ‘one-electron universe’.

From that view, applied to economics, there is only one individual, but over its infinite lifespan (it may have to undergo reincarnation) it goes through every possible economic trajectory. So in the ergodic limit (Poincaré, Birkhoff, von Neumann, Wiener…) there is no inequality on average. What goes up must come down, so occasionally you feast, but usually you are in a state of famine. ‘But for the grace of God, there go I.’ You will be I later.

I learned the concept of ergodicity from old papers on FPU simulations, some by van Kampen, and more recent ones by Ruelle, Shalizi, Mackey, and others (none of which I fully understand).

There are also many ‘non-technical’ discussions of the concept in ‘heterodox’ economics papers, which you can find by doing a non/ergodic random walk through the blogosphere. https://arxiv.org/abs/cond-mat/0506338
