  1. Super nice, Jakob, thank you! I have so many comments on this, it’s like a tour of the textbook, in a sense.

    At 3:30 you mention the resemblance of log-normals and power laws. We have that in Section 8.1.6, on p.134. Indeed, the log-normal has a regime where it looks like a power law, with exponent -1.5, and this regime sits around the expected value. In the far tail, of course, the two distributions look different. We ran a summer school project some years ago to check whether empirical wealth distributions could actually be described by log-normals, and found that wasn't the case: the power law really was the better fit.
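
    For anyone curious where the -1.5 comes from, here is a quick back-of-the-envelope version (my own sketch, not a quote from the book): the log-normal density is $p(x) = \frac{1}{x \sigma \sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right)$, so its local slope in log-log coordinates is $\frac{\mathrm{d} \ln p}{\mathrm{d} \ln x} = -1 - \frac{\ln x - \mu}{\sigma^2}$. At the expected value, where $\ln x = \mu + \sigma^2/2$, this slope is $-1 - 1/2 = -1.5$, so the density locally mimics a power law $x^{-1.5}$ there, while in the far tail it falls off faster than any power law.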

    Of course RGBM produces exactly the right power law, i.e. the one observed by Pareto in the late 19th century. The exponent in RGBM is tunable, which is rare for power-law generating mechanisms. But it's not freely tunable: it's tied to the reallocation rate, and when that is set to a realistic value, you obtain Pareto's (many times reproduced) fitted values.
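
    In case anyone wants to play with this numerically, here is a minimal simulation sketch in Python. I am assuming the usual RGBM dynamic $\mathrm{d}x_i = x_i (\mu\,\mathrm{d}t + \sigma\,\mathrm{d}W_i) - \tau\,(x_i - \langle x \rangle_N)\,\mathrm{d}t$ with a positive reallocation rate $\tau$; the parameter values are illustrative only, not the realistic fitted ones.

        import numpy as np

        # Illustrative parameters, not the fitted values from the book.
        N = 50_000             # population size
        mu, sigma = 0.02, 0.2  # GBM drift and volatility (per year)
        tau = 0.05             # reallocation rate (positive: towards the mean)
        dt, T = 0.1, 500.0     # time step and horizon (years)

        rng = np.random.default_rng(0)
        x = np.ones(N)         # identical initial wealth

        for _ in range(int(T / dt)):
            noise = rng.standard_normal(N)
            # multiplicative growth plus reallocation towards the population mean
            x += x * (mu * dt + sigma * np.sqrt(dt) * noise) - tau * (x - x.mean()) * dt

        # Rank-size fit of the top 1%: a straight line in log-log indicates a Pareto tail.
        y = np.sort(x / x.mean())[::-1]
        ranks = np.arange(1, N + 1)
        top = slice(0, N // 100)
        alpha = -np.polyfit(np.log(y[top]), np.log(ranks[top]), 1)[0]
        print(f"estimated tail exponent: {alpha:.2f}")
        print(f"1 + 2*tau/sigma^2 (stationary value, as I recall it): {1 + 2 * tau / sigma**2:.2f}")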

    You were asking about divergent inequality. GBM has divergent inequality; it sits right on the cusp between divergence and convergence, but it does diverge.
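
    One quick way to see this: take the mean logarithmic deviation $J = \ln\langle x \rangle - \langle \ln x \rangle$ as the inequality measure. For GBM, $\langle x(t) \rangle = x_0 e^{\mu t}$ while $\langle \ln x(t) \rangle = \ln x_0 + (\mu - \sigma^2/2)\,t$, so $J(t) = \sigma^2 t / 2$. It grows only linearly, which is why GBM sits on the cusp, but it never stops growing.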

    What you call the Matthew effect: RGBM with negative tau has exactly that effect, in that it takes from the poor and gives to the rich. Especially in Colm's talk you can see the persistent stratification that emerges as a result (also in Section 9.2.4, p.156, and Figure 9.4 on p.154). Statistically identical individuals naturally separate into two groups, rich and poor, with a divergent gap between them.
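
    The sketch above shows this if you flip the sign of tau; again, these are hypothetical parameter values of my own choosing, not the ones behind Figure 9.4.

        import numpy as np

        N, mu, sigma, tau = 10_000, 0.02, 0.2, -0.03  # negative tau: poor -> rich
        dt, T = 0.1, 200.0
        rng = np.random.default_rng(1)
        x = np.ones(N)

        for _ in range(int(T / dt)):
            noise = rng.standard_normal(N)
            x += x * (mu * dt + sigma * np.sqrt(dt) * noise) - tau * (x - x.mean()) * dt

        # The typical (median) individual falls ever further behind the average,
        # while a small group pulls away: persistent stratification.
        print(f"median wealth / mean wealth: {np.median(x) / x.mean():.3f}")
        print(f"wealth share of the top 1%: {np.sort(x)[-N // 100:].sum() / x.sum():.1%}")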

    So, yes, amazing overlap here. This is an excellent example of bringing two different languages together.
