
10 thoughts on “1-2 James Price. Extending EE: ruin and finite-time problems.”

  1. What does your research say about the two competing strategies for retiree portfolios with retirees drawing a “salary” from their portfolio?

    Strategies:

    1. Start with a lower risk portfolio allocation to reduce “sequence of returns” risk in early retirement and then increase the estimated risk and return of the allocation over time.
    2. Start with an “age-appropriate” portfolio allocation and risk and decrease the risk over time.

    1. Hi Bob,

      Thank you for your question. As part of the analysis for this project we considered the scenario of a random process with adaptive multiplicative noise and an absorbing barrier. Below a change-point wealth ‘c’ the random noise and drift were small; above the change-point wealth they were larger. We found that for most stopping times an optimal ‘c’ did exist. This indicates that strategy 1 would be optimal, with the caveat that the model described above is a realistic enough representation of the real world.
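      A minimal sketch of such a change-point process (all parameter values here are illustrative assumptions, not those used in the analysis):

```python
import random

def simulate_changepoint(x0, c, t_max, seed=None):
    """One trajectory of a change-point process: below wealth c the
    multiplicative drift and noise are small, above c they are larger,
    and wealth 0 is absorbing (ruin). Parameter values are assumed
    for illustration, not taken from the talk."""
    rng = random.Random(seed)
    x = x0
    for _ in range(t_max):
        if x < c:
            mu, sigma = 0.01, 0.02   # cautious regime (assumed values)
        else:
            mu, sigma = 0.05, 0.20   # risky regime (assumed values)
        x *= 1.0 + mu + sigma * rng.gauss(0.0, 1.0)  # multiplicative step
        if x <= 0.0:
            return 0.0               # absorbed at the barrier: ruin
    return x
```

      Sweeping c and estimating ruin frequency or terminal wealth over many seeds reproduces the search for an optimal change-point.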

  2. This is very nice, James. Thank you for your talk.

    Let me see if I can formulate my thoughts as a concrete question. In EE we often deal with stochastic processes, starting from the same point as you do. We look at a trajectory over a finite time, linearize, and measure the slope of the transformed variable. This slope is a properly defined growth rate, and it is, as you say, a random variable.

    Then the problem arises of valuing a random growth rate or, even better, comparing two different random growth rates. This is the starting point both of EE and of classical economics. Because we want to compare and say something like “A is better than B”, we need a comparison operation that assigns the “better than” property to one of the two random numbers. This step is equivalent to an ordering: it’s like saying “random variable A is greater than random variable B” — which of course makes no sense. So before we say this, we have to turn the random variable, in one way or another, into a scalar, because only scalars have the “greater than” property.

    There are many different ways of “collapsing” a random variable into a scalar, and these different ways correspond to different decision theories.

    I still think that the following two collapses correspond to two distinguished decision theories:
    i) replace the original stochastic process with its expected value (this corresponds to an imagined infinitely large ensemble of processes sharing resources).
    ii) consider the infinite-time growth rate of the process (this tells us what happens to a single system in the long run).
    The reason I think these collapses are special is that they correspond to the limits of two physical operations (pooling and sharing with infinitely many friends, and continuing the game indefinitely).

    Stochastic precedence is another way of turning the random variables (the growth rates) into scalars and comparing those. My question for you is: to what physical operation does the comparison of probabilities in stochastic precedence correspond? Answering this question will tell us in what situations the corresponding decision theory is called for, and that’s something I’d really like to know.

    I think this is a promising question because you can see how the two limits we often consider (infinite ensemble or infinite time) arise as limiting cases of this new decision theory. So perhaps it’s possible to construct this as a principled extension, as you suggest, of the null-models of EE.

    I don’t think the answer is trivial. For instance, imagine two processes A and B. A is ruined in 40% of the trajectories and just sits at 100 in the remaining 60%. B sits at 99 in 80% of the trajectories and at 98 in 20% of the trajectories.
    The probability P(A > B) is 60% and the probability P(A < B) is 40%, so A is preferred to B. But it is clear that in many cases the 40% ruin probability is not acceptable, and a better decision theory would prefer B to A. So the question I would ask of physics is: what is the physical condition that makes stochastic precedence the appropriate decision theory?
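    The probabilities in this example can be computed exactly from the two discrete distributions; as a quick sketch:

```python
from itertools import product

# Discrete outcome distributions from the example: (value, probability)
A = [(0.0, 0.4), (100.0, 0.6)]   # ruined 40% of the time, else sits at 100
B = [(99.0, 0.8), (98.0, 0.2)]   # sits at 99 or 98

# Sum the joint probabilities of each inequality over all outcome pairs
p_a_gt_b = sum(pa * pb for (a, pa), (b, pb) in product(A, B) if a > b)
p_a_lt_b = sum(pa * pb for (a, pa), (b, pb) in product(A, B) if a < b)
# p_a_gt_b is 0.6 and p_a_lt_b is 0.4, so stochastic precedence picks A
```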

    1. Hi Ole,

      I know we answered this question during the session but I thought I’d paraphrase my answer here.

      The main difference between the stochastic precedence approach and the time/ensemble-average approach is how processes are compared. For time/ensemble averages the random process is characterised by a scalar value, and that value is compared between processes. In stochastic precedence the processes are compared first and then characterised by a scalar value (the probability).

      Our reasoning for choosing stochastic precedence comes from our interpretation of the long-term guarantee of EE for finite-time processes. EE is the study of decision making in a single universe; hence, when simulating the outcomes of our decisions, we are only allowed to see one trajectory. It doesn’t matter if this trajectory is £1000 better than the alternative or £10; all that matters is whether or not it is better than the alternative. By this logic we want to maximise the probability of being better than the alternative, which is precisely the condition of stochastic precedence.
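      The two orders of operations can be sketched on paired samples of growth rates (distribution parameters are purely illustrative):

```python
import random

rng = random.Random(0)
# Two samples of finite-time growth rates (illustrative parameters)
g_a = [rng.gauss(0.05, 0.2) for _ in range(10_000)]
g_b = [rng.gauss(0.04, 0.1) for _ in range(10_000)]

# Time/ensemble-average style: collapse each process to a scalar first,
# then compare the scalars
mean_a = sum(g_a) / len(g_a)
mean_b = sum(g_b) / len(g_b)
prefer_by_mean = "A" if mean_a > mean_b else "B"

# Stochastic precedence: compare trajectories first, then collapse the
# comparison to a scalar, the probability P(A > B)
p_a_beats_b = sum(a > b for a, b in zip(g_a, g_b)) / len(g_a)
prefer_by_precedence = "A" if p_a_beats_b > 0.5 else "B"
```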

      Your example of stochastic precedence failing is very valid, and asking why it fails points to the flaw in the logic of the previous paragraph. In my mind this is linked to the idea of regret or risk (which was your answer to the question you posed above). For option A, the risk of being ruined outweighs the slight potential gain, so perhaps our earlier interpretation of the long-term guarantee from standard EE is wrong. Perhaps the strength of the guarantee is that there is no long-term risk, in which case a risk-minimising metric would be more suitable than stochastic precedence.

  3. Interesting talk! On one of your slides, you showed how depending on the time horizon t, the preference for or against strategies can change. Is it also possible to re-frame this as a problem of initial conditions for a fixed time horizon T: If you start below a threshold of X*, the safe strategy is the better choice (because you are already very close to the value 0, i.e. ruin), but above X*, the risky strategy is the better choice? Do you think there also exists such a behaviour?

    1. Hello Tobias,

      Thank you for your question. You are correct that in both the ruin and finite-time problems the initial condition has an effect. In the world of normally distributed noise there is a unique threshold wealth X*; this is because the probability of ruin decreases exponentially as the initial wealth increases linearly.
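      For the standard case of Brownian motion with positive drift this exponential decay has a well-known closed form, which can serve as a sanity check (a sketch; the model in the talk may differ):

```python
import math

def ruin_probability(x0, mu, sigma):
    """P(ever hitting 0) for Brownian motion with drift mu > 0 and
    volatility sigma, started at x0 > 0: the standard closed form
    exp(-2 * mu * x0 / sigma**2)."""
    return math.exp(-2.0 * mu * x0 / sigma ** 2)

# Each extra unit of initial wealth multiplies the ruin probability
# by the same constant factor, i.e. exponential decay in x0
r1 = ruin_probability(1.0, 0.05, 0.2)
r2 = ruin_probability(2.0, 0.05, 0.2)
r3 = ruin_probability(3.0, 0.05, 0.2)
```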

  4. James

    A very interesting talk – thank you.

    Am I right in thinking that you have confined your attention to additive dynamics in the knowledge that you can extend the results you obtain to other dynamics that are mappable to additive dynamics? (I imagine the mapping should be a 1-1 increasing function to preserve wealth orderings.)

    If so, then this means you are extending the class of dynamics from those which are mappable to additive dynamics (as in https://arxiv.org/abs/1801.03680) to include dynamics with finite-time ruin and to allow for decisions based on finite-time horizons.

    This work is similar in spirit to that presented by Anthony Britto in 4-3, in that it seeks to expand the space of dynamics and decision criteria that can be handled in EE, presumably to allow the theory to describe a wider range of real scenarios. There may be some interesting overlaps to explore between your research.

    Best wishes
    Alex

    1. Hi Alex,

      Thank you for your question. You are correct about the reasoning behind focusing only on additive dynamics. Stochastic precedence has the convenient feature that P(X < Y) = P(log X < log Y) = P(U(X) < U(Y)) for any strictly increasing utility function U(x), which makes ergodicity transforms a simple generalisation of the additive regime.
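      This invariance is easy to verify empirically; a minimal sketch with log-normal samples (distribution parameters assumed for illustration):

```python
import math
import random

def precedence(xs, ys):
    """Estimate P(X < Y) from paired samples."""
    return sum(x < y for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(0)
xs = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(10_000)]  # X > 0
ys = [math.exp(rng.gauss(0.2, 1.0)) for _ in range(10_000)]  # Y > 0

p_raw = precedence(xs, ys)
p_log = precedence([math.log(x) for x in xs], [math.log(y) for y in ys])
# log is strictly increasing, so every pairwise comparison agrees
# and the two estimates are identical, not just approximately equal
```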

      I've actually already spoken to Anthony about his transform estimation methods for exactly the reasons you mention. I think there are further links between both of our works and the reinforcement learning research that is featuring in EE2023. Trying to capture the essence of EE for a wider range of random processes, such as those in RL and actuarial science, is definitely not a simple problem and is a big motivation for the work I'm presenting here.

  5. Great talk, thank you; this is something I have been curious about.

    Couple of minor/trivial things.

    1. Just curious, is there a reason why when you write probabilities in the context of stochastic precedence the P is in blackboard bold? Does it distinguish something other than a typical probability?

    2. You mention how this approach involves calculating growth rates and then forming stochastic precedence preferences, but can’t you do this without growth rates, just by considering terminal wealths and calculating probabilities of inequalities on those?

    3. Just to be sure, in your final graph with stochastic precedence preferences, are those preferences evaluated for terminal wealth? (or are they somehow evaluated over the wealths along the trajectory?)

    1. Hi Ollie,

      Thank you for your questions.
      1. The blackboard-style P is just my own preference for the typical probability symbol; it doesn’t distinguish anything other than a typical probability.

      2. You’re correct: for finite-time processes we can just compare the distributions of wealth at the terminal time rather than growth rates. Additionally, even for problems such as infinite-time ruin problems (which involve infinite wealth) we can move the limit from inside the probability operator to outside and still compare wealths. This approach also allows us to compare random processes that have different ergodicity transforms. For example, we can compare additive processes against multiplicative processes by considering P(X < Y) where X is normal and Y is log-normal.
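      A Monte Carlo sketch of such a cross-dynamics comparison (all parameters are illustrative assumptions, not from the talk):

```python
import math
import random

def mc_precedence_normal_lognormal(n=50_000, seed=0):
    """Monte Carlo estimate of P(X < Y) where X is a normal terminal
    wealth (additive dynamics) and Y is a log-normal terminal wealth
    (multiplicative dynamics). Parameter values are assumed."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = 1.0 + rng.gauss(0.05, 0.10)       # additive: 1 + normal return
        y = math.exp(rng.gauss(0.03, 0.10))   # multiplicative: log-normal
        hits += x < y
    return hits / n
```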

      3. The graph shows preferences evaluated for the terminal wealth. We can rewrite P(X < Y) = F(0), where F is the CDF of the difference distribution X-Y, which can be evaluated using numerical integration.
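      A sketch of this F(0) computation by numerical integration, using normal marginals so the result can be checked against a closed form (an illustration, not the exact setup from the talk):

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def precedence_by_integration(mu_x, s_x, mu_y, s_y,
                              lo=-12.0, hi=12.0, n=4000):
    """F(0) for D = X - Y, i.e. P(X < Y), via the trapezoidal rule on
    the integral of f_X(x) * (1 - F_Y(x)). Both marginals are taken
    normal here so the result can be checked against the closed form
    Phi((mu_y - mu_x) / sqrt(s_x**2 + s_y**2))."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        f_x = phi((x - mu_x) / s_x) / s_x          # density of X at x
        total += w * f_x * (1.0 - Phi((x - mu_y) / s_y))  # times P(Y > x)
    return total * h
```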
