
This document explores and develops methods for forecasting extreme outcomes, such as the maximum of a sample of n independent and identically distributed random variables. I was inspired to write this by Jaime Sevilla’s recent post with research ideas in forecasting and, in particular, his suggestion to write an accessible introduction to the Fisher–Tippett–Gnedenko Theorem.

I’m very grateful to Jaime Sevilla for proposing this idea and for providing great feedback on a draft of this document.

Summary

The Fisher–Tippett–Gnedenko Theorem is similar to a central limit theorem, but for the maximum of random variables. Whereas central limit theorems tell us what happens on average, the Fisher–Tippett–Gnedenko Theorem tells us what happens in extreme cases. This makes it especially useful in risk management, where we need to pay particular attention to worst-case outcomes. It could be a useful tool for forecasting tail events.

This document introduces the theorem, describes the limiting probability distribution and provides a couple of examples to illustrate the use (and misuse!) of the Fisher–Tippett–Gnedenko Theorem for forecasting. In the process, I introduce a tool that computes the distribution of the maximum of n iid random variables that follow a normal distribution centrally but with an (optional) right Pareto tail.

In brief:

  • The Fisher–Tippett–Gnedenko Theorem says (roughly) that if the suitably rescaled maximum of n iid random variables, which is itself a random variable, converges in distribution as n grows to infinity, then the limit must be a generalised extreme value (GEV) distribution (the precise statement is written out just after this list)
  • Use cases:
    • When we have lots of data, we should try to fit our data to a GEV distribution, since this is the distribution that the maximum should converge to (if it converges); see the first code sketch after this list
    • When we have subjective judgements about the distribution of the maximum (e.g. a 90% credible interval and a median forecast), we can use these to determine the parameters of a GEV distribution that fits those judgements; see the second code sketch after this list
    • When we know or have subjective judgements about the distribution of the random variables we’re maximising over, the theorem can help us determine the distribution of the maximum of n such random variables for large n, but this can give very bad results when our assumptions or judgements are wrong
  • Limitations:
    • To get accurate forecasts about the maximum of n random variables from the distribution of the underlying random variables, we need accurate judgements about that distribution’s right tail, because the maximum will very likely be drawn from the tail, especially as n gets large
    • Even for data that is very well described by a normal distribution for typical values, normality can break down at the tails and this can greatly affect the resulting forecasts
      • I use the example of human height: naively assuming normality underestimates how extreme the tallest and shortest humans are because height is “only” normally distributed up to 2-3 standard deviations around the mean
    • Modelling the tail separately (even with quite a crude model) can improve forecasts
  • This simple tool might be good enough for forecasting purposes in many cases (a code sketch of its logic appears below)
    • It assumes that the underlying r.v.s are iid and normally distributed up to k standard deviations above the mean and that there is a Pareto tail beyond this point
    • Inputs:
      • 90% CI for the underlying r.v.s
      • n (the number of samples of the underlying random variables)
      • k (the number of SDs above the mean at which the Pareto tail starts); set this high if you don’t want a Pareto tail
    • Output: cumulative distribution function, approximate probability density function and approximate expectation of the maximum of n samples of the underlying random variables
  • Request for feedback: I’m not an experienced forecaster and I don’t know what kind of information and tools would be most useful for forecasters. Let me know how this kind of work could be extended or adapted to be more useful!
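To make the first bullet precise: if $X_1, \dots, X_n$ are iid with CDF $F$ and $M_n = \max(X_1, \dots, X_n)$, then

$$\Pr(M_n \le x) = \Pr(X_1 \le x) \cdots \Pr(X_n \le x) = F(x)^n.$$

The theorem says that if there are sequences $a_n > 0$ and $b_n$ such that $(M_n - b_n)/a_n$ converges in distribution to a non-degenerate limit, then that limit has the generalised extreme value CDF

$$G(x) = \exp\left(-\left[1 + \xi \frac{x - \mu}{\sigma}\right]^{-1/\xi}\right) \quad \text{wherever } 1 + \xi \frac{x - \mu}{\sigma} > 0,$$

with the convention $G(x) = \exp(-e^{-(x - \mu)/\sigma})$ in the limit $\xi \to 0$. The shape parameter $\xi$ picks out the family: $\xi > 0$ gives the heavy-tailed Fréchet case, $\xi = 0$ the Gumbel case, and $\xi < 0$ the Weibull case, which has a finite upper endpoint.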
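For the first use case, here is a minimal sketch of fitting a GEV to observed maxima with scipy. The data is simulated purely for illustration (100 "yearly" maxima, each the largest of 365 iid daily values); with real data you would substitute your own block maxima.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data: 100 "yearly maxima", each the max of 365 iid daily values
maxima = rng.normal(loc=0.0, scale=1.0, size=(100, 365)).max(axis=1)

# Fit a GEV; note scipy's shape convention is c = -xi relative to the usual GEV
c, loc, scale = stats.genextreme.fit(maxima)
fitted = stats.genextreme(c, loc=loc, scale=scale)
print(f"c = {c:.3f} (xi = {-c:.3f}), loc = {loc:.3f}, scale = {scale:.3f}")

# The fitted distribution can then be queried directly, e.g. for a tail quantile
print(f"99th percentile of the yearly maximum: {fitted.ppf(0.99):.3f}")
```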
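For the second use case, here is a sketch of backing out GEV parameters from subjective judgements. Three judgements (a median and a 90% credible interval; the numbers below are made up) give three equations for the three parameters $\mu$, $\sigma$, $\xi$, which a generic root-finder can solve.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical judgements about the maximum: 90% CI [10, 20], median 13
targets = {0.05: 10.0, 0.5: 13.0, 0.95: 20.0}

def gev_quantile(p, mu, sigma, xi):
    # Quantile function of the GEV (valid for xi != 0)
    return mu + sigma * ((-np.log(p)) ** (-xi) - 1) / xi

def residuals(params):
    mu, sigma, xi = params
    return [gev_quantile(p, mu, sigma, xi) - q for p, q in targets.items()]

# Three equations, three unknowns; start from a rough initial guess
mu, sigma, xi = optimize.fsolve(residuals, x0=[13.0, 3.0, 0.1])
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, xi = {xi:.2f}")

# Sanity check with scipy (whose shape parameter is c = -xi):
dist = stats.genextreme(-xi, loc=mu, scale=sigma)
print(dist.ppf([0.05, 0.5, 0.95]))  # should reproduce 10, 13, 20
```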

I expect the time-poor reader to get most of the value from this document by reading the informal statement of the Fisher–Tippett–Gnedenko Theorem, the overview of the generalised extreme value distribution, and the shortest and tallest people in the world example, and then maybe making a copy and playing around with the tool for forecasting the maximum of n random variables that follow normal distributions with Pareto tails (consulting this as needed).
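For readers who would rather see the tool's logic than open it, here is a minimal sketch in Python under the same assumptions: a normal body fitted to the 90% CI, a Pareto tail glued on k standard deviations above the mean (scaled so the CDF is continuous at the junction), and the maximum's CDF obtained as F(x)^n. The tail index alpha is a free parameter in this sketch, which the actual tool may pin down differently, and all the numbers in the example are made up.

```python
import numpy as np
from scipy import stats

def max_cdf(ci_low, ci_high, n, k, alpha=3.0):
    """CDF of the max of n iid draws: normal body with a right Pareto tail."""
    mu = (ci_low + ci_high) / 2
    sigma = (ci_high - ci_low) / (2 * stats.norm.ppf(0.95))  # 90% CI = mu +/- 1.645 sd
    x_k = mu + k * sigma          # junction between normal body and Pareto tail
    p_tail = stats.norm.sf(k)     # probability mass the normal puts above the junction

    def cdf_one(x):
        x = np.asarray(x, dtype=float)
        body = stats.norm.cdf(x, mu, sigma)
        # Pareto tail, scaled so the combined CDF is continuous at x_k
        tail = 1 - p_tail * (x_k / np.maximum(x, x_k)) ** alpha
        return np.where(x <= x_k, body, tail)

    return lambda x: cdf_one(x) ** n   # P(max <= x) = F(x)^n for iid draws

# Made-up example: underlying 90% CI [0, 100], n = 1000 samples, tail from k = 3
F = max_cdf(0.0, 100.0, n=1000, k=3)
for x in (120, 150, 200, 300):
    print(x, float(F(x)))
```

The approximate pdf and expectation that the tool also reports can be recovered from this CDF by numerical differentiation and integration.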



Comments

One thing I might recommend in a document like this is to make clear up front, with concrete real examples, what the use case of this theorem is. You eventually mention forecasting extreme height, but I was a bit confused by that, and some readers may not reach it. More generally, after a quick read I am still a bit unclear on why I would want to use or know this concept.

For example, you could write something like “Suppose that you are trying to forecast [real thing of interest]. Some of the relevant variables at play here are [ABC]. A naive approach might be [x] or assume [y], but actually, according to this theorem, [____].”

Quick note: the Google Docs link you shared has suggestion privileges, which you might not want for a public doc.

Thank you for this. It's a useful contribution, and I upvoted it.

I'd be interested in some discussion about when we'd expect this mathematics to be materially useful, especially when compared with other hard elements of doing this sort of forecast.

Example: if I want to estimate the extent to which averting a gigatonne of greenhouse gas (GHG) emissions influences the probability of human extinction, I suspect that the Fisher-Tippett-Gnedenko theorem isn't very important (shout if you disagree). Other considerations (like: "have I considered all the roundabout/indirect ways that GHG emissions could influence the chance of human extinction?") are probably more important.
