In talking with people about the long-term future, I've found it extremely helpful to give an estimate of the chance that humanity goes extinct by 2100 (or within 100 years). Right now, I say ~10%, or 5-19%, and then add something like "it would be really nice if we could get that number below 1%". My estimate is drawn from these sources:
- FHI's informal 2008 survey of various x-risks. Taken together, the responses give a 19% chance of extinction by 2100.
- The 2007 Stern Review, a 700-page report on climate change. It uses 0.1% as an upper-bound modeling assumption for annual extinction risk, which compounds to 9.5% over the next 100 years (by 2107; see the sketch after this list).
- This July 2018 Vox article by Liv Boeree, which references the FHI survey and also says there's a "5 to 19 percent chance of complete human extinction by the end of this century". (I'm not sure where the 5% comes from.)
- This 2016 report on GCRs from FHI and the Global Priorities Project. It references the two sources above and just says it's hard to produce a reasonable estimate.
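For concreteness, here's the arithmetic behind the Stern figure: a constant annual extinction probability p compounds to a cumulative probability of 1 - (1 - p)^n over n years. A minimal Python sketch, assuming the Stern Review's constant-hazard model (the horizons other than 100 years are just for illustration):

```python
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Cumulative extinction probability over `years`, assuming a
    constant, independent annual extinction probability."""
    return 1 - (1 - annual_risk) ** years

# Stern Review's upper-bound modeling assumption: 0.1% per year.
annual = 0.001

# 1 - 0.999**100 = 0.0952..., i.e. the ~9.5% by 2107 quoted above.
print(f"100 years: {cumulative_risk(annual, 100):.1%}")

# The same assumption at shorter horizons, with years counted
# from the 2007 report.
for year in (2050, 2075, 2100):
    n = year - 2007
    print(f"by {year} ({n} years): {cumulative_risk(annual, n):.1%}")
```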
However, I find these estimates pretty weak. If someone asked me to "back up" my 10% number, the best I'd have is an informal survey circulated at an x-risk conference in 2008. So, a couple of questions:
- Are there other sources that I'm missing?
- Do others also feel it would be helpful to have an updated/more rigorous estimate here? (Or is it not actually helpful to operate at this level of abstraction? I.e., should we concentrate on individual sources of x-risk instead?)
- Is it even possible to create an estimate like this? Or is the range of uncertainty so large that we'd have to give an estimate like 2-59%? (Clearly, this gets harder the further out we try to project. But can't we estimate it for 2050, 2075, or 2100?)
Thanks for your thoughts and help!
Thanks for posting this.
I don't think there are any other sources you're missing - at least, if you're missing them, I'm missing them too (and I work at FHI). I guess my overall feeling is that these estimates are hard to make and necessarily imprecise: long-run, large-scale estimates (e.g. what was the likelihood of a nuclear exchange between the US and Russia between 1960 and 1970?) are still very hard to make ex post, let alone ex ante.
One question might be how important further VoI (value of information) is for particular questions. I guess the overall 'x-risk chance' may have surprisingly little action relevance. The considerations about the relative importance of x-risk reduction seem fairly insensitive to whether the risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, population ethics, etc.
Risk share seems more important (e.g. how much more worrying is AI than nuclear war?), yet these comparative judgements can generally be made in relative terms, without having to cash out the absolute values.
I think differences over that range matter a lot, both within a long-termist perspective and over a pluralist distribution across perspectives.
At the high end of that range the low-hanging fruit of x-risk reduction will also be very effective at saving the lives of already existing humans, making…