In talking with people about the long-term future, I've found it to be extremely helpful to give an estimate for the percent chance humanity goes extinct by 2100 (or in 100 years). Right now, I say ~10% or 5-19%, and then say something like "it would be really nice if we could get that number below 1%". My estimate is taken from these sources:

  • FHI's informal 2008 survey of various x-risks. Taken together, its estimates imply a 19% chance of extinction by 2100.
  • The 2007 Stern Review, a 700-page report on climate change. It uses 0.1% as an upper-bound modeling assumption for annual extinction risk, which works out to roughly 9.5% over the next 100 years (by 2107; see the quick conversion check after this list).
  • This July 2018 Vox article from Liv Boeree that references the FHI study and also says "5 to 19 percent chance of complete human extinction by the end of this century". (I'm not sure where the 5% comes from?)
  • This 2016 report on GCRs from FHI and the Global Priorities Project. They reference the two sources above and just say it's hard to create a reasonable estimate.
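
For readers who want to check how the per-year and per-century figures above relate, here is a minimal sketch (my own arithmetic, not from any of the sources listed), assuming a constant annual extinction risk and independence across years:

```python
# Convert between a constant annual extinction risk and a cumulative
# probability over a time horizon (simplifying assumption: constant rate).

def cumulative_risk(annual_rate: float, years: int) -> float:
    """Probability of extinction within `years`, given a constant annual rate."""
    return 1 - (1 - annual_rate) ** years

def implied_annual_rate(cumulative: float, years: int) -> float:
    """Constant annual rate that would produce `cumulative` risk over `years`."""
    return 1 - (1 - cumulative) ** (1 / years)

print(cumulative_risk(0.001, 100))    # Stern's 0.1%/yr -> ~0.095, i.e. ~9.5% over a century
print(implied_annual_rate(0.19, 92))  # FHI's 19% by 2100 (from 2008) -> ~0.0023, i.e. ~0.23%/yr
```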

However, I find the evidence behind these estimates pretty weak. If someone were to ask me to "back up" my 10% number, the best I'd have is an informal survey circulated at an x-risk conference in 2008. So, a couple questions:

  1. Are there other sources that I'm missing?
  2. Do others also feel like it would be helpful to have an updated/more rigorous estimate here? (Or is it not actually helpful to operate at this level of abstraction? i.e. Should we concentrate just on individual sources of x-risk instead?)
  3. Is it even possible to create an estimate like this? Or is the range of uncertainty so large that we'd need to give an estimate like 2-59%? (Clearly, this gets more difficult the longer we try to project out. But can't we estimate it for 2050, 2075, or 2100?)

Thanks for your thoughts and help!


Earlier this year Good Judgment superforecasters (in nonpublic data) gave a median probability of 2% that a state actor would make a nuclear weapon attack killing at least 1 person before January 1, 2021. Conditional on that happening they gave an 84% probability of 1-9 weapons detonating, 13% to 10-99, 2% to 100-999, and 1% to 100 or more.

Here's a survey of national security experts which gave a median 5% chance of a nuclear great power conflict killing at least 80 million people over 20 years, although some of the figures in the tables look questionable (mean less than half of median).

It's not clear how much one should trust these groups in this area. Over a longer time scale I would expect the numbers to be higher, since the low near-term figures partly reflect the information that we are not currently in a Cold War (or a hot war!), and various technological and geopolitical factors (e.g. the shift to multipolar military power and the rise of China) may drive the risk up.

Do you have private access to the Good Judgment data? I've thought before about how it would be good to get superforecasters to answer such questions, but I didn't know of a way to access the results of previous questions.

(Though there is the question of how much superforecasters' previous track record on short-term questions translates to success on longer-term questions.)

GJ results (as opposed to Good Judgment Open) aren't public, but Open Phil has an account with them. This is from a batch of nuclear war probability questions I suggested that Open Phil commission to help assess nuclear risk interventions.

This is really cool, Carl. Thanks for sharing. Do superforecasters ever make judgments about other x-risks?

Not by default, but I hope to get more EA-action-relevant forecasts performed and published in the future.

Hi Carl, has there been any progress on this in the past year? I'd be very interested to see x-risk-relevant forecasts (I'm currently working on a related project).

Shouldn't the 1% be "1000 or more"?

Incidentally, CSER's Simon Beard has a working paper just up looking at sources of evidence for probability assessments of different x-risks and GCRs, and the underlying methodologies. It may be useful for people thinking about the topic of this post (I also imagine he'd be pleased to get comments, as this will go to a peer-reviewed publication in due course). http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf

We are indeed keen to get comments and feedback. Also note that the final third or so of the paper is an extensive catalogue of assessments of the probability of different risks, in which we try to incorporate all the sources we could find (though we would be very happy to hear if others know of more).

I will say however that the overwhelming sense I got in doing this study is that it is sometimes best not to put this kind of number on risks.

Hey Simon! Thanks for writing up this paper. The final third is exactly what I was looking for!

Could you give us a bit more texture on why you think it's "best not to put this kind of number on risks"?

Hey Rhys

Thanks for prompting me on this. I was hoping to find time for a fuller reply to you, but this will have to do; you only asked for the texture, after all. My concerns are somewhat nebulous, so please don't take them as any cast-iron reason not to seek out estimates for the probability of different existential risks. However, I think they are important.

The first relates to the degree of uncertainty that surrounds any estimate of this kind and how it should be handled. There are actually several sources of this.

The first of these relates to the threshold for human extinction. We don't actually have very good models of how the human race might go extinct. Broadly speaking, human beings are highly adaptable and can of course survive across an extremely wide range of habitats, at least with sufficient technology and planning. So, roughly, for human extinction to occur a change must be either extremely profound (such as the destruction of the Earth, our sun, or the entire universe), very fast (such as a nuclear winter), something that can adapt to us (such as AGI or aliens), or something that we choose not to adapt to (such as climate change). However, personally, I have a hard time even thinking about just what the limits of survivability might be. Now, it is relatively easy to cover this with a few simplifying assumptions, for instance that 10 degrees of climate change either way would clearly represent an existential threat. However, these are only assumptions. Then there is the possibility that we will actually be more vulnerable to certain risks than it appears, for instance that certain environmental changes might cause an irrevocable collapse in human civilization (or in the human microbiome, if you are that way inclined). The Global Challenges Foundation used the concepts of 'infinite threshold' and 'infinite impact' to capture this kind of uncertainty, and I think they are useful concepts. However, they don't necessarily speak to our concern, which is to know the probability of human extinction and x-risk rather than that of potential x-risk triggers.

The other obvious source of uncertainty is uncertainty about what will happen. This is more mundane in many ways; however, when we are estimating the probability of an unprecedented event like this, I think it is easy to understate the uncertainty inherent in such estimates, because there is simply so little data to contradict our main assumptions, which leads to overconfidence. The real issue with both of these, however, is not that uncertainty means we should not put numerical values on the likelihood of anything, but that we are just not very good at dealing with numerical figures that are highly uncertain, especially where these are stated and debated in a public forum. Even if uncertainty ranges are presented, and they accurately reflect the degree of certainty the assessor can justifiably claim to have, they quickly get cut away, with commentators preferring to focus on one simple figure, be it the mean or the upper or lower bound, to the exclusion of all else. This happens, and we should not ignore the pitfalls it creates.

The second concern I have is about context. In your post you mention the famous figure from the Stern Review, and this is a great example of what I mean. Stern came up with that figure for one reason and one alone: he wanted to argue for the highest discount rate he believed was ethically justified, in order to give maximum credence to his conclusions (or, if you are more cynical, perhaps 'he wanted to make it look like he was arguing for...'). However, since he also thought that most economic arguments for discounting were not justified, he was left with the conclusion that the only reason to prefer wellbeing today over wellbeing tomorrow was that there might be no tomorrow. His 0.1% chance of human extinction per year (note that this is supposedly the 'background' rate, by the way; it is definitely not the probability of a climate-induced extinction) was the highest figure he could propose that would not be taken as overly alarmist. If you think that sounds a bit off, then reflect on the fact that the mortality rate in the UK at present is around 0.8%, so Stern was saying that one could expect more than 10% of human mortality in the near future to result from human extinction. I think that is not at all unreasonable, but I can see why he didn't want to put the background extinction risk any higher. Anyway, the key point here is that none of these considerations was really about producing any kind of estimate of the likelihood of human extinction; it was just a guess that he felt would be reasonably acceptable from the point of view of trying to push up the time discount rate a bit. However, of course, once it was out there it got used, and continues to get used, as if it were something quite different.
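
A rough back-of-the-envelope version of that comparison (the ~0.8% UK all-cause mortality rate is the approximate figure quoted in the comment above):

```python
# Share of expected near-term human mortality attributable to extinction,
# if Stern's 0.1%/yr "background" extinction rate is taken at face value.
background_extinction_rate = 0.001  # per year, Stern's modelling assumption
uk_all_cause_mortality = 0.008      # per year, approximate figure from the comment

print(background_extinction_rate / uk_all_cause_mortality)  # 0.125 -> more than 10%
```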

The third concern I have is that I think it can be at least somewhat problematic to break down existential risks by threat, which people generally need to do if they are to assign probability estimates to them. To be fair, you are here interested in the probability of human extinction as a whole, which does not face this particular problem. However, many of the estimates that I have come across relate to specific threats. The issue here is that much of the damage from any particular threat comes from its systemic and cascading effects. For instance, when considering the existential threat from natural pandemics, I am quite unconcerned that a naturally occurring (or even most man-made) pathogen might literally wipe out all of humanity; the selection pressures against that would be huge. I am somewhat more concerned that such a pandemic might cause a general breakdown in global order, leading to massive global wars or the collapse of the global food supply. However, I am mostly concerned that a pandemic might cause a social collapse in a single state that possessed nuclear weapons, leading to those weapons becoming insecure. If I simply include this in the probability of either human extinction via pandemic or human extinction via nuclear war, that seems to me to be misleading; but if it got counted in both, this could lead to double counting later on. Of course, with great care and attention this sort of problem can be dealt with. However, on the whole, when people make assessments of the probability of existential risks, they tend to pool together all the available information, much of which has been produced without any coordination, making such double counting, or zero counting, not unlikely.
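
A toy numerical illustration of that double-counting worry, with entirely made-up numbers used purely for the sake of the example:

```python
# Hypothetical per-threat extinction probabilities whose scenarios overlap:
# here the pandemic pathway's main route to extinction runs through nuclear war.
p_pandemic_route = 0.02   # includes "pandemic -> state collapse -> nuclear war"
p_nuclear_route = 0.03    # includes nuclear wars however they are triggered
p_overlap = 0.015         # scenarios that fall under both headings

naive_sum = p_pandemic_route + p_nuclear_route       # 0.05: counts the overlap twice
inclusion_exclusion = naive_sum - p_overlap          # 0.035: counts each scenario once
print(naive_sum, inclusion_exclusion)
```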

Please let me know if you would like me to try and write more about any of these issues (although, to be honest, I am currently quite stretched, so this may have to wait a while). You may also be interested in a piece I wrote with Peter Hurford and Catheryn Mercow (I won't lie, it was mostly they who wrote it) on how different EA organizations account for uncertainty, which has had quite a bit of impact on my thinking: http://effective-altruism.com/ea/193/how_do_ea_orgs_account_for_uncertainty_in_their/

Also, if you haven't already seen it, you might be interested in this piece by Eliezer Yudkowsky: https://www.lesswrong.com/posts/AJ9dX59QXokZb35fk/when-not-to-use-probabilities

PS: Obviously these concerns have not yet led me to give up on working with these kinds of estimates, and indeed I would like them to be made better in the future. However, they still trouble me.

Perfect, thanks! I agree with most of your points (I'm just restating them here for my own understanding and for others):

  • Uncertainty is hard (long time scales, humans are adaptable, and risks are systemically interdependent, so we get zero or double counting)
  • Probabilities have incentives (e.g. Stern's discounting incentive)
  • Probabilities get simplified (0-10% can turn into 5% or 0% or 10%)

I'll ping you as I get closer to an editable draft of my book, so we can ensure I'm painting an appropriate picture. Thanks again!

Thanks for posting this.

I don't think there are any other sources you're missing; at least, if you're missing them, I'm missing them too (and I work at FHI). I guess my overall feeling is that these estimates are hard to make and necessarily imprecise: long-run, large-scale estimates (e.g. what was the likelihood of a nuclear exchange between the US and Russia between 1960 and 1970?) are still very hard to make ex post, let alone ex ante.

One question might be how important further VoI is for particular questions. I guess the overall 'x-risk chance' may have surprisingly small action relevance. The considerations about the relative importance of x-risk reduction seem to be fairly insensitive to whether the risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, population ethics, etc.

Risk share seems more important (e.g. how much more worrying is AI than nuclear war?), yet these comparative judgements can generally be made in relative terms, without having to cash out the absolute values.

The considerations about the relative importance of x-risk reduction seem to be fairly insensitive to whether the risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, population ethics, etc.

I think differences over that range matter a lot, both within a long-termist perspective and over a pluralist distribution across perspectives.

At the high end of that range, the low-hanging fruit of x-risk reduction will also be very effective at saving the lives of already existing humans, making the case for it less dependent on concern for future generations.

At the low end, non-existential-risk trajectory changes look more important within a long-termist frame, as does capacity building for later challenges.

The magnitude of the risk also feeds importantly into processes for allocating effort under moral uncertainty and moral pluralism.

Anders Sandberg's Flickr account has a 2014 photo of a whiteboard from FHI containing estimates for the following statements/questions:

  • Probability that >50% of humans will die in a disaster in next 100 years
  • Are we living in a computer simulation created by some advanced civilization?
  • Your credence that humanity goes extinct in the next 100 years – replacing us with something better (e.g. WBE) doesn't count
  • Your credence that AGI is developed by 2050 (on Earth)

The photo caption is:

Office guesses at (A) a disaster killing 50%+ of humanity in the next century, (B) our reality turning out to be a simulation, (C) extinction within a century, and (D) artificial general intelligence before 2050.

This is based on earlier Aumann agreement experiments we did. Credences are free to update as we see each other's views, as well as get new evidence.

There are two other photos showing parts of the same (or similar) whiteboard.

HT: Louis Francini for originally pointing me to these photos.

The staff at FHI thinks we're more likely to be a computer simulation than to die out in the next century?!

I have a post coming up soon on public opinion on this question, not that it tells us what a well-researched estimate would be. As a follow-up, it could be worth investigating whether presenting people with a number (if any knowledgeable researchers can be persuaded to give one) makes a difference compared with presenting a similar argument without a numerical risk estimate, in terms of whether people think the issue is important or support or take action to address it.

If we're really uncertain how to quantify the risks, why do we do it (in everyday conversation)?

  • Gregory's comment suggests that precisely quantifying the risks doesn't matter much for far-future arguments.

  • People could argue with the numbers we use rather than the actual logic; precisely defining a percentage chance of extinction could pull focus from more important arguments.

  • Using numbers that seem precise (e.g. 10%) may signal a degree of certainty that we can't back up. Using numbers carelessly isn't something we want to be known for.

Perhaps the answer is to avoid quantifying the risk.

I am thoroughly shocked by these numbers and can't understand why they haven't been more widely publicized (or perhaps everyone is aware of them apart from me!). I've been an effective altruist for 4 years, but I never realized that knowledgeable people had such seriously high estimates of the risk of total extinction over the next 100 years. If people have estimates like these (or, as Gregory Lewis says, within factors of 10,000 of them), then surely the logical conclusion is that all other causes pale in comparison and that all EAs should focus their efforts purely on x-risk. And yet, judging by EA funds donations, the community sees it very differently.

There is more to the issue: you also need to look at the chance that extinction can be avoided by a given donation. Otherwise it would be just like saying that we should donate against poverty simply because there are a billion poor people.

In Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks the author Phil Torres mentions (in addition to FHI's informal 2008 survey) that

"the philosopher John Leslie argues that we have a 30 percent chance of extinction in the next five centuries"

and that

"the cosmologist Martin Rees writes in a 2003 book that civilization has a 50-50 chance of surviving the present century."

The book also references the Bulletin of the Atomic Scientists' Doomsday Clock, which is now (as of 2018) as close to "midnight" as it was in 1953.

Don't forget the Doomsday Argument.

https://arxiv.org/abs/1705.08807 has a question about the probability that the outcome of AI will be "extremely bad."

Where in the Stern report are you looking?

The fixed 0.1% extinction risk is used as a discount rate in the Stern report. That closes the model to give finite values (instead of infinite benefits) after they exclude pure temporal preference discounting on ethical grounds. Unfortunately, the assumption of infinite confidence in a fixed extinction rate gives very different (lower) expected values than a distribution that accounts for the possibility of extinction risk eventually becoming stably low for long periods (the Stern version gives a probability of less than 1 in 20,000 to civilization surviving another 10,000 years, even though agriculture is already 10,000 years old).
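
A quick check of that parenthetical figure (my own arithmetic, under the same fixed-rate assumption):

```python
# With a fixed 0.1%/yr extinction rate, the implied probability of
# civilization surviving another 10,000 years:
survival = (1 - 0.001) ** 10_000
print(survival)        # ~4.5e-05
print(1 / survival)    # ~22,000, i.e. odds worse than 1 in 20,000
```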
