
Summary: The most dangerous existential risks appear to be the ones that we only became aware of recently. As technology advances, new existential risks appear. Extrapolating this trend, there might exist even worse risks that we haven't discovered yet.

Epistemic status: From the inside view, I find the core argument compelling, but it involves sufficiently complicated considerations that I'm not more than 50% confident in its correctness. In this essay, if I claim that something is true, what I really mean is that it's true from the perspective of a particular argument, not necessarily that I believe it.

Cross-posted to my website.

Unknown existential risks

Humanity has existed for hundreds of thousands of years, and civilization for about ten thousand. During this time, as far as we can tell, humanity faced a relatively low probability of existential catastrophe (far less than a 1% chance per century). But more recently, while technological growth has offered great benefits, it has also introduced a new class of risks to the future of civilization, such as the possibilities of nuclear war and catastrophic climate change. And some existential risks have been conceived of but aren't yet possible, such as self-replicating nanotechnology and superintelligent AI.

In his book The Precipice, Toby Ord classifies existential risks into three categories: natural, anthropogenic, and future. He provides the following list of risks:

  • Natural Risks
    • Asteroids & Comets
    • Supervolcanic Eruptions
    • Stellar Explosions
  • Anthropogenic Risks
    • Nuclear Weapons
    • Climate Change
    • Environmental Damage
  • Future Risks
    • Pandemics
    • Unaligned Artificial Intelligence
    • Dystopian Scenarios

Additionally, he provides his subjective probability estimates that each type of event will occur and result in an existential catastrophe within the next century:

| Existential catastrophe | Chance within next 100 years |
| --- | --- |
| Asteroid or comet impact | ∼ 1 in 1,000,000 |
| Supervolcanic eruption | ∼ 1 in 10,000 |
| Stellar explosion | ∼ 1 in 1,000,000,000 |
| Nuclear war | ∼ 1 in 1,000 |
| Climate change | ∼ 1 in 1,000 |
| Other environmental damage | ∼ 1 in 1,000 |
| “Naturally” arising pandemics | ∼ 1 in 10,000 |
| Engineered pandemics | ∼ 1 in 30 |
| Unaligned artificial intelligence | ∼ 1 in 10 |
| Unforeseen anthropogenic risks | ∼ 1 in 30 |
| Other anthropogenic risks | ∼ 1 in 50 |

Obviously, these estimates depend on complicated assumptions and people can reasonably disagree about the numbers, but I believe we can agree that they are at least qualitatively correct (e.g., asteroid/comet impacts pose relatively low existential risk, and engineered pandemics look relatively dangerous).
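As a rough sanity check on how the individual figures relate to an overall number, here is a minimal sketch that combines them as if the risks were independent. The independence assumption is mine for illustration, not Ord's; his own all-things-considered estimate of total existential risk over the next century is roughly 1 in 6, which is in the same ballpark as this naive calculation.

```python
# Minimal sketch: combine Ord's per-risk estimates as if they were
# independent (an assumption made here for illustration, not by Ord).
risks = {
    "asteroid or comet impact": 1 / 1_000_000,
    "supervolcanic eruption": 1 / 10_000,
    "stellar explosion": 1 / 1_000_000_000,
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "other environmental damage": 1 / 1_000,
    "naturally arising pandemics": 1 / 10_000,
    "engineered pandemics": 1 / 30,
    "unaligned artificial intelligence": 1 / 10,
    "unforeseen anthropogenic risks": 1 / 30,
    "other anthropogenic risks": 1 / 50,
}

survival = 1.0
for p in risks.values():
    survival *= 1 - p  # probability of avoiding this particular catastrophe

total_risk = 1 - survival
print(f"Combined risk this century (independence assumed): {total_risk:.2f}")  # ~0.18
```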

An interesting pattern emerges: the naturally-caused existential catastrophes have the lowest probability, anthropogenic causes appear riskier, and future causes look riskier still. We can also see that the more recently-discovered risks tend to pose a greater threat:

Imagine if the scientific establishment of 1930 had been asked to compile a list of the existential risks humanity would face over the following hundred years. They would have missed most of the risks covered in this book—especially the anthropogenic risks. [Footnote:] Nuclear weapons would not have made the list, as fission was only discovered in 1938. Nor would engineered pandemics, as genetic engineering was first demonstrated in the 1960s. The computer hadn’t yet been invented, and it wasn’t until the 1950s that the idea of artificial intelligence, and its associated risks, received serious discussion from scientists. The possibility of anthropogenic global warming can be traced back to 1896, but the hypothesis only began to receive support in the 1960s, and was only widely recognized as a risk in the 1980s.[1]

In other words:

  • Natural risks that have been present for all of civilization's history do not pose much threat.
  • Risks that only emerged in the 20th century appear more likely.
  • The likeliest risks are those that cannot occur with present-day technology, but might occur within the next century.

As technology improves, the probability of an existential catastrophe increases. If we extrapolate this trend, we can expect to discover even more dangerous risks that as-yet-unknown future technologies will enable. As Ord notes, a century ago the scientific community had not conceived of most of the risks we now consider most significant. Perhaps a century from now, technological advances will enable much more significant risks that we cannot think of today.

Or perhaps there exist existential risks that are possible today, but that we haven't yet considered. We developed nuclear weapons in 1945, but it was not until almost 40 years later that we realized their use could lead to a nuclear winter.[2] We might already have the power to cause an existential catastrophe via some mechanism not on Toby Ord's list; and that mechanism might be easier to trigger, or more likely to occur, than any of the ones we know about.

If we accept this line of reasoning, then looking only at known risks might lead us to substantially underestimate the probability of an existential catastrophe.

Even more worryingly, existential risk might continue increasing indefinitely until an existential catastrophe occurs. If technological growth enables greater risk, and technology continues improving, existential risk will continue increasing as well.[3] Improved technology can also help us reduce risk, and we can hope that the development of beneficial technologies will outpace that of harmful ones. But a naive extrapolation from history does not present an optimistic outlook.
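To make the compounding concrete, here is a toy calculation. The starting risk level and growth rate are invented purely for illustration; the point is only that if per-century risk keeps climbing, the chance of surviving many centuries in a row collapses quickly.

```python
# Toy illustration only: the 1-in-6 starting point and 50%-per-century
# growth rate are arbitrary choices, not estimates of anything.
def survival_probability(per_century_risks):
    """Probability of avoiding catastrophe in every listed century."""
    survival = 1.0
    for risk in per_century_risks:
        survival *= 1 - risk
    return survival

risks = []
risk = 1 / 6
for _ in range(8):
    risks.append(min(risk, 1.0))  # a probability cannot exceed 1
    risk *= 1.5                   # risk grows 50% each century

for n in range(1, len(risks) + 1):
    print(f"Chance of surviving {n} centuries: {survival_probability(risks[:n]):.3f}")
```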

Types of unknown risk

We can make a distinction between two types of unknown risk:

  1. Currently-possible risks that we haven't thought of
  2. Not-yet-possible risks that will become possible with future technology

The existence of the first type of risk leads us to conclude that we face a higher probability of imminent existential catastrophe than we might otherwise think. The second type doesn't affect our beliefs about existential risk in the near term, but it does suggest that we should be more concerned about x-risks over the next century or longer.

We shouldn't necessarily respond to these two types of unknown risks in the same way. For example: To deal with currently-possible unknown risks, we could spend more effort thinking about possible sources of risk, but this strategy probably wouldn't help us predict x-risks that depend on future technology.

Why unknown risks might not matter so much

In this section, I will present a few arguments, in no particular order, that unknown risks don't matter as much as the preceding reasoning might suggest. Of these, the only one I find compelling is the "AI matters most" argument, although it involves sufficiently complex considerations that I do not feel confident about it.

Argument 1: AI matters most

We have some reason to expect superintelligent AI in particular to pose a greater risk than any unknown future technology. If we do develop superintelligent AI, humans will no longer be the most intelligent creatures on the planet. Intelligence has been the driving factor allowing humans to achieve dominance over the world's resources, so we can reasonably expect that a sufficiently intelligent AI would be able to gain control over humanity. If we no longer control our destiny, then on priors, we should not expect a particularly high probability that we realize a positive future[4].

Arguably, unknown risks cannot pose the same level of threat because they will not change who controls the future.

(On the other hand, there could conceivably be some unknown consideration even more important than who controls the future; if we thought of it, we would realize that it matters more than superintelligent AI.)

A sufficiently powerful friendly AI might be able to fix all anthropogenic existential risks and maybe even some natural ones, reducing the probability of existential catastrophe to near zero—thus rendering unknown risks irrelevant. On the other hand, perhaps future technology will introduce some existential risks that not even a superintelligent AI can foresee or mitigate.

Argument 2: There are no unknown risks

Perhaps one could argue that we have already discovered the most important risks, and that our uncertainty only lies in how exactly those risks could lead to existential catastrophe. (For instance, severe climate change could result in unexpected outcomes that hurt civilization much more than we anticipated.) On the outside view, I tend to disagree with this argument, based on the fact that we have continued to discover new existential risks throughout the past century. But maybe upon further investigation, this argument would seem more compelling. Perhaps one could create a comprehensive taxonomy of plausible existential risks and show that the known risks fully cover the taxonomy[5].

Argument 3: New risks only look riskier due to bad priors

In general, the more recently we discovered a particular existential risk, the more probable it appears. I proposed that this pattern occurs because technological growth introduces increasingly-significant risks. But we have an alternative explanation: Perhaps all existential risks are unlikely, but the recently-discovered risks appear more likely due to bad priors plus wide error bars in our probability estimates.

Toby Ord alludes to this argument in his 80,000 Hours interview:

Perhaps you were thinking about asteroids and how big they were and how violent the impacts would be and think, “Well, that seems a very plausible way we could go extinct”. But then, once you update on the fact that we’ve lasted 2000 centuries without being hit by an asteroid or anything like that, that lowers the probability of those, whereas we don’t have a similar way of lowering the probabilities from other things, from just what they might appear to be at first glance.

If we knew much less about the history of asteroid impacts, we might assign a much higher probability to an existential catastrophe due to asteroid impact. More generally, it's possible that we systematically assign way too high a prior probability to existential risks that aren't well-understood, and then end up revising these probabilities downward as we learn more.
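As a minimal sketch of this mechanism (the Beta-Bernoulli model and uniform prior are my illustrative choices, not anything from Ord, and the toy model ignores the anthropic complications mentioned in the notes): a risk with a long, uneventful track record gets pulled far below its naive starting estimate, while a newly recognized risk keeps it.

```python
# Minimal sketch with an illustrative uniform prior: a long uneventful
# track record pulls the estimate down; a newly discovered risk, with no
# track record to update on, keeps its naive prior.
def posterior_mean(alpha, beta, catastrophes, safe_centuries):
    """Posterior mean per-century catastrophe probability under a Beta(alpha, beta) prior."""
    return (alpha + catastrophes) / (alpha + beta + catastrophes + safe_centuries)

# Long-known risk (e.g. asteroids): ~2,000 centuries of history, no catastrophe.
print(f"{posterior_mean(1, 1, catastrophes=0, safe_centuries=2000):.4f}")  # ~0.0005

# Newly discovered risk: nothing to update on, so the naive prior stands.
print(f"{posterior_mean(1, 1, catastrophes=0, safe_centuries=0):.4f}")     # 0.5000
```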

If this line of reasoning is correct, then not only should we not worry much about unknown risks, we probably shouldn't worry much about known risks either, because we systematically overestimate their likelihood.

Will MacAskill says something similar in "Are we living at the most influential time in history?", where he argues that we should assume a low base rate of existential catastrophe, and that we don't have good enough reason to expect the present to look substantially different than history.

I don't find this argument particularly compelling. Civilization today has unprecedented technology and globalization, so I do not believe most of human history serves as a useful reference class. Additionally, we have good reason to believe that certain types of existential catastrophes have a reasonably high probability of occurring. For example, we know that the United States and the Soviet Union came close to starting a nuclear war more than once[6]. And while a nuclear war wouldn't necessarily constitute an existential catastrophe, it would make such a catastrophe dramatically more likely.

(Strictly speaking, MacAskill does not necessarily claim that the current rate of existential risk is low, only that we do not live in a particularly influential time. His argument is consistent with the claim that existential risk increases over time, and will continue to increase in the future.)

Argument 4: Existential risk will decrease in the future due to deliberate efforts

If the importance of x-risk becomes more widely recognized and civilization devotes much more effort to it in the future, this would probably reduce risk across all causes. If we expect this to happen within the next 50-100 years, that would suggest we currently live in the most dangerous period. In previous centuries, we faced lower existential risk; in future centuries, we will make stronger efforts to reduce x-risk. And if we believe that unknown risks primarily come from future technologies, then by the time those technologies emerge, we will have stronger x-risk protection measures in place. (Toby Ord claims in his 80,000 Hours interview that this is the main reason why he didn't assign a higher probability to unknown risks.)

This argument seems reasonable, but it's not necessarily relevant to cause prioritization. Even if we expect deliberate efforts to reduce x-risk to come into play before new and currently-unknown x-risks emerge, that doesn't mean we should deprioritize unknown risks. Our actions today to prioritize unknown risks could be the very reason that such risks will not seriously threaten future civilization.

Do we underestimate unknown x-risks?

  1. Toby Ord estimates that unforeseen risks have a 1 in 30 chance of causing an existential catastrophe in the next century. He gives the same probability to engineered pandemics, a higher probability to AI, and a lower probability to everything else.
  2. Pamlin & Armstrong (2015)[7] (p. 166) estimate a 0.1% chance of existential catastrophe due to "unknown consequences" in the next 100 years. They give unknown consequences an order of magnitude higher probability than any other risk, with the possible exception of AI[8].
  3. Rowe & Beard (2018)[9] provide a survey of existential risk estimates, and they only find one source (Pamlin & Armstrong) that considers unknown risks (Ord's book had not been published yet).

Based on these estimates, Pamlin & Armstrong appear to basically agree with the argument in this essay that unknown risks pose a greater threat than all known risks except possibly AI (although I believe they substantially underestimate the absolute probability of existential catastrophe). Ord appears to agree with the weaker claim that unknown risks matter a lot, but not that they matter more than all known risks. But based on Rowe & Beard's survey (as well as Michael Aird's database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration. And there exists almost no published research on the issue.

Implications

If unknown risks do pose a greater threat than any known risk, this might substantially alter how we should allocate resources for mitigating existential risk, although it's not immediately clear what we should change. The most straightforward implication is that we should expend relatively more effort on improving general civilizational robustness, and less on mitigating particular known risks. But this might not matter much because (1) mitigating known risks appears more tractable and (2) the world already severely neglects known x-risks.

The Global Challenges Foundation has produced some content on unknown risks.[7:1][10] Pamlin & Armstrong[7:2] (p. 129) offer a few high-level ideas about how to mitigate unknown risks:

  1. Smart sensors and surveillance could detect many uncertain risks in the early stages, and allow researchers to grasp what is going on.

  2. Proper risk assessment in domains where uncertain risks are possible could cut down on the risk considerably.

  3. Global coordination would aid risk assessment and mitigation.

  4. Specific research into uncertain and unknown risks would increase our understanding of the risks involved.

On the subject of mitigating unknown risks, Toby Ord writes: "While we cannot directly work on them, they may still be lowered through our broader efforts to create a world that takes its future seriously" (p. 162).

In The Vulnerable World Hypothesis[11], Nick Bostrom addresses the possibility that "[s]cientific and technological progress might change people's capabilities or incentives in ways that would destabilize civilization." The paper includes some discussion of policy implications.

If we're concerned about unknown future risks, we could put money into a long-term existential risk fund that invests it for use in future decades or centuries, deploying it only when the probability of existential catastrophe is deemed sufficiently high.

But note that a higher probability of existential catastrophe doesn't necessarily mean we should expend more effort on reducing the probability. Yew-Kwang Ng (2016)[12] shows that, under certain assumptions, a higher probability of existential catastrophe decreases our willingness to work on reducing it. Much more could be said about this[13]; I only bring it up as a perspective worth considering.

In general, unknown risks look important and under-researched, but they come with no clear prescriptions for how to mitigate them. More work is needed to better estimate the probability of an existential catastrophe due to unknown risks, and to figure out what we can do about it.

Notes


  1. Ord, Toby. The Precipice (p. 162 and footnote 137 (p. 470)). Hachette Books. Kindle Edition. ↩︎

  2. Toby Ord on the precipice and humanity’s potential futures. 80,000 Hours Podcast. Relevant quote:

    [I]n the case of nuclear war, for example, nuclear winter hadn’t been thought of until 1982 and 83 and so that’s a case where we had nuclear weapons from 1945 and there was a lot of conversation about how they could cause the end of the world perhaps, but they hadn’t stumbled upon a mechanism that actually really was one that really could pose a threat. But I don’t think it was misguided to think that perhaps it could cause the end of humanity at those early times, even when they hadn’t stumbled across the correct mechanism yet.

    ↩︎
  3. Technology appears to be growing hyperbolically. If existential risk is roughly proportional to technology, that means we could rapidly approach a 100% probability of existential catastrophe as technological growth accelerates. ↩︎

  4. I have substantial doubts as to whether humanity will achieve a positive future if left to our own devices, but that's out of scope for this essay. ↩︎

  5. I tried to do this, and gave up after I came up with about a dozen categories of existential risk at the level of detail of "extinction via molecules that bind to the molecules in cellular machinery, rendering them useless" or "extinction via global loss of motivation to reproduce". Clearly we can find many risk categories at this level of detail, but it is far too coarse to actually explain the sources of risk or assess their probabilities. A proper taxonomy would require much greater detail, and would probably be intractably large. ↩︎

  6. I find it plausible that the Petrov incident had something like an 80% ex ante probability of leading to nuclear war. The anthropic principle means we can't necessarily treat the non-occurrence of an extinction event as evidence that the probability is low. (I say "can't necessarily" rather than "can't" because it is not clear that the anthropic principle is correct.) ↩︎

  7. Pamlin, Dennis & Armstrong, Stuart. (2015). 12 Risks that threaten human civilisation: The case for a new risk category. ↩︎ ↩︎ ↩︎

  8. Pamlin & Armstrong do not provide a single point of reference for all of their probability estimates, so I have produced one here for readers' convenience. When providing a probability estimate, they give a point estimate, except for AI, where they provide a range, because "Artificial Intelligence is the global risk where least is known." For three existential risks, they decline to provide an estimate.

    | Risk | Probability |
    | --- | --- |
    | Extreme climate change | 0.005% |
    | Nuclear war | 0.005% |
    | Global pandemic | 0.0001% |
    | Ecological catastrophe | N/A |
    | Global system collapse | N/A |
    | Asteroid impact | 0.00013% |
    | Super-volcano | 0.00003% |
    | Synthetic biology | 0.01% |
    | Nanotechnology | 0.01% |
    | AI | 0-10% |
    | Unknown consequences | 0.1% |
    | Bad global governance | N/A |

    ↩︎

  9. Rowe, Thomas and Beard, Simon (2018) Probabilities, methodologies and the evidence base in existential risk assessments. Working paper, Centre for the Study of Existential Risk, Cambridge, UK. ↩︎

  10. Tzezana, Roey. Unknown risks. ↩︎

  11. Bostrom, N. (2019), The Vulnerable World Hypothesis. Glob Policy, 10: 455-476. doi:10.1111/1758-5899.12718 ↩︎

  12. Ng, Y.‐K. (2016), The Importance of Global Extinction in Climate Change Policy. Glob Policy, 7: 315-322. doi:10.1111/1758-5899.12318 ↩︎

  13. As one counter-point: In Ng's model, a fixed investment reduces the probability of extinction by a fixed proportion (that is, it reduces the probability from P to aP for some constant a < 1). But I find it likely that we can reduce existential risk more rapidly than that when it's higher.

    As a second counter-point, Ng's model assumes that a one-time investment can permanently reduce existential risk, which also seems questionable to me (although not obviously wrong). ↩︎

Comments

One possibility is that there aren't many risks that are truly unknown, in the sense that they fall outside of the categories Toby enumerates, for the simple reason that some of those categories are relatively broad and so cover much of the space of possible risks.

Even if that were true, there might still be (fine-grained) risks we haven't thought about within those categories, however - e.g. new ways in which AI could cause an existential catastrophe.

Great post!

But based on Rowe & Beard's survey (as well as Michael Aird's database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration.

I don't think this is true. The Doomsday Argument literature (Carter, Leslie, Gott etc.) mostly considers the probability of extinction independently of any specific risks, so these authors' estimates implicitly involve an assessment of unknown risks. Lots of this writing was before there were well-developed cases for specific risks. Indeed, the Doomsday literature seems to have inspired Leslie, and then Bostrom, to start seriously considering specific risks.

Leslie explicitly considers unknown risks (p.146, End of the World):

Finally, we may well run a severe risk from something-we-know-not-what: something of which we can say only that it would come as a nasty surprise like the Antarctic ozone hole and that, again like the ozone hole, it would be a consequence of technological advances.

As does Bostrom (2002):

We need a catch-all category. It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.

I agree that the literature on the Doomsday Argument involves an implicit assessment of unknown risks, in the sense that any residual probability mass assigned to existential risk after deducting the known x-risks must fall under the unknown risks category. (Note that our object-level assessment of specific risks may cause us to update our prior general risk estimates derived from the Doomsday Argument.)

Still, Michael's argument is not based on anthropic considerations, but on extrapolation from the rate of x-risk discovery. These are two very different reasons for revising our estimates of unknown x-risks, so it's important to keep them separate. (I don't think we disagree; I just thought this was worth highlighting.)

Related to this, I find anthropic reasoning pretty suspect, and I don't think we have a good enough grasp on how to reason about anthropics to draw any strong conclusions about it. The same could be said about choices of priors, e.g., MacAskill vs. Ord where the answer to "are we living at the most influential time in history?" completely hinges on the choice of prior, but we don't really know the best way to pick a prior. This seems related to anthropic reasoning in that the Doomsday Argument depends on using a certain type of prior distribution over the number of humans who will ever live. My general impression is that we as a society don't know enough about this kind of thing (and I personally know hardly anything about it). However, it's possible that some people have correctly figured out the "philosophy of priors" and that knowledge just hasn't fully propagated yet.

Thanks — I agree with this, and should have made clearer that I didn't see my comment as undermining the thrust of Michael's argument, which I find quite convincing.

Thanks for this perspective! I've heard of the Doomsday Argument but I haven't read the literature. My understanding was that the majority view is that the Doomsday Argument is wrong; we just haven't figured out why it's wrong. I didn't realize there was substantial literature on the problem, so I will need to do some reading!

I think it is still accurate to claim that very few sources have considered the probability of unknown risks relative to known risks. I'm mainly basing this off the Rowe & Beard literature review, which is pretty comprehensive AFAIK. Leslie and Bostrom discuss unknown risks, but without addressing their relative probabilities (at least Bostrom doesn't, I don't have access to Leslie's book right now). If you know of any sources that address this that Rowe & Beard didn't cover, I'd be happy to hear about them.

Note that it's not just the Doomsday Argument that may give one reason for revising one's x-risk estimates beyond what is suggested by object-level analyses of specific risks. See this post by Robin Hanson for an enumeration of the relevant types of arguments.

I am puzzled that these arguments do not seem to be influencing many of the most cited x-risk estimates (e.g. Toby's). Is this because these arguments are thought to be clearly flawed? Or is it because people feel they just don't know how to think about them, and so they simply ignore them? I would like to see more "reasoning transparency" about these issues.

It's also worth noting that some of these "speculative" arguments would provide reason for revising not only overall x-risks estimates, but also estimates of specific risks. For example, as Katja Grace and Greg Lewis have noted, a misaligned intelligence explosion cannot be the Great Filter. Accordingly, insofar as the Great Filter influences one's x-risk estimates, one should shift probability mass away from AI risk and toward risks that are plausible Great Filters.

In line with Matthew's comment, I think it's true that several sources discuss the possibility of unknown risks or discuss the total risk level (which should presumably include unknown risks). But I'm also not aware of any sources apart from Ord and Pamlin & Armstrong which give quantitative estimates of unknown risk specifically. (If someone knows of any, please add them to my database!)

I'm also not actually sure if I know of any other sources which even provide relatively specific qualitative statements about the probability of unknown risks causing existential catastrophe. I wouldn't count the statements from Leslie and Bostrom, for example.

So I think Michael's claim is fair, at least if we interpret it as "no other sources appear to have clearly addressed the likelihood of unknown x-risks in particular, which implies that most others do not give unknown risks serious consideration."

I agree with all the points you make in the "Implications" section. I also provide some relevant ideas in Some thoughts on Toby Ord’s existential risk estimates, in the section on how Ord's estimates (especially the relatively high estimates for "other" and "unforeseen" risks) should update our career and donation decisions.

To deal with currently-possible unknown risks, we could spend more effort thinking about possible sources of risk, but this strategy probably wouldn't help us predict x-risks that depend on future technology.

I was confused by this claim. Wouldn't you put risks from AI, risks from nanotechnology, and much of biorisk in the category of "x-risks that depend on future technology", rather than currently possible yet unrecognised risks? And wouldn't you say that effort thinking about possible sources of risk has helped us identify and mitigate (or strategise about how to mitigate) those three risks? If so, I'd guess that similar efforts could help us with similar types of risks in future, as well.

If we expect that deliberate efforts to reduce x-risk will likely come into play before new and currently-unknown x-risks emerge, that doesn't mean we should deprioritize unknown risks. Our actions today to prioritize unknown risks could be the very reason that such risks will not seriously threaten future civilization.

I agree. I'd also add that I think a useful way to think about this is in terms of endogenous vs exogenous increases in deliberate efforts to reduce x-risk, where the former essentially means "caused by people like us, because they thought about arguments like this", and the latter essentially means "caused by other people or other events" (e.g., mainstream governments and academics coming to prioritise x-risk more for reasons other than the influence of EA). The more we expect exogenous increases in deliberate efforts to reduce x-risk, the less we should prioritise unknown risks. The same is not true with regards to endogenous increases, because deprioritising this area makes the endogenous increases unlikely.

(This distinction came to my mind due to SjirH making a similar distinction in relation to learning about giving opportunities.)

This is similar to discussions about how "sane" society is, and how much we should expect problems to be solved "by default".

We developed nuclear weapons in 1945, but it was not until almost 40 years later that we realized their use could lead to a nuclear winter.

You seem to be using this data point to support the argument "There are many risks we've discovered only recently, and we should be worried about these and about risks we have yet to discover." That seems fair.

But I think that, if you're using that data point in that way, it's probably worth also more explicitly noting the following: People were worried about existential catastrophe from nuclear war via mechanisms that we now know don't really warrant concern (as alluded to in the Ord quote you also provide). So there seems to be a companion data point which supports the argument "In the past, many risks that were recently discovered later turned out to not be big deals, so we should be less worried about recently discovered or to-be-discovered risks than we might otherwise think."

(And I'd say that both data points provide relatively weak evidence for their respective arguments.)

Thanks for this post - I think this is a very important topic! I largely agree that this argument deserves substantial weight, and that we should probably think more about unknown existential risks and about how we should respond to the possibility of them.

In general, the more recently we discovered a particular existential risk, the more probable it appears. I proposed that this pattern occurs because technological growth introduces increasingly-significant risks. But we have an alternative explanation: Perhaps all existential risks are unlikely, but the recently-discovered risks appear more likely due to bad priors plus wide error bars in our probability estimates.

(I'm not sure if the following is disagreeing with you or just saying something separate yet consistent)

I agree that recently discovered risks seem the largest source of existential risk, and that two plausible explanations of that are "technological growth introduces increasingly significant risks" and "we have bad priors in general and wider error bars for probability estimates about recently discovered risks than about risks we discovered earlier". But it also seems to me that we can believe all that and yet still think there's a fairly high chance we'll later realise that the risks we recently discovered are less risky than we thought, and that future unknown risks in general aren't very risky, even if we didn't have bad priors.

Essentially, this is because, if we have a few decades or centuries without an existential catastrophe (and especially if this happens despite the development of highly advanced AI, nanotechnology, and biotech), that'll represent new evidence not just about those particular risks, but also about how risky advanced technologies are in general and how resilient to major changes civilization is in general.

Thousands of years during which asteroids could've wiped humanity out, but didn't, update us towards thinking extinction risk from asteroids is low. I think that, in the same way, developing more things in the reference class of "very advanced tech with hard-to-predict consequences" without an existential catastrophe occurring should update us towards thinking existential risk from developing more things like that is relatively low.

I think we can make that update in 2070 (or whatever) without saying our beliefs in 2020 were foolish, given what we knew. And I don't think we should make that update yet, as we haven't yet seen that evidence of "general civilizational stability" (or something like that).

Does that make sense?

Admin: The "Pamlin & Armstrong (2015)" link was broken — I updated it.
