
Abstract

Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk: the importance of treating existential risk as an intergenerational coordination problem; a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation; renewed importance of population dynamics, including the dynamics of digital minds; and a novel form of the cluelessness challenge to longtermism.

Introduction

Suppose you are an altruist. You want to do as much good as possible with the resources available to you. What might you do? One option is to address pressing short-term challenges. For example, GiveWell (2021) estimates that $5,000 spent on bed nets could save a life from malaria today. 

Recently, a number of longtermists (Greaves and MacAskill 2021; MacAskill 2022b) have argued that you could do much more good by acting to mitigate existential risks: risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, you might work to regulate chemical and biological weapons, or to reduce the threat of nuclear conflict (Bostrom and Ćirković 2011; MacAskill 2022b; Ord 2020). 

Many authors argue that efforts to mitigate existential risk have enormous value. For example, Nick Bostrom (2013) argues that even on the most conservative assumptions, reducing existential risk by just one-millionth of one percentage point would be as valuable as saving a hundred million lives today. Similarly, Hilary Greaves and Will MacAskill (2021) estimate that early efforts to detect potentially lethal asteroid impacts in the 1980s and 1990s had an expected cost of just fourteen cents per life saved. If this is right, then perhaps an altruist should focus on existential risk mitigation over short-term improvements.

There are many ways to push back here. Perhaps we might defend population-ethical assumptions such as neutrality (Narveson 1973; Frick 2017) that cut against the importance of creating happy people. Alternatively, perhaps we might introduce decision-theoretic assumptions such as risk aversion (Pettigrew 2022), ambiguity aversion (Buchak forthcoming) or anti-fanaticism (Monton 2019; Smith 2014) that tell against risky, ambiguous and low-probability gambles to prevent existential catastrophe. We might challenge assumptions about aggregation (Curran 2022; Heikkinen 2022), personal prerogatives (Unruh forthcoming), and rights used to build a deontic case for existential risk mitigation. We might discount the well-being of future people (Lloyd 2021; Mogensen 2022), or hold that pressing current duties, such as reparative duties (Cordelli 2016), take precedence over duties to promote far-future welfare.

These strategies set themselves a difficult task if they accept the longtermist’s framing on which existential risk mitigation is not simply better, but orders of magnitude better than competing short-termist interventions. Is it really so obvious that we should not save future lives at an expected cost of fourteen cents per life? While some moves, such as neutrality, may carry the day against even astronomical numbers, many of the moves on this list would be bolstered when joined with a competing maneuver: questioning the longtermist’s moral mathematics.

In this paper, I argue that many leading models of existential risk mitigation systematically neglect morally relevant considerations in determining the value of existential risk mitigation. This has two effects. First, debates about the value of existential risk mitigation are mislocated, because many of the most important parameters are neither modeled nor discussed. Second, the value of existential risk mitigation is inflated by many orders of magnitude. I look at three mistakes in the moral mathematics of existential risk: mishandling of cumulative risk (Section 3), background risk (Section 4), and population dynamics (Section 5). This will help us to gain a better understanding of the factors relevant to valuing existential risk mitigation. And under many assumptions, once these mistakes are corrected, the value of existential risk mitigation will be far from astronomical.

Reflecting on these mistakes in the moral mathematics of existential risk raises at least four classes of positive lessons for longtermism and the study of existential risk, discussed in Section 6. There, we will see the importance of treating existential risk mitigation as a difficult intergenerational coordination problem (Section 6.1); a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation (Section 6.2); renewed importance of population dynamics, including the demographics of digital minds (Section 6.3); and a novel form of the cluelessness challenge to longtermism (Section 6.4). But first, let us begin with some clarificatory remarks (Section 2).

Read the rest of the paper

Comments

Hi David,

Thanks for sharing this.

My main reaction is that I was puzzled by the framing. It is obviously an allusion to Parfit's 'Five Mistakes in Moral Mathematics'. But there are major differences. Parfit was objecting to pieces of maths that are embedded in our common-sense understanding of morality, such as the share-of-the-total view. He argued that the maths of morality is different to that. You are complaining about three modelling assumptions about the empirics of risk over time and population over time. You don't present any disagreement with the moral mathematics (which is just a big expected value calculation over the total wellbeing of all future people). And you don't even suggest that there were mistakes made in the modelling, just that they may rely on assumptions that weren't foregrounded and thus be misleading.

I liked the work you did on foregrounding those assumptions, but felt a bit let down by the framing. I expected a Parfit-mark-II piece that showed how our commonsense understanding of the ethics of longtermism relied on mistaken moral assumptions, but instead found a piece that mainly just suggested different modelling assumptions (and in my view, assumptions that are more misleading than those in the pieces you critique). 

The framing also sounds to my ear to be a bit insulting to your peers. To some extent, any paper in moral philosophy where the author disagrees with another person could be reframed to be about a mistake in the other person's moral reasoning, but I'm glad authors don't typically choose that frame. Instead, they say that opponent said P, while this piece argues not P, or that the opponent assumed Q, while here are some reasons that Q is not a safe assumption. This keeps the focus on the content.

This is a lot of text re a piece's framing, but I wanted to lay it out because (at least for me) the framing really does distract from the content.

(I'll split further reactions on particular parts into separate comments)

Regarding the 'first mistake', you correctly show that survival of a species for a billion years requires reaching a low per-period level of risk (averaging roughly 1 in a billion per year). I don't disagree with that and I doubt Bostrom would either. No complex species has yet survived so long, but that is partly because there have been fewer than a billion years since complex life began. But there are species (or at least families) that have survived almost the whole time, such as the Nautilus (which has survived 500 million years). So risk levels comparable to 1 in a billion per year do occur. For Bostrom's modelling of the EV of risk reduction to work, he just needs there to be at least a small chance (say 1 in 1 million) that risk declines to such a level or beyond. That sounds eminently plausible to me, and my best guess of this probability would be much higher.
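The per-period arithmetic behind this point can be sketched in a few lines. This is only an illustration of the relationship between constant annual risk and cumulative survival probability, not a model from the paper or the comment:

```python
# If a species faces a constant extinction risk r each year, its chance of
# surviving n consecutive years is (1 - r)**n.

def survival_probability(annual_risk: float, years: int) -> float:
    """Probability of surviving `years` years at a constant annual risk."""
    return (1.0 - annual_risk) ** years

# At 1-in-a-billion annual risk, surviving a billion years is quite likely
# (roughly e^-1, about 37%):
p_low = survival_probability(1e-9, 10**9)

# At a seemingly modest 1-in-a-million annual risk, survival over a billion
# years is essentially impossible (roughly e^-1000):
p_high = survival_probability(1e-6, 10**9)
```

This is why billion-year survival requires average per-year risk near 1 in a billion: cumulative survival decays exponentially in the per-period rate.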

You say that: "there is a clear sense in which the drop in existential risk that Bostrom envisions is not small, but instead very large". But note that this is not the drop in existential risk that need be caused by the intervention Bostrom is evaluating. He relies on there being at least a slender possibility that risk levels fall to something like those of the safest species on Earth, but the intervention doesn't need to bring that about.

So on this 'first mistake', I agree that it is often also useful to think of things in per-period risk, and that this could provide a sanity check. But in this case, I think Bostrom's estimate passes that sanity check, so don't think he has made any kind of mistake here.

Regarding the 'second mistake', I don't see how it is very different from the first one. If there remains high average per-period risk, then the expected benefits of avoiding near-term risk are indeed greatly lowered — from 'overwhelming' to just 'large'. In effect, it starts to approach the level of risk to currently existing people (which is sometimes argued to be so large already that we don't need to talk about future generations).

But it doesn't seem unreasonable to me for Millet and Snyder-Beattie to model things with an expected lifespan for humanity equal to that of a typical species. It is true that if risk stays high, then we won't get that, but risk staying high would be a more contentious assumption. And uncertainty about the final rate tends to increase the expectation. e.g. If there was even a 1 in 400 chance that we last as long as the Nautilus, then that alone would make M & SB's assumption an underestimate. Again, I can't see any 'mistake' here.
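The point that a slender chance of an extreme duration can dominate the expectation is easy to check. The figures below are toy numbers of my own (only the 1-in-400 Nautilus chance comes from the comment above; the lifespans are rough round figures):

```python
typical_species_lifespan = 1e6    # years; a common rough figure for mammal species
nautilus_like_lifespan = 5e8      # years; roughly how long the Nautilus lineage has lasted
p_extreme = 1 / 400               # the slender chance of a Nautilus-like duration

# Even ignoring every other possible outcome, this one tail scenario
# contributes 0.0025 * 500 million = 1.25 million expected years,
# already exceeding the typical-species figure on its own:
tail_contribution = p_extreme * nautilus_like_lifespan
```

So an expected-lifespan estimate pegged to a typical species can be an underestimate even if almost all of the probability mass sits on much shorter durations.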

I was actually much more intrigued by your comment about a systematic overestimate due to an implicit assumption of independence between the variables they estimate. I'd have loved to see that developed instead.

There is also room for an interesting critique of EV of risk reduction as the best measure. Your arguments generally put pressure on the idea that the estimate of M & SB (or other people's duration estimates) are typical of the probability distribution. That is, they might be OK as estimates of the expectations (means), but they get much of that EV from the extreme tail of the distribution. And we might have Pascalian concerns about cases like that, where there is a decent case that we shouldn't compare prospects like this by their expectations.

'But risk staying high would be a more contentious assumption.'  Why? I take it this is really the heart of the disagreement, so it would be good to hear what makes you think this. 

I thought section 6.1 on 'Cumulative risk and intergenerational coordination' was very good. Many people (including those promoting action on existential risk) neglect how important it is that we get risk down and then keep it down. This is a necessary part of what I call existential security in my section of The Precipice devoted to our longterm strategy. And it is not easy to achieve. One strategy I talk about is implementing a constitution for humanity, committing future generations to work within their own diminishing share of a finite existential risk budget. 

I broadly agree with section 6.2 on dialectical flips, which is why I made broadly the same point in The Precipice (p 275):

"This is contrary to our intuitions, as people who estimate risk to be low typically use this as an argument against prioritizing work on existential risk."

I was thus a bit surprised to see my book cited in that section as something endorsing the Time of Perils, but not as something that had made the same point you are making about why high per-period risk reduces the EV of work on single-period risk reduction.

Re the third 'mistake', there is a long history of thinking that carrying capacity is a decent proxy for long-term population. Is it a good proxy? Probably not in many situations. Is it better than extrapolating out the current growth dynamics for millions of years? Probably. My guess is that it is a simple defensible rough model here. And by laying out separate estimates for different scales being reached, there is also a pretty good sensitivity analysis. I think you are right that this could be improved by adding cases of permanent population collapse to the sensitivity analysis. But it won't change the EV much. So again, I wonder if a superior critique would be: these estimates are more or less correct in EV terms, but we should be suspicious of EV.

Thanks Toby! Comments much appreciated. 

This seems to be basically the same content as the posts that were shared here two days ago (here, here) with slightly less snark?

Yes, the posts are from David's blog and are an adaptation of this paper. I've added this post to my sequence as well.

There's a typo in the title of this post ("Three mistakes in the moral mathematics of existential risk")

Good catch Eevee - thanks! I hadn't caught this when proofreading the upload on the website. (Not our operations team's fault. They've been absolutely slammed with conference and event organizing recently, and I pushed them to rush this paper out so it would be available online).

Fixed. Thanks for flagging this!

It's also wrong on the website

Thanks for letting us know. Should be fixed now.
