It’s commonly held within the EA community that X-risks constitute the most pressing issue of our time, and that the first order of business is preventing the extinction of humanity. We EAs thus expend much of our effort and resources on things like preventing pandemics and ensuring the responsible development of AI. There are arguments suggesting that this may not be the best use of our resources, such as those based on the person-affecting view of population ethics; these arguments are addressed in works like The Precipice and the EA Forum post “Existential risk as common cause”. However, what truly frightens me is the prospect that the human race ought to go extinct, and that we are causing astronomical harm by fighting extinction.

    Most of the arguments that suggest this are fringe views, but just because they are unpopular does not mean they are false. Even if the chances of them being true are slim, the harm we would cause if they were true is so great that we must take them seriously and reduce our uncertainty as much as possible. However, addressing these arguments seems to be neglected within the EA community; the Global Priorities Institute is perfectly suited to this sort of research problem, and yet it has released only a single paper on the topic (“Do not go gentle: why the Asymmetry does not support anti-natalism” by Andreas Mogensen).

    To help address this, I have compiled a list of all the plausible arguments I’ve found which suggest that saving humanity from extinction is morally wrong. The list may not be exhaustive, so please comment below if I’ve missed any. Hopefully our community can perform research to address these arguments and determine whether safeguarding the human race truly is the best thing for us to do.

    1. Anti-natalism

    This is the view that to be brought into existence is inherently harmful; when parents give birth to a child, they are indirectly hurting said child by subjecting them to the harms of life. One of the most comprehensive defenses of this view is Better Never to Have Been by David Benatar. The implication here is that by preventing human extinction, we allow the creation of potentially trillions of people, causing unimaginable harm.

    2. Negative Utilitarianism

    This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain as opposed to cultivating joy and pleasure; making someone happy is all well and good, but if you cause them to suffer then the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is significant.

    3. Argument from S-Risks

    S-Risks are a familiar concept in the EA community, defined as any scenario in which an astronomical amount of suffering is caused, potentially outweighing any benefit of existence. According to this argument, the human race threatens to create such scenarios, especially with more advanced AI and brain mapping technology, and for the sake of these suffering beings we ought to go extinct now and avoid the risk.

    4. Argument from “D-Risks”

    I am coining this term, short for “destruction risks”, to express a concept analogous to S-Risks. If an S-Risk is a scenario in which astronomical suffering is caused, then a D-Risk is a scenario in which astronomical destruction is caused. For example, if future humans were to develop a relativistic kill vehicle (a near-light-speed missile), we could use it to destroy entire planets that potentially harbor life (including Earth). According to this argument, we must again go extinct for the sake of these potentially destroyed lifeforms.

    These four arguments, I feel, are the most plausible and the most in need of empirical and moral research to either build up or refute. The next two, however, are the ones most frequently cited by actual proponents of human extinction.

    5. Argument from Deep Ecology

    This is similar to the Argument from D-Risks, albeit more down to Earth (pun intended), and is the main stance of groups like the Voluntary Human Extinction Movement. Human civilization has already caused immense harm to the natural environment, and will likely not stop anytime soon. To prevent further damage to the ecosystem, we must allow our problematic species to go extinct.

    6. Retributivism

    This is simply the argument that humanity has done terrible things, and that we, as a species, deserve to go extinct as punishment. Atrocities that warrant this punishment include the Holocaust, slavery, and the World Wars.


    The purpose of this post is not to argue one way or the other, but simply to explore the possibility that we are on the wrong side of this issue. If the more common view is correct and human extinction is a bad thing, then the EA community need not change; if human extinction is, in fact, a good thing, then the EA community must undergo a radical shift in priorities. Given this possibility, we should make some effort to reduce our uncertainties.

Comments

I agree there should be more reflection (moral or factual) into the assumption that we should prioritize preventing human extinction. :)

That being said, we should emphasize that some of the risk factors for extinction also seem to be risk factors for more suffering and s-risks. This suggests that negative utilitarians, as well as s-risk reducers, wouldn't support shifting focus away from work on those risk factors unless there are better opportunities for impact. Examples of such risk factors include increased conflict, polarization, and the unsafe development of AI, especially development that neglects the cooperative measures needed to prevent potential conflict between different AIs or their operators.

Of course, this might not apply to all risk factors for extinction. Still, s-risk reducers and suffering reducers might think that it's bad to (intentionally or otherwise) act in a way that results in people trying to bring extinction about (see https://www.utilitarianism.com/nu/nufaq.html#3.2 ), which raises the question of precisely how much emphasis to put on this as a community.

Further considerations include whether other civilizations (e.g. aliens) exist, and if so, how many. This also makes it unclear what anti-natalism suggests: if the focus is on fewer births, then we need to find out whether human civilization would increase or decrease the total number of future births compared to alternative scenarios where, e.g., aliens control the resources humans would have controlled.

Also remember that an existential risk (x-risk) is a "risk of an existential catastrophe, i.e. one that threatens the destruction of humanity’s longterm potential". This means existential risks aren't the same as extinction risks. S-risks that destroy humanity's longterm potential are also x-risks.

An argument against advocating human extinction is that cosmic rescue missions might eventually be possible. If the future of posthuman civilization converges toward utilitarianism, and posthumanity becomes capable of expanding throughout and beyond the entire universe, it might be possible to intervene in far-flung regions of the multiverse and put an end to suffering there.

Excellent point. Playing devil's advocate, one might be skeptical that humanity is good enough to perform these "cosmic rescue missions", either out of cruelty/indifference or simply because we will never be advanced enough. Still, it's a good concept to keep in mind.

This question has been considered to some extent by people in the community already. Consider the following posts:

It would also be worth pointing out that most people in this community who hold views that can be categorized as negative utilitarian or suffering-focused don't endorse bringing about human extinction, e.g.:

I am not claiming that these posts/articles have settled the debate, but I think any post on a sensitive topic like this would benefit from including such content.

Thanks for these resources!

In relation to purely suffering-focused views, I also argue here that people may sometimes jump to hasty conclusions about human extinction due to certain forms of misconceived (i.e. non-impartial) consequentialism, and argue (drawing on the linked resources) that an impartial approach would imply strong heuristics of cooperation and nonviolence.

Interesting, thank you for sharing. A lot of this debate centers around our interpretations of consequentialism.

The big one I can think of, which is related to some of the ones you mention, is leximin or a strong enough prioritarianism. The worst-off beings that human persistence would cause to exist are likely to live net negative lives, possibly very strongly net negative lives if we persist long enough, and on theories like these, benefits to those beings (like preventing their lives) count for vastly more than benefits to better-off beings (like giving those beings good lives rather than no lives). I don’t endorse this view myself, but I think it is the argument that most appeals to me in the moods when I am most sympathetic to extinction. When I sort of inhabit a Tomasikian suffering empathy exercise, and imagine the desperation of the cries of the very worst-off being from the future, calling back to me, I can be tempted to decide that rescuing this being in some way is most of what should matter to me.

What's a Tomasikian suffering empathy exercise? I'm not familiar with that term.

"Tomasikian" refers to the Effective Altruist blogger Brian Tomasik, who is known for pioneering an extremely bullet-biting version of "suffering-focused ethics" (roughly negative utilitarianism, though from my readings, he may also mix some preference satisfactionism and prioritarianism in as well). The suffering empathy exercises I'm referring to aren't really a specific thing, but more sort of the style he uses when writing about suffering to try to get people to understand his perspective on it. Usually this involves describing real world cases of extreme suffering, and trying to get people to see the desperation one would feel if they were actually experiencing it, and to take that seriously, and inadequacy of academic dismissals in the face of it. A sort of representative quote:

"Most people ignore worries about medical pain because it's far away. Several of my friends think I'm weird to be so parochial about reducing suffering and not take a more far-sighted view of my idealized moral values. They tend to shrug off pain, saying it's not so bad. They think it's extremely peculiar that I don't want to be open to changing my moral perspective and coming to realize that suffering isn't so important and that other things matter comparably. Perhaps others don't understand what it's like to be me. Morality is not an abstract, intellectual game, where I pick a viewpoint that seems comely and elegant to my sensibilities. Morality for me is about crying out at the horrors of the universe and pleading for them to stop. Sure, I enjoy intellectual debates, interesting ideas, and harmonious resolutions of conflicting intuitions, and I realize that if you're serious about reducing suffering, you do need to get into a lot of deep, recondite topics. But fundamentally it has to come back to suffering or else it's just brain masturbation while others are being tortured."

The relevant post:

https://reducing-suffering.org/the-horror-of-suffering/

Interesting, I'll have to look into that. Thanks for the clarification.

To add to the other comment, (to my knowledge) Brian Tomasik coined the terms s-risks and suffering-focused ethics, established foundational research into the problem of wild animal suffering, and had a part in co-founding two existing organizations that have a strong focus on reducing s-risks, i.e. the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering (CRS).

Suffering-focused ethics refers to a broad set of moral views focused on preventing suffering (e.g. some Buddhist ethics might fall under this category). 

While Brian Tomasik's writings are written from a "suffering-focused perspective", most of them are in-depth analyses of how to reduce suffering rather than ethical theory, which makes the work potentially relevant even to someone who isn't as suffering-focused as he is but has at least some concern for suffering. For the moral views themselves, another researcher, Magnus Vinding, has written a book on suffering-focused ethics.

None of the researchers/research organizations I mention above endorse bringing about human extinction. In general, how to best reduce suffering is (rightfully, in my view) seen as quite complex in this community (as another comment hinted at).

Good clarifications, endorsed.

To be clear, I wasn't trying to imply that Tomasik supports extinction, just that, if I have to think about the strongest case against preventing it, it's the sort of Tomasik on my shoulder that is speaking loudest.

I don't endorse it, but a-risks could be added: the risk that future human space colonisation will kill alien civilizations or prevent their appearance.

Seems like a generalization of d-risks.

I would agree. Still, some of the other commenters have pointed out that alien civilizations can have interesting consequences for the Anti-natalist, Negative Utilitarian, and S-Risk arguments.

de-humanizing risk? digital x-risk? dolphin-takeover risk?

"destruction risks", as defined in the post.

Ah, right! 🤦‍♂️

Yes. Also, l-risks should be added to the list of letter-risks: the risk that all life will go extinct if humans continue to do what they are doing to the ecosystem - this is covered in section 5 of the post.

5. Argument from Deep Ecology

    This is similar to the Argument from D-Risks, albeit more down to Earth (pun intended), and is the main stance of groups like the Voluntary Human Extinction Movement. Human civilization has already caused immense harm to the natural environment, and will likely not stop anytime soon. To prevent further damage to the ecosystem, we must allow our problematic species to go extinct.

This seems inconsistent with anti-natalism and negative utilitarianism. If we ought to focus on preventing suffering, why shouldn't anti-natalism also apply to nature? It could be argued that reducing populations of wild animals is a good thing, since it would reduce the amount of suffering in nature, following the same line of reasoning as anti-natalism applied to humans.

Good point, some of these arguments do contradict one another. I suppose if human extinction really were a good thing, it would be because of one or a few of these arguments, not all of them.

Forgive me if what I'm about to suggest is implicit elsewhere, but let's look at what I see as a key premise: humans "being" is a good thing. It's easy to go from there to look at why humans being may not be a good thing, but why not go the other direction? What if there is (a) the being of something else, another form of life/consciousness/whatever, that is a "better being" than "humans being", and (b) the existence of humans is somehow prejudicial or detrimental to the existence and prosperity of that "better being"? This may be something already in existence or expected to be in existence, but with that premise, couldn't you argue that human extinction would itself be a good by perpetuating a better "being"?

More succinctly put: what if getting humans out of the way gives rise to something better than humans? This can easily devolve into claims of racial superiority within humanity and other assorted BS, but I'm thinking of homo sapiens versus something different (maybe call them neo sapiens, borrowing from what I recall was a mediocre cartoon called Exosquad): either another step on the evolutionary ladder, Skynet, etc.

That's a really good point, it's similar to but distinct from the argument from Deep Ecology. I may add it to the article.

Cool. Happy to expound, if useful.

2. Negative Utilitarianism

    This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain as opposed to cultivating joy and pleasure; making someone happy is all well and good, but if you cause them to suffer then the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is significant.

Taking that further

It might be that the suffering that would happen along the way to our achievement of a pain-free, joyous existence will outweigh the benefits we gain. Also, our struggle for such a joyous existence, and the suffering that happened along the way, might be a waste, because nonexistence is actually not that bad.

Moral presumption

It seems that an argument from moral presumption can be made against preventing extinction. We already know there is great suffering in the world; we do not yet know whether we can end suffering and create a joyous existence. Therefore, it might be more prudent to go extinct.


2. Negative Utilitarianism

    This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain as opposed to cultivating joy and pleasure; making someone happy is all well and good, but if you cause them to suffer then the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is significant.

    3. Argument from S-Risks

    S-Risks are a familiar concept in the EA community, defined as any scenario in which an astronomical amount of suffering is caused, potentially outweighing any benefit of existence. According to this argument, the human race threatens to create such scenarios, especially with more advanced AI and brain mapping technology, and for the sake of these suffering beings we ought to go extinct now and avoid the risk.

    4. Argument from “D-Risks”

    I am coining this term, short for “destruction risks”, to express a concept analogous to S-Risks. If an S-Risk is a scenario in which astronomical suffering is caused, then a D-Risk is a scenario in which astronomical destruction is caused. For example, if future humans were to develop a relativistic kill vehicle (a near-light-speed missile), we could use it to destroy entire planets that potentially harbor life (including Earth). According to this argument, we must again go extinct for the sake of these potentially destroyed lifeforms.

Counterargument relevant to all three

We already know that there are many species on Earth, and new ones are evolving all the time. If we let ourselves go extinct, species will continue to evolve in our absence. It is possible that these species, whether non-human or new forms of human, will evolve to live lives of even more suffering and destruction than we are currently experiencing. We already know that we can create net positive lives for individuals, so we could probably create a species that has virtually zero suffering in the future. Therefore, it falls to us to bring this about.

What's more, the fact that we have the self-awareness to consider the possible utility of our own species going extinct might indicate that we are the species empowered to ensure that existing human and nonhuman species, as well as future species, will be ones that don't suffer.

Maybe we could destroy all species and their capacity to evolve, thus avoiding the dilemma described in the first paragraph above. But then we'd need to be certain that all other species are better off extinct.

We already know that we can create net positive lives for individuals

Do we know this? Thomas Ligotti would argue that even most well-off humans live in suffering, and it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong).

That is a good point. I was actually considering that when I was making my statement. I suspect self-delusion might be at the core of the belief of many individuals who think their lives are net positive. In order to adapt and avoid great emotional pain, humans might self-delude when faced with the question of whether their life is overall positive.

Even if it is not possible for human lives to be net positive, my first counterargument would still hold, for two different reasons.

First, we'd still be able to improve the lives of other species.

Second, it would still be valuable to prevent the much more negative lives that might come about if other kinds of humans were allowed to evolve in our absence. It might be difficult to ensure that our extinction was permanent: even if we took care to make ourselves extinct in such a way that we somehow wouldn't come back, it's possible that within, say, a billion years, the universe would change in such a way as to make the spark of life that leads to humans happen again. Cosmological and extremely long-term processes might undo any precautions we took.

Alternatively, maybe different kinds of humans that would evolve in our absence would be more capable of having positive lives than we are. 


I don't think I'm familiar with anything by Thomas Ligotti. I'll look into his work.

Note, however, that (a) Ligotti isn't a philosopher himself; he just compiled some pessimistic outlooks, representing them the way he understood them, and (b) his book is very dark and can be too depressing even for another pessimist. I mean, proceed with caution and take care of your mental well-being while getting acquainted with his writings; he's a reasonably competent pessimist, but also a renowned master of, for lack of a better word, horror-like texts :)

Thank you for that reminder. As with many things in philosophy, this discussion can wander into some pretty dark territory, and it's important to take care of our mental health.

I read this post about Thomas Ligotti on LessWrong. So far, it hasn't been that disconcerting for me. I think that because I read a lot of Stephen King novels and some other horror stories when I was a teenager, I would be able to read more of his thoughts without being disconcerted.

If I ever find it worthwhile to look more into pessimistic views on existence, I will remember his name.

One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.

Side note: I love that "paperclipping" is a verb now.

That's an interesting way of looking at it. That view seems nihilistic, and it could lead to hedonism: if our only purpose is to make sure we completely destroy ourselves and the universe, nothing really matters.

I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important to securing the best possible fate of the universe (that being a quick and painless destruction, or whatever), so just doing what feels best in the moment probably would not be a great strategy for a True Believer in this hypothetical.

Great points. If you assume a negative utilitarian worldview, you can make strong arguments both for and against human extinction.

I'm a fairly devoted anti-natalist, and I have to say that this ethic has been misunderstood--deeply, woefully so. Firstly, it has always been universal, all sentience included (and so far, it seems that the more humans are born, the fewer other animals are born). Benatar only focused on our own species because there were reasons to discuss this area in particular. Secondly, human extinction is a purely theoretical scenario, and one that isn't even the core of anti-natalism.

The core is just the negative value of coming into existence, and the basis is suffering reduction by means of prevention. It isn't some cult hellbent on human extinction (even though some such groups may emerge out there to parasitize on actual ethics). We wouldn't feel as devastated by the prospect of extinction, but to assume that ending the existence of humanity is our obligation? No, that would be too much--at least for a contemporary anti-natalist who stays true to the suffering-focused nature of the ethic and is a tad familiar with contemporary SFE research in general (as opposed to someone who is mostly into justifications for their everyday hate). There's this immense ocean of non-human suffering around us, and if not we, then who would take care of it?

Thank you for that clarification, I apologize if I misrepresented the movement as a whole. The main reason I listed anti-natalism was because you could argue from that perspective that stopping human extinction is bad, not that anti-natalism necessarily implies that. The same goes for virtually all of these arguments.

If you could push a button and all life in the universe would immediately, painlessly, and permanently halt, would you push it?

Would you cleanse the whole universe with that utilitronium shockwave, which is a no less relevant thought experiment pertaining to classical utilitarianism (CU)?

Excellent question! I wouldn’t, but only because of epistemic humility—I would probably end up consulting with as many philosophers as possible and see how close we can come to a consensus decision regarding what to practically do with the button.

If it was just me (and maybe a few other like-minded people) in the universe, however, and if I was reasonably certain it would actually do what it said on the label, then I may very well press it. What about you, for the version I presented for your philosophy?
