
Currently, I'm pursuing a bachelor's degree in Biological Sciences in order to become a researcher in the area of biorisk. I chose this path because I was confident that humanity would stop inflicting tremendous amounts of suffering on other animals and would come to have net positive value in the future.

However, there was a nagging thought in the back of my head about the possibility that it would not do so, and I found this article suggesting that there is a real possibility that such a horrible scenario might actually happen.

If there is indeed a very considerable chance that humanity will keep torturing animals at an ever-growing scale, and thus keep having negative net value for an extremely large portion of its history, doesn't that mean that we should strive to make humanity more likely to go extinct, not less?

7 Answers

This is the question. I agree with finm that we should stay alive since: 1) we just might figure out a way to stop the mass suffering, and 2) we just might develop the intention to do something about it. 

To add a third point, I would say: 3) if humanity goes extinct, then there is a possibility that either: 

  • a) no other species with humanity's intelligence and empathy ever comes into being, while nature carries on, thus guaranteeing mass suffering until the end of the universe; or 
  • b) even if another species like humanity (or humanity itself) emerges, that would take hundreds of millions of years, during which sentient beings would suffer.

So I'm of the belief that humanity should be kept alive, because it is the only—albeit small—glimmer of hope for sentient beings. Now, I am a bit more hopeful than you, simply because within the span of a mere 4,000 years of civilization (which is a blink of an eye in the grand scheme of things), humanity has, in many places: 

  • recognized the evils of slavery, caste systems, etc.; 
  • outlawed discrimination on the basis of race, ethnicity, and sex; 
  • done away with the belief that war is "glorious"; 
  • even passed laws outlawing certain practices against animals (e.g., California's Proposition 12); 
  • actually tried to realize utopia (e.g., the French and Russian Revolutions), even though those attempts failed spectacularly.

Vive humanity! Of course, we have also done as many—if not more—horrible things to each other and to animals, but ultimately... upon whom else can we rest our hopes, my friend?

I agree that, right now, we're partly in the dark about whether the future will be good if humanity survives. But if humanity survives, and continues to commit moral crimes, then there will still be humans around to notice that problem. And I expect that those humans will be better informed about (i) ways to end those moral crimes, and (ii) the chance those efforts will eventually succeed.

If future efforts to end moral crimes succeed, then of course it would be a great mistake to go extinct before that point. But even for the information value of knowing more about the prospects for humans and animals (and everything else that matters), it seems well worth staying alive.

I'm not convinced that efforts to end factory farming will (by default) become more likely to succeed over time - what's your thinking behind this? Given the current trajectory of society (below), whilst I'm hopeful that is the case, it's far from what I would expect. For example, I can imagine the "defensive capabilities" of the actors trying to uphold factory farming improving at the same or a faster rate than the capabilities of farmed animal advocates.

Additionally, I'm not sure that the information value about our future prospects, stated so simply, outweighs the suffering of trillions of animals over the coming decades. This feels like a statement that is easy for us to make as humans, who largely aren't subject to suffering as intense as that faced by many farmed animals, but it might look different if we thought about this from behind a veil of ignorance, where the likely outcome for a sentient being is a life of imprisonment and pain. 

finm
Thanks, I think both those points make sense.

On the second point about value of information: the future for animals without humans would likely still be bad (because of wild animal suffering), and a future with humans could be less bad for animals (because we alleviate both wild and farmed animal suffering). So I don't think it's necessarily true that something as abstract as ‘a clearer picture of the future’ can't be worth the price of present animal suffering, since one of the upshots of learning that picture might be to choose to live on and reduce overall animal suffering over the long run. Although of course you could just be very sceptical that the information value alone would be enough to justify another ⩾ half-century of animal suffering (and it certainly shouldn't be used as an excuse to wait around and not do things to urgently reduce that suffering).

Though I don't know exactly what you're pointing at re “defensive capabilities” of factory farming, I also think I share your short-term (say, ⩽ 25-year) pessimism about farmed animals. But in the longer run, I think there are some reasons for hope (if alt proteins get much cheaper and better, if humans do eventually decide to move away from animal agriculture for roughly ethical reasons, despite the track record of activism so far).

Of course, there is the question of what to do if you are much more pessimistic, even over the long run, about animal (or nonhuman) welfare. Even here, if “cause the end of human civilisation” were a serious option, I'd be very surprised if there weren't many other serious options available to end factory farming without also causing the worst calamity ever. (I don't mean to represent you as taking a stand on whether extinction would be good, fwiw.)

Some people make the argument that the difference in suffering between a worst-case scenario (s-risk) and a business-as-usual scenario is likely much larger than the difference in suffering between a business-as-usual scenario and a future without humans. This suggests focusing on ways to reduce s-risks rather than increasing extinction risk.
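
To put that comparison in rough symbols (the notation is mine, not the commenter's): writing $S(\cdot)$ for expected total suffering under each scenario, the claim is roughly that

$$S(\text{worst case}) - S(\text{business as usual}) \;\gg\; S(\text{business as usual}) - S(\text{no humans}),$$

so the larger gap in suffering is the one between business as usual and the worst case, which is the stated reason for prioritising s-risk reduction over increasing extinction risk.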

A helpful comment from a while back: https://forum.effectivealtruism.org/posts/rRpDeniy9FBmAwMqr/arguments-for-why-preventing-human-extinction-is-wrong?commentId=fPcdCpAgsmTobjJRB

Personally, I suspect there's a lot of overlap between risk factors for extinction risk and risk factors for s-risks. In a world where extinction is a serious possibility, it's likely that there would be a lot of things that are very wrong, and these things could lead to even worse outcomes like s-risks or hyperexistential risks.

No, there is no way to be confident. 

I think humanity is intellectually on a trajectory towards greater concern for non-human animals. But this is not a reliable argument. Trajectories can reverse or stall, and most of the world is likely to remain, at best, indifferent to and complicit in the increasing suffering of farmed animals for decades to come. We could easily "lock in" our (fairly horrific) modern norms.

But I think we should probably still lean towards preventing human extinction. 

The main reason for this is the pursuit of convergent goals.

It's just way harder to integrate pro-extinction actions into the other things that we care about and are trying to do as a movement. 

We care about making people and animals healthier and happier, avoiding mass suffering events / pandemics / global conflict, improving global institutions, and pursuing moral progress. There are many actions that can improve these metrics - reducing pandemic risk, making AI safer, supporting global development, preventing great power conflict - which also tend to reduce extinction risk. But there are very few things we can do that improve these metrics while increasing x-risk. 

Even if extinction itself would be positive expected value, trying to make humans go extinct is a bit all-or-nothing, and you probably won't ever be presented with a choice where x-risk is the only variable at play. Most of the things you can do that increase human x-risk at the margins also probably increase the chance of other bad things happening. This means that there are very few actions that you could take with a view towards increasing x-risk that are positive expected value.
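
One way to make that point explicit (my framing, not the answerer's): for any concrete action $a$, something like

$$\mathbb{E}[\text{value of } a] \;=\; \Delta p_{\text{ext}}(a)\cdot V_{\text{ext}} \;+\; (\text{other effects of } a),$$

where $\Delta p_{\text{ext}}(a)$ is the change in extinction probability and $V_{\text{ext}}$ is the (disputed) value of extinction itself. The claim above is that the actions which actually make $\Delta p_{\text{ext}}$ positive tend to have a strongly negative "other effects" term (more conflict, more pandemics, worse institutions), so even granting a positive $V_{\text{ext}}$, the overall expected value rarely comes out positive.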

I know this is hardly a rousing argument to inspire you in your career in biorisk, but I think it should at least help you guard against taking a stronger pro-extinction view. 

If humans go extinct, surely wild animal suffering will continue for another billion years on Earth - and there are a lot more wild animals than farmed animals. If we survive, we can continue the work of improving the lives of both farmed and wild animals.

Unfortunately, it is not worth the risk of us spreading wild animals throughout the galaxies. Then there's the fact that we might torture digital beings.

Utilitarians aware of the cosmic endowment, at least, can take comfort in the fact that the prospect of quadrillions of animals suffering isn't even a feather in the scales. They shut up and multiply.

(Many others should also hope humanity doesn't go extinct soon, for various moral and empirical reasons. But the above point is often missed among people I know.)

I worry about this line of reasoning because it's ends-justify-the-means thinking.

Let's say billions of people were being tortured right now, and some longtermists wrote about how this isn't even a feather in the scales compared to the cosmic endowment. These longtermists would be accused of callously gambling billions of years on suffering on a theoretical idea. I can just imagine The Guardian's articles about how SBF's naive utilitarianism is alive and well in EA.

The difference between the scenario for animals and the scenario for humans is that the former is socially acceptable but the latter is not. There isn't a difference in the actual badness.

Separately, to engage with the utilitarian merits of your argument, my main skepticism is an unwillingness to go all-in on ideas which remain theoretical when the stakes are billions of years of torture. (For example, let's say we ignore factory farming, and then there's a still unknown consideration which prevents us or anyone else from accessing the cosmic endowment. That scares me.) Also, though I'm not a negative utilitarian, I think I take arguments for suffering-focused views more seriously than you might.

I'm skeptical that humans will ever realize the full cosmic endowment, and that even if we do, the future will be positive for most of the quintillions of beings involved.

First, as this video discusses, it may be difficult to spread beyond our own star system, because habitable planets may be few and far between. The prospect of finding a few habitable planets might not justify the expense of sending generation ships (even ones populated with digital minds) out into deep space to search for them. And since Earth will remain habitable for the next billion y... (read more)

Thanks for the comment, Zach. I upvoted it.

I fully endorse expected total hedonistic utilitarianism[1], but this does not imply any reduction in extinction risk is way more valuable than a reduction in nearterm suffering. I guess you want to make this case with a comparison like the following:

  • If extinction risk is reduced in absolute terms by 10^-10, and the value of the future is 10^50 lives, then one would save 10^40 (= 10^(50 - 10)) lives.
  • However, animal welfare or global health and development interventions have an astronomically low impact compar
... (read more)
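
Written out, the comparison this comment attributes to the parent (using its own illustrative figures; the symbols $\Delta p$ and $V_{\text{future}}$ are my shorthand, not the commenter's) is

$$\underbrace{\Delta p}_{10^{-10}} \times \underbrace{V_{\text{future}}}_{10^{50}\ \text{lives}} \;=\; 10^{40}\ \text{lives in expectation},$$

against which the direct impact of animal welfare or global health interventions looks astronomically small - an implication the comment's opening sentence disputes.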

To answer your question about confidence over millions of years very directly: I think the answer is "no", because I don't think we can be reasonably confident and precise about any significant belief about the state of the universe millions of years into the future.[1] I'd also note that the article you link isn't very convincing for someone who doesn't share its premises, though I can see it leading to 'nagging thoughts', as you put it.

Other ways to answer the latter question about human extinction could be:

  • That humanity is positive (if human moral value is taken to be larger than the effect on animals)
  • That humanity is net-positive (if the total effect of humanity is positive, most likely because of a belief that wild-animal suffering is even worse)
  • Option value, or the belief that humanity has the capacity to change (as others have stated)

In practice though, I think if you reach a point where you might consider it a moral course of action to make all of humanity extinct, perhaps consider this a modus tollens of the principles that brought you to that conclusion, rather than a logical consequence that you ought to believe and act on. (I see David made a similar comment basically at the same time.)

  1. ^

    Some exceptions for physics, especially outside of our lightcone, yada yada, but I think this holds for the class of beliefs that are similar to this question (hence my saying "significant beliefs").

2 Comments

It only definitely follows from humans being net negative in expectation that we should try to make humans go extinct if you are both a full utilitarian and "naive" about it, i.e. prepared to break usually sacrosanct moral rules when you personally judge that doing so is likely to have the best consequences - something most utilitarians take to usually result in bad consequences and therefore to be discouraged. Another way to describe 'make humanity more likely to go extinct' is 'murder more people than all the worst dictators in history combined'. That is the sort of thing that is going to look like a prime candidate for "do not do this, even if it has the best consequences" on non-utilitarian moral views. And it's also obviously breaking standard moral rules.

I don't have a good answer to this, but I did read a blog post recently which might be relevant. In it, two philosophers summarize their paper, which argues against drawing the conclusion that longtermists should hasten extinction rather than prevent it. (The impetus for their paper was this paper by Richard Pettigrew, which argued that longtermism should be highly risk-averse. I realize that this is a slightly separate question, but the discussion seems relevant.) Hope this helps! 
