by [anonymous]
Mar 9 2017 · 7 min read


TL;DR: One way to justify support for causes which mainly promise near-term but not far future benefits, such as global development and animal welfare, is the ‘intuition of neutrality’: adding possible future people with positive welfare does not add value to the world. Most people who endorse claims like this also endorse ‘the asymmetry’: adding possible future people with negative welfare subtracts value from the world. However, asymmetric neutralist views are under significant pressure to accept that steering the long-run future is overwhelmingly important. In short, given some plausible additional premises, these views are practically similar to negative utilitarianism.

 

[Edit: to clarify, I'm not endorsing the asymmetry or the intuition of neutrality. New update: To clarify further, I actually think these views are very implausible, but it's worth getting clear on what they imply. The practical implications arguably count strongly against them]

1. Neutrality and the asymmetry

Disagreements about population ethics – how to value populations of different sizes, realised at different times – appear to drive a significant portion of disagreements about cause selection among effective altruists.[1] Those who believe that the far future has extremely large value tend to move away from spending their time and money on cause areas that don’t promise significant long-term benefits, such as global poverty reduction and animal welfare promotion. In contrast, people who put greater weight on the current generation tend to support these cause areas.

One of the most natural ways to ground this weighting is the ‘intuition of neutrality’:

Intuition of neutrality – Adding future possible people with positive welfare does not make the world better.

One could ground this in a ‘person-affecting theory’. Such theories, like all others in population ethics, have many counterintuitive implications.

Most proponents of what I’ll call neutralist theories also endorse ‘the asymmetry’ between future bad lives and future good lives:

The asymmetry – Adding future possible people with positive welfare does not make the world better, but adding future possible people with negative welfare makes the world worse.

The intuition behind the asymmetry is obvious: when making decisions today, we should not ignore, say, possible people born in 100 years’ time who will live in constant agony. (It isn’t clear whether the asymmetry has any justification beyond this intuition, and its justifiability continues to be a source of philosophical disagreement.)

Here, I’m going to figure out what asymmetric neutralist theories imply for cause selection. I’ll argue that asymmetric neutralist theories are under significant pressure to be aggregative and temporally neutral about future bad lives. They are therefore under significant pressure to accept that affecting the far future is overwhelmingly important.

 

2. What should asymmetric neutralist theories say about future bad lives?

The weight asymmetric neutralist theories give to future lives with negative welfare will determine the theories’ practical implications. So, what should that weight be? I’ll explore this by looking at what I call Asymmetric Neutralist Utilitarianism (ANU).

Call lives with net suffering over pleasure ‘bad lives’. It seems plausible that ANU should say that bad lives have non-diminishing disvalue across persons and across time. More technically, it should endorse additive aggregation across future bad lives, and be temporally neutral in how it weights those lives. (Strictly, we should substitute ‘sentient beings’ for ‘people’ throughout, but that’s a bit clunky.)

Impartial treatment of future bad lives, regardless of when they occur

It’s plausible that future people suffering the same amount should count equally regardless of when those lives occur. Suppose that Gavin suffers a life of agony at -100 welfare in the year 2200, and that Stacey also has -100 welfare in the year 2600. It seems wrong to say that merely because Stacey’s suffering happens later, it should count for less than Gavin’s. This would violate an important principle of impartiality. It is true that many people believe partiality is often permitted, but this is usually partiality towards people we know, rather than towards strangers who are not yet born. Discounting using a pure time preference of, say, 1% per year entails that the suffering of people born 500 years into the future counts for only a tiny fraction of the suffering of people born 100 years into the future. This looks hard to justify. We should be willing to sacrifice a small amount of value today in order to prevent massive future suffering.
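As a rough illustration of how steeply pure time preference down-weights future suffering, here is a minimal sketch (the 1% rate and the 100- and 500-year horizons are the illustrative figures from the example above; the code itself is my addition, not part of the original argument):

```python
# Illustrative sketch: how a 1% annual pure time preference weights future suffering.
def discount_factor(years_ahead, annual_rate=0.01):
    """Weight given to welfare occurring `years_ahead` years in the future."""
    return (1 - annual_rate) ** years_ahead

for years in (100, 500):
    print(f"Suffering {years} years from now counts at {discount_factor(years):.4f} of present suffering")

# Roughly 0.37 at 100 years and 0.0066 at 500 years: under this discounting,
# Stacey's suffering would count for less than 2% of Gavin's.
```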

The badness of future bad lives adds up and is non-diminishing as the population increases

It’s plausible that future suffering should aggregate and have non-diminishing disvalue across persons. Consider two states of affairs involving possible future people:

              A. Vic lives at -100 welfare.

              B. Vic and Bob each live at -100 welfare. 

It seems that ANU ought to say that B is twice as bad as A. The reason is that the badness of suffering adds up across persons. In general, it is plausible that N people living at –x welfare is N times as bad as 1 person living at –x. It just does not seem plausible that suffering has diminishing marginal disutility across persons: even if there are one trillion others living in misery, that doesn’t make it in any way less bad to add a new suffering person. We can understand why resources like money might have diminishing utility for a single person, but it is difficult to see why suffering aggregated across persons would behave in the same way.
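To make the contrast concrete, here is a toy sketch of additive versus diminishing aggregation (the square-root rule is my own illustrative stand-in for ‘diminishing disvalue across persons’, not something the post proposes):

```python
import math

# Additive aggregation: total disvalue grows linearly with the number of bad lives,
# so N people at -x welfare is N times as bad as one person at -x.
def additive_disvalue(n_people, welfare=-100):
    return n_people * welfare

# Hypothetical diminishing rule (square root of the head count), shown only for contrast.
def diminishing_disvalue(n_people, welfare=-100):
    return math.sqrt(n_people) * welfare

print(additive_disvalue(1), additive_disvalue(2))        # -100 -200: B is twice as bad as A
print(diminishing_disvalue(1), diminishing_disvalue(2))  # -100.0 ~-141.4: adding Bob counts for less
```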

 

3. Reasons to think there will be an extremely large number of expected bad lives in the future

The expected number of (very) bad lives in the future is extremely large. They could come from four sources:

1.      Bad future human lives

There are probably lots of bad human lives at the moment: adults suffering rare and painful diseases or prolonged and persistent unipolar depression, or children in low-income countries suffering and then dying. It’s likely that bad lives caused by poverty and illness will fall a lot in the next 100 years as incomes rise and health improves. It’s less clear whether there will be vast and rapid reductions in depression over the next 100 years and beyond because, unlike health and money, this doesn’t appear to be a major policy priority even in high-income countries, and it’s only weakly affected by health and money.[2] The arrival of machine superintelligence could arguably prevent a lot of human suffering in the future. But since the future is so long, even a very low error rate at preventing bad lives would imply a truly massive number of future bad lives. It seems unreasonable to be certain that the error rate would be sufficiently low.
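For a sense of the magnitudes at stake, here is a back-of-the-envelope sketch (both numbers are invented placeholders, not estimates from the post):

```python
# Invented placeholder figures, purely to show why even a low error rate matters.
expected_future_human_lives = 1e16   # assumed total number of future human lives
error_rate = 1e-4                    # assumed fraction of future lives that end up bad despite prevention efforts

expected_bad_lives = expected_future_human_lives * error_rate
print(f"{expected_bad_lives:.0e} expected bad lives")   # 1e+12: still a truly massive number
```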

2.      Wild animal suffering

It’s controversial whether there is a preponderance of suffering over pleasure among wild animals. It’s not controversial that there is a massive number of bad wild animal lives. According to Oscar Horta, the overwhelming majority of animals die shortly after coming into existence, after starving or being eaten alive. It seems reasonable to expect there to be at least a 1% chance that billions of animals will suffer horribly beyond 2100. Machine superintelligence could help, but preventing wild animal suffering is much harder than preventing human suffering, and it is less probable that wild animal suffering prevention will be in the value function of an AI than human suffering prevention: whether we put the goals into the AI directly or it learns our values, since most people don’t care about wild animal suffering, neither would the AI. Again, even a low error rate would imply massive future wild animal suffering.

3.      Sentient AI

It’s plausible that we will eventually be able to create sentient machines. If so, there is a non-negligible probability that someone in the far future will, by accident or design, create a large number of suffering machines.

4.      Suffering on other planets

There are probably sentient life forms in other galaxies that are suffering. It’s plausibly in our power to reach these life forms and prevent their suffering, over very long timeframes.

 

4. The practical upshot

Since ANU only counts future bad lives, and there are lots of them in expectation, ANU plus some plausible empirical premises implies that the far future is astronomically bad. This creates a swamping concern for ANU: if we have even the slightest chance of preventing all future bad lives from occurring, that should take precedence over anything we could plausibly achieve for the current generation. It’s equivalent to a tiny chance of destroying a massive torture factory.
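A back-of-the-envelope expected value comparison shows the structure of the swamping worry (every number here is a made-up placeholder chosen only to illustrate the shape of the argument):

```python
# Made-up placeholder magnitudes, purely illustrative.
expected_future_bad_lives = 1e15   # assumed number of future bad lives
disvalue_per_bad_life = 100        # assumed units of disvalue per bad life
p_prevent_all = 1e-9               # assumed tiny chance an intervention prevents them all

near_term_gain = 1e6               # assumed value plausibly achievable for the current generation

ev_prevention = p_prevent_all * expected_future_bad_lives * disvalue_per_bad_life
print(ev_prevention, near_term_gain)   # 1e8 vs 1e6: even a one-in-a-billion chance dominates
```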

It’s not completely straightforward to work out the practical implications of ANU. It’s tempting to say that it implies that the expected value of a miniscule increase in existential risk to all sentient life is astronomical. This is not necessarily true: an increase in existential risk might also deprive people of superior future opportunities to prevent future bad lives.

Example

Suppose that Basil could perform action A, which increases the risk of immediate extinction to all sentient life by 1%. However, we know that if Basil doesn’t perform A, then in 100 years’ time Manuel will perform action B, which increases the risk of immediate extinction to all sentient life by 50%.

From the point of view of ANU, Basil should not perform A even though doing so would increase the risk of immediate extinction to all sentient life: it is not the best available way to prevent the massive number of future bad lives, since refraining leaves open Manuel’s action B, which is far more likely to bring extinction about.
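Under some toy assumptions, the comparison can be made explicit. I read the example as stipulating that performing A forecloses B, and every magnitude below is invented for illustration; only the 1% and 50% figures come from the example:

```python
# Invented magnitudes; the only structure taken from the example is the 1% vs 50% risk.
future_bad_lives = 1e15            # assumed bad lives if sentient life continues indefinitely
bad_lives_next_100_years = 1e10    # assumed bad lives between now and Manuel's action B

# Basil performs A: 1% chance of immediate extinction; otherwise suffering continues
# as before, and (by stipulation) B never happens.
expected_bad_lives_if_A = 0.99 * (bad_lives_next_100_years + future_bad_lives)

# Basil refrains: suffering continues for 100 years, then B gives a 50% chance of extinction.
expected_bad_lives_if_not_A = bad_lives_next_100_years + 0.5 * future_bad_lives

# True: by ANU's lights, refraining from A leads to fewer expected bad lives.
print(expected_bad_lives_if_A > expected_bad_lives_if_not_A)
```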

It might be argued that most people cannot in fact have much influence on the chance that future bad lives occur, so they should instead devote their time to things they can affect, such as global poverty. But this argument works equally well against total utilitarians who work on existential risk reduction, so anyone who accepts it against ANU should also accept it against them.

 

[Thanks to Stefan Schubert, Michael Plant, and Michelle Hutchinson for v. handy comments.]

 



[1] I’m not sure how much.

[2] The WHO projects that depressive disorders will be the second leading cause of DALYs in 2030. DALYs also understate the health burden of depression.

Comments (14)

Thanks for your post! I agree that work on preventing risks of future suffering is highly valuable.

It’s tempting to say that it implies that the expected value of a miniscule increase in existential risk to all sentient life is astronomical.

Even if the future is negative according to your values, there are strong reasons not to increase existential risk. This would be extremely uncooperative towards other value systems, and there are many good reasons to be nice to other value systems. It is better to pull the rope sideways by working to improve the future (i.e. reducing risks of astronomical suffering) conditional on there being a future.

In addition, I think it makes sense for utilitarians to adopt a quasi-deontological rule against using violence, regardless of whether one is a classical utilitarian or suffering-focused. This obviously prohibits something like increasing risks of extinction.

Great post!

I think that suffering-focused altruists should not try to increase existential risks: doing so would be extremely uncooperative, it would forgo the possibility of preventing large amounts of suffering in the future, and there are also reasons of moral uncertainty.

If you’re interested in reducing as much suffering as possible, you might like to get in touch with us at the Foundational Research Institute. Our mission is to reduce risks of astronomical suffering, or "s-risks."

Also, the chances of actually sterilizing the biosphere are extremely tiny, which means you would simply be ensuring wild animal suffering for at least as long as Earth remains habitable. Other planets are additionally potentially major terms in the suffering equation.

Thanks for the post. I agree that those who embrace the asymmetry should be concerned about risks of future suffering.

I would guess that few EAs have a pure time preference for the short term. Rather, I suspect that most short-term-focused EAs are uncertain of the tractability of far-future work (due to long, complex, hard-to-predict causal chains), and some (such as a coalition within my own moral parliament) may be risk-averse. You're right that these considerations also apply to non-suffering-focused utilitarians.

It’s tempting to say that it implies that the expected value of a miniscule increase in existential risk to all sentient life is astronomical.

As you mention, there are complexities that need to be accounted for. For example, one should think about how catastrophic risks (almost all of which would not cause human extinction) would affect the trajectory of the far future.

It's much easier to get people behind not spreading astronomical amounts of suffering in the future than behind eliminating all current humans, so a more moderate approach is probably better. (Of course, it's also difficult to steer humanity's future trajectory in ways that ensure that suffering-averting measures are actually carried out.)

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

[anonymous] · 7y

Thanks for this. It'd be interesting if there were survey evidence on this. Some anecdotal stuff the other way... On the EA funds page, Beckstead mentions person-affecting views as one of the reasons that one might not go into far future causes (https://app.effectivealtruism.org/funds/far-future). Some Givewell staffers apparently endorse person-affecting views and avoid the far future stuff on that basis - http://blog.givewell.org/2016/03/10/march-2016-open-thread/#comment-939058.

Great post John. I don't think I'd seen the long-term implications of the neutrality intuition pointed out elsewhere. Most people who endorse it seem to think it permits focusing on the present, which I agree isn't correct.

"Adding future possible people with positive welfare does not make the world better."

I find that claim ridiculous. How could giving the gift of a joyful life have zero value?

[anonymous] · 7y

Yes I agree, but many people apparently do not.

Found this post again after many months. Don't those who endorse the asymmetry tend to think neutrality is 'greedy' in the sense that if you add a mix of happy and unhappy lives, such that future total welfare is positive, then the outcome has zero value? Your approach is the 'non-greedy' one where happy lives never contribute towards outcome value and unhappy lives always count against. On the greedy approach, I think it follows we have no reason to worry about the future unless it's negative. I think Bader supports something like the greedy version. I'm somewhat unsure on this.

Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Hello Michael,

I think the key point of John's argument is that he's departing from classical utilitarianism in a particular way. That way is to say future happy lives have no value, but future bad lives have negative value. The rest of the argument then follows.

Hence John's argument isn't a dissent from any of the empirical predictions about the future. The idea is that the ANU proponent can agree with Bostrom et al. about what actually happens, but disagree about how good it is.

[anonymous] · 7y

Thanks for your comment. I agree with Michael Plant's response below. I am not saying that there will be a preponderance of suffering over pleasure in the future. I am saying that if you ignore all future pleasure and only take account of future suffering, then the future is astronomically bad.

People like Bostrom have thoroughly considered how valuable the future might be. The view in existential risk reduction circles is simply that the future has positive expected value on most plausible moral systems. There are a bunch of arguments for this. One can argue from improvements in welfare, decreases in war, the emergence of more egalitarian movements over time, the anticipated disappearance of scarcity and of reliance on factory farming, increasing societal wisdom over time, and dozens of other reasons. One way of thinking about this, if you are a symmetric utilitarian, is that we don't have much reason to think that either pain or pleasure is more energy efficient than the other (see 'Are pain and pleasure equally energy efficient?': https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html). Since a singleton would be correlated with some relevant values, it should produce much more pleasure than pain, so the future should have strongly positive net value. I think that to the extent that we can research this question, we can say fairly confidently that, for usual value systems, the future has positive expectation.

The reason I think people tend to shy away from public debates on this topic, such as when arguing for the value of existential risk research, is that doing so might risk creating a false equivalence between themselves and very destructive positions, which would be very harmful.