
TL;DR

In population ethics, the total view entails the repugnant conclusion. Some people argue that, since the repugnant conclusion is obviously false, the total view is incorrect. I sum up two objections by Michael Huemer to the claim that the repugnant conclusion is obviously false.

First, its repugnance may be explained not by its falsehood, but by our biases and cognitive limitations.

  1. Instead of asking ourselves if A is better than Z, we tend to ask which of the two worlds we would personally prefer to live in.
  2. Scope insensitivity and our difficulty in compounding tiny quantities make the repugnant conclusion unintuitive.
  3. We tend to underestimate the quality of a life that is barely worth living.

Second, the repugnant conclusion necessarily follows from highly plausible moral principles. It is impossible to avoid it without accepting at least one of the following:

  1. Cyclical preferences.
  2. Equality in the distribution of utility is intrinsically bad.
  3. A situation that Pareto dominates another one is not preferable to the Pareto dominated one.

Why I wrote this post

I’m a facilitator for the EAVP Intro Program. When I discuss longtermism with participants, population ethics usually comes up. On a few occasions, the repugnant conclusion has been brought up as an objection to the total view.

A few months ago, I wrote a paper for university arguing against this objection. Making a post out of it is a quick task, and it seems useful to have a resource on the forum to share with Intro Program participants (or anyone who might be interested) when they bring up this argument. So here it is!

What is the repugnant conclusion?

The repugnant conclusion was originally stated by Derek Parfit (1984, p.388):

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living.

This clearly follows from the total view, according to which the value of a world X is represented by the function

V(X) = NX × WX,

where NX is the number of people who live in X and WX is their average well-being level.

Consider two hypothetical worlds, A and Z, to which we assign some arbitrary well-being levels measured on a cardinal scale. Positive numbers represent lives worth living, while negative ones represent lives not worth living. A contains ten billion people (NA) with a well-being level of 100 (WA). Z contains the same population as A plus 9.99 trillion additional people, for a total of ten trillion people (NZ), all with a well-being level of 0.2 (WZ). A simple multiplication reveals that:

V(A) = 10 billion × 100 = 1 trillion, while V(Z) = 10 trillion × 0.2 = 2 trillion, so V(Z) > V(A).

Therefore, the total view favors world Z over world A, and is thus committed to the repugnant conclusion.
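
If you prefer to see the arithmetic spelled out in code, here is a minimal sketch (my own illustration, using only the figures assumed above, not something from the original sources):

```python
# The total view values a world as (number of people) x (average well-being).
def total_value(population: float, average_wellbeing: float) -> float:
    return population * average_wellbeing

value_A = total_value(10e9, 100)    # world A: 10 billion people at well-being 100
value_Z = total_value(10e12, 0.2)   # world Z: 10 trillion people at well-being 0.2

print(value_A)            # 1e12, i.e. 1 trillion
print(value_Z)            # 2e12, i.e. 2 trillion
print(value_Z > value_A)  # True: the total view ranks Z above A
```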

According to Tännsjö (2002, p.1), "repugnant" is to be understood as "obviously false". If the repugnant conclusion is actually repugnant in this sense, then the total view must be wrong, for it entails such a false claim.

In the rest of this post, I will sum up two objections by Michael Huemer to the claim that the repugnant conclusion is obviously false.

Our unrepugnant intuition is biased

Huemer (2008, p.907) claims that our "unrepugnant intuition" that A is better than Z may not be as reliable as it first appears to be. When facing the choice between A and Z, several biases and cognitive limitations alter our judgment. Huemer identifies four of them.

Egoistic bias

The egoistic bias leads us to focus on the wrong question. Instead of asking ourselves if A is better than Z, we tend to ask which of the two worlds we would personally prefer to live in. If you were any of the (fewer) people living in A, you'd be having a much better time than if you were any of the (many) people living in Z (WA is 500 times larger than WZ), so the answer to the second question is obviously A. But this is totally irrelevant when answering the first question.

Tännsjö (2002, p.3) proposes a clever workaround. In order to choose impartially in which of the two worlds to live, we have to perform a Rawlsian thought experiment. Would A continue to be more appealing than Z behind a veil of ignorance? If our own existence were at stake, probably not.

In our example, NZ is 1000 times larger than NA, so the probability that a person chosen at random from the population of Z would also exist in A is 0.001. That means that our choice is not between WA and WZ, but between WA with probability 0.001 and WZ with certainty. The expected value of choosing to live in A is thus 100 × 0.001 = 0.1, while the expected value of choosing to live in Z is 0.2. A rational agent behind the veil of ignorance would thus prefer Z.
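
As a quick sanity check on those numbers, here is a small sketch (again my own, built only on the figures assumed in the example) of the expected well-being calculation behind the veil of ignorance:

```python
# 10 trillion possible "positions" behind the veil; only 10 billion of them
# exist if world A is chosen, while all of them exist if world Z is chosen.
p_exist_in_A = 10e9 / 10e12        # probability of existing at all, if A is chosen
expected_A = p_exist_in_A * 100    # well-being 100, but only with probability 0.001
expected_Z = 1.0 * 0.2             # well-being 0.2 with certainty

print(expected_A)  # 0.1
print(expected_Z)  # 0.2 -> a self-interested chooser behind the veil picks Z
```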

Scope insensitivity

Scope insensitivity undermines our ability to have reliable intuitions regarding large quantities. When facing the choice between A and Z, the number of lives that would exist has very little, if any, effect on the value we intuitively attach to each world.

Imagine population A—all ten billion of them. Now imagine the ten trillion people making up population Z. Perhaps you have successfully imagined a larger quantity of people. But was it a thousand times larger? Our limited cognitive capacities do not allow us to picture precisely such enormous numbers of people, much less to sympathize with all of them. As a result, while WA and WZ intuitively seem drastically different, NA and NZ do not. A, therefore, intuitively seems far more attractive than Z.

Compounding tiny quantities

We also struggle with compounding tiny quantities. We neglect small numbers, forgetting that they can become very large when compounded several times. This may explain the unintuitiveness of the thought that a sufficiently high number of low-utility lives has a greater value than a smaller number of high-utility ones.

How can we avoid both issues? By shutting up and multiplying, of course.

What life counts as “barely worth living”?

We tend to underrate the quality of lives barely worth living. According to Tännsjö (2002, p.5-6), our highly selective memory has a tight grip on the few blissful moments we experience, but it easily lets go of all the time spent below the threshold above which our lives are worth living. This leads us to overestimate the utility of our own life, which in turn makes us think that a life barely worth living must be a terrible one.

But, if Tännsjö is right, a life barely worth living is not so different from the typical life led by a citizen of an affluent Western country. He claims that

if only our basic needs are satisfied, then most of us are capable of living lives that, on balance, are worth experiencing. However, no matter how ‘lucky’ we are, how many ‘gadgets’ we happen to possess, we rarely reach beyond this level.

Thus, Z does not look "like a vast concentration camp". Instead, it contains lives quite like ours, and much better than the lives of people who are dying from painful diseases or those of non-human animals in factory farms.

Conclusion

The argument that our intuition deludes us does not prove that Z is better than A: our biases and cognitive limitations may simply be amplifying the repugnance of a conclusion that is in fact false. It does suggest, however, that we should at least question our intuitions.

Our brains did not evolve to deal with gigantic or minuscule numbers, let alone their products. Furthermore, we might focus on the wrong question, namely whether we would personally prefer to live a barely-worth-living life or a blissful one. Or we might even mistake "barely worth living" for "terrible".

Once we acknowledge the unreliability of our intuitions when considering the repugnant conclusion, its repugnance seems at least doubtful. Let’s now move on to the second argument.

Avoiding the repugnant conclusion is problematic

Tännsjö (2002, p.16) makes a very strong claim. Not only is the avoidance of the repugnant conclusion not, pace Parfit, a desideratum of any plausible moral theory, but it is a red flag. Since the repugnant conclusion follows from plausible moral principles, we must be suspicious of any theory that does not lead to it. In explaining this position, I will follow the Benign Addition Argument offered by Huemer (2008, p.901). 

The repugnant conclusion follows from plausible principles

Consider the following principles, which assume that we are comparing worlds with respect to utility alone.

The Benign Addition Principle: If worlds X and Y are so related that X would result from increasing the well-being of everyone in Y by some amount and adding some new people with worthwhile lives, then X is better than Y.

Non-anti-egalitarianism: If X and Y have the same population, but X has a higher average utility, a higher total utility, and a more equal distribution of utility than Y, then X is better than Y.

Transitivity: If X is better than Y and Y is better than Z, then X is better than Z.

These three premises, taken together, necessarily lead to the repugnant conclusion, so the only way to avoid it is to reject at least one of them.

To show this, we can go back to our original example and add a third hypothetical world: A+. This world contains the same ten billion people as A, but with a slightly greater well-being level of 101, plus another 9.99 trillion new people with a well-being level of 0.05. Let us also suppose that A+ and Z contain the very same people, although at different well-being levels.

By the Benign Addition Principle, A+ is clearly better than A, since it can be obtained by increasing the well-being of everyone in A by 1 and adding some new people with slightly worthwhile lives. By Non-anti-egalitarianism, Z is better than A+: the two worlds have the same population, but Z has a higher average utility (0.2, against roughly 0.15 for A+), a higher total utility (2 trillion, against roughly 1.51 trillion), and a more equal distribution of utility (everyone in Z is at 0.2, while A+ is extremely unequal). By Transitivity, given that Z is better than A+ and A+ is better than A, we arrive at the repugnant conclusion that Z is better than A.
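
To make the comparison between A+ and Z concrete, here is a short sketch (my own, using only the numbers assumed above) that checks the totals and averages:

```python
# World A+: the 10 billion people of A at well-being 101, plus 9.99 trillion
# new people at well-being 0.05. World Z: the same 10 trillion people, all at 0.2.
pop_original, pop_added = 10e9, 9.99e12

total_A_plus = pop_original * 101 + pop_added * 0.05     # ~1.51 trillion
total_Z = (pop_original + pop_added) * 0.2               # 2 trillion
avg_A_plus = total_A_plus / (pop_original + pop_added)   # ~0.151
avg_Z = 0.2

print(total_Z > total_A_plus)  # True: Z has the higher total utility
print(avg_Z > avg_A_plus)      # True: Z has the higher average utility
# Z is also perfectly equal (everyone at 0.2), while A+ is highly unequal,
# so Non-anti-egalitarianism ranks Z above A+.
```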

What happens if we reject those principles?

Intuitively, the Benign Addition Principle, Non-anti-egalitarianism, and Transitivity seem very reasonable. Can we safely reject any of them?

Rejecting Transitivity seems implausible. By doing so, we would license cyclical preferences, that is, preferring A to B, B to C, and C to A. But if you have such preferences, then one can easily pump money out of you by offering, for a small price for each transaction, to give you C in exchange for A, then B in exchange for C, then A in exchange for B, and so on indefinitely. Such preferences are thus irrational.
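
If it helps to see the money pump mechanically, here is a tiny sketch (my own illustration, not Huemer's) of an agent with these cyclical preferences being drained by repeated trades:

```python
# The agent prefers A to B, B to C, and C to A, and will pay a small fee
# for any swap that gives it something it prefers to what it currently holds.
accepted_swaps = {("C", "A"), ("B", "C"), ("A", "B")}  # (offered, held) pairs it accepts

holding, money, fee = "A", 100.0, 1.0
for offered in ["C", "B", "A"] * 5:                    # five full cycles of offers
    if (offered, holding) in accepted_swaps and money >= fee:
        holding, money = offered, money - fee          # each swap looks like an improvement

print(holding, money)  # 'A' 85.0: back where it started, 15 units poorer
```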

What about rejecting Non-anti-egalitarianism? It seems safe to assume that increasing total and average utility is a good thing. Therefore, the only way for Non-anti-egalitarianism to be false is for equality in the distribution of utility to be intrinsically bad. I doubt anyone is willing to accept such an implication.

Finally, let’s consider the Benign Addition Principle. If A+ is the result of increasing the well-being of everyone in A by some amount and adding some new people with worthwhile lives, then A+ Pareto dominates A. Every single person prefers his or her existence in A+ to his or her existence or non-existence in A. It thus seems hard to argue that, contrary to the opinion of every single person involved, A is better than A+.

Conclusion

Any population axiology that avoids the repugnant conclusion rejects at least one of the Benign Addition Principle, Non-anti-egalitarianism, and Transitivity. Of course, this argument does not conclusively prove that Z is better than A. By pointing out this trade-off, however, it makes a strong case against the repugnance, understood as obvious falsehood, of such a claim.

As Zuber et al. (2021) claimed, it is not necessary for an adequate population axiology to avoid the repugnant conclusion. To reject the total view, therefore, it is not sufficient to show that it entails the repugnant conclusion.

Acknowledgments

Thanks to Justis Mills, Lorenzo Buonanno, and Adam Elwood for helpful comments and feedback.

References

Huemer, M. (2008). In Defence of Repugnance. Mind, 117(468): 899–933.

Parfit, D. (1984). Reasons and Persons. Oxford: Clarendon Press.

Tännsjö, T. (2002). Why We Ought to Accept the Repugnant Conclusion. Utilitas, 14(3): 339–359.

Zuber, S., Venkatesh, N., Tännsjö, T., Tarsney, C., Stefánsson, H., Steele, K., . . . Asheim, G. (2021). What Should We Agree on about the Repugnant Conclusion? Utilitas, 33(4): 379–383.


 


Comments

Thank you for sharing this post - it's well written, well structured, relevant and concise. (And I agree with the conclusion, which I'm sure makes me like it more!)

Glad you enjoyed it!

Thanks for the post!

I'm particularly interested in the third objection you present - that the value of "lives barely worth living" may be underrated.

I wonder to what extent the intuition that world Z is bad compared to A is influenced by framing effects. For instance, if I think of "lives net positive but not by much", or something similar, this seems much more valuable than "lives barely worth living", although it means the same in population ethics (as I understand it).

I'm also sympathetic to the claim that one's response to world Z may be affected by one's perception of the goodness of ordinary (human) life. Perhaps Buddhists, who are convinced that ordinary life is pervaded with suffering, view any life that is net-positive as remarkably good.

Do you know if there exists any psychological literature on either of these two hypotheses? I'd be interested to research both.

However, even if we could show that the repugnance of the repugnant conclusion is influenced in these ways or even rendered unreliable, I doubt the same would be true for the "very repugnant conclusion":

for any world A with billions of happy people living wonderful lives, there is a world Z+ containing both a vast amount of mildly-satisfied lizards and billions of suffering people, such that Z+ is better than A.

(Credit to Joe Carlsmith, who mentioned this on some podcast)

You raised some interesting points!

It seems plausible that the framing effect could be at play here, and that different people would draw the line between a life that's worth living and one that's not at different points. I don't know of any literature on this, but I'd maybe take a look at the Happier Lives Institute's work.

And I'll need to think more seriously about the very repugnant conclusion. That's a tough one!

Instead of rejecting any of the Benign Addition Principle, Non-anti-egalitarianism, and Transitivity, you can reject the Independence of Irrelevant Alternatives, and I think this is more plausible than rejecting Transitivity and pretty plausible generally, although many may disagree. See my comment here illustrating: https://forum.effectivealtruism.org/posts/DCZhan8phEMRHuewk/person-affecting-intuitions-can-often-be-money-pumped?commentId=ZadcAxa2oBo3zQLuQ

I didn't think of that!

I'm curious about why you find rejecting IIA generally plausible.

I think it's plausible that some interests matter in relative terms between possible outcomes, rather than only in terms that can be described absolutely. I think it can be the case that it's neither better nor worse in itself to have a specific preference at all, no matter how satisfied or frustrated, even though it's better for it to be more satisfied between two outcomes in both of which it exists. Say a child's dream to go to the moon, or the wish of a specific person who can't walk to walk, or the wish to be with a loved one (e.g. grief over loss). I don't think taking away a frustrated preference makes someone better off in itself, except for certain kinds of preferences. I don't think adding a (satisfied) preference is ever good in itself.

Part of the reason might be that there's no natural unique 0 or neutral point, i.e. a single degree of preference satisfaction/frustration where we should be indifferent about having that preference at all. Or, at least, you can imagine degrees between perfectly satisfied and perfectly frustrated, but no natural way to set some partial satisfaction/frustration states on either side of 0.

Other common intuitions may violate IIA. We might say you're not obligated to make a great sacrifice for others, but if you are going to, it could be obligatory to do the most good with the same level of sacrifice (see this example and the discussion in that thread). Similarly for having a child: you have no obligation to have one at all, and it may be permissible to have a child as long as they at least have a good life, but if you do have a child, and you could easily guarantee a much better life for them than just good, you may be obligated to do so. Frick discusses these as "conditional reasons".

I guess these reasons could apply similarly to transitivity. An important issue with intransitivity is that it's not clear what act to choose if each available option is beaten by another, but intransitive views can be turned into transitive views that violate IIA through voting methods, especially beatpath/Schulze, like in this paper.


Perhaps I am thinking about this all wrong, but isn't it the case that whether or not Z is better than A, most people would prefer a "ZA" world (Z's population AND A's happiness) to Z? 

Therefore, the repugnant conclusion is only a problem if there is, in fact, a tradeoff between population size & happiness. However, this does not appear to be the case in a non-Malthusian world.

For instance, it seems pretty clear that we live in a "ZA" world compared to the world of only 300 years ago. Population, life expectancy and dignity all improved dramatically at the same time.

There is also no clear reason to believe that a ZA world compared to today's world is impossible either.

If, instead of the possible, we turn to the likely, the trend appears to be that population ultimately stabilizes. As such, the real-world task within our lifespan should be increasing the happiness of a mostly stable population, which is about as far as you can be from a repugnant conclusion sort of dilemma.

Yes, and the total view would bring you to that conclusion as well!

All ethical arguments are based on intuition, and here this one is doing a lot of work: "we tend to underestimate the quality of lives barely worth living". To me this is the important crux because the rest of the argument is well-trodden. Yes, moral philosophy is hard and there are no obvious unproblematic answers, and yes, small numbers add up. Tännsjö, Zapffe, Metzinger, and Benatar play this weird trick where they introspectively set an arbitrary line that separates net-negative and net-positive experience, extrapolate it to the rest of humanity, and based on that argue that most people spend most of their time on the wrong side of it. More standard intuitions point in the opposite direction; for not-super-depressed people, things can and do get really bad before not-existing starts to outshine existing! Admittedly "not-super-depressed people" is a huge qualifier, but on Earth the number of people who have, from our affluent Western country perspective, terrible lives, yet still want to exist, swamps the number of the (even idly) suicidally depressed. It's very implausible to me that I exist right above this line of neutrality when 1) most people have much worse lives than me and 2) they generally like living.

And whenever I see this argument that liking life is just a cognitive bias I imagine this conversation:

A: How are you?

B: Fine, how are–

A: Actually your life sucks.
