
Gianfranco Pellegrino has written an interesting essay arguing that effective altruism leads to what he calls the Altruistic Repugnant Conclusion. In this post, I will provide a brief version of his argument and then note one possible response.

The Argument

Pellegrino begins by identifying the following as the core tenet of effective altruism:

"Effective Altruist Maximization (AM): We ought to do the most good we can, maximizing the impact of donating to charities on the margin and counterfactually —which means that among the available charities, the one that is most effective on the margin should be chosen" (2).

He next argues that this core tenet can best be articulated as the following principle:

"Doing the most good amounts to bringing about the greatest benefit to the greatest number" with "gains in diffusion compensat[ing] for losses in size, and vice versa" (7, 9).

He then poses a hypothetical in which an altruist is offered a choice.* The altruist can:

"[1] provide consistent, full nutrition and health care to 100 people, such . . . that instead of growing up malnourished they spend their 40-years long lives relatively healthy; [or]

[2] prevent[] one case of relatively mild non-fatal malaria [say, a fever that lasts a few days] for [each of] 1 billion people, without having a significant impact on the rest of their lives" (14).

Pellegrino argues that choosing the second option (the Altruistic Repugnant Conclusion) is a "necessary consequence" of the principle from above, but that "[b]ringing about very tiny, but immensely diffused, benefits instead [of] less diffused, but more substantial, benefits is seriously wrong" (15).

Based on this, he claims that "either effective altruists should accept [the Altruistic Repugnant Conclusion], thereby swallowing its repugnance, or they should give up their core tenet [of Effective Altruist Maximization]" (20-21).

You can read Pellegrino's full essay here.

A Possible Response

As Pellegrino acknowledges, "EA has often been the target of criticisms historically pressed against standard Utilitarianism[,] [and his] paper [is] no exception" (21). In light of this, one way to respond to his argument is to borrow from responses to other critiques of effective altruism that are premised on effective altruism accepting utilitarianism. 

Specifically, one could argue that "[Pellegrino's] arguments appeal only to hypothetical (rather than actual) cases in which there is a supposed conflict between effective altruist recommendations and [intuition] and thus fail to show that effective altruist recommendations actually do [lead to a repugnant conclusion]." 

Feel free to share other responses to Pellegrino's argument. 

*Pellegrino's hypothetical is based on a similar hypothetical posed by Holden Karnofsky. In both Karnofsky's hypothetical and Pellegrino's hypothetical, there are three options. I have limited the hypothetical to two options for the sake of simplicity. 

Comments

Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1/10,000,000 risk of death. That level of risk is in line with traveling a few kilometers by walking, and a small fraction of the risk associated with a day skiing: see the Wikipedia entry on micromorts. We all make such tradeoffs every day, taking on small risks of large harm for high probabilities of smaller benefits that have better expected value.

So behind the veil of ignorance, for a fixed population size, the 'altruistic repugnant conclusion' is actually just what beneficiaries would want for themselves. 'Repugnance' would involve the donor prioritizing their scope-insensitive response over the interests of the beneficiaries.
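
As a rough sketch of the expected-value claim above (the value-of-statistical-life figure is an assumption added for illustration, not something from the comment):

```python
# Hypothetical expected-value comparison from the beneficiary's perspective.
# The $10 million value of a statistical life is an assumed figure for illustration.
value_of_statistical_life = 10_000_000   # dollars (assumption)
risk_of_death = 1 / 10_000_000
cash_benefit = 100                       # dollars

expected_cost_of_risk = risk_of_death * value_of_statistical_life
print(expected_cost_of_risk)                 # 1.0 dollars
print(cash_benefit > expected_cost_of_risk)  # True: the cash is worth more in expectation
```

On that assumption, removing the risk is worth about $1 in expectation, so a roughly risk-neutral beneficiary prefers the $100.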

An article by Barbara Fried makes a very strong case against this sort of anti-aggregationism based on the ubiquity of such tradeoffs.

Separately, in the linked Holden blog post, it seems that the comparison is made between 100 large impacts and 10,000 small impacts that are well under 1% as large; i.e. the hypothetical compares larger total and per-beneficiary impacts against a smaller total benefit distributed over more beneficiaries.

That's not a good illustration for anti-aggregationism.

(2) Provide consistent, full nutrition and health care to 100 people, such that instead of growing up malnourished (leading to lower height, lower weight, lower intelligence, and other symptoms) they spend their lives relatively healthy. (For simplicity, though not accuracy, assume this doesn’t affect their actual lifespan – they still live about 40 years.)

This sounds like improving health significantly, e.g. 10% or more, over 14,600 days each, or 1.46 million days total. Call it 146,000 disability-adjusted life-days.

(3) Prevent one case of relatively mild non-fatal malaria (say, a fever that lasts a few days) for each of 10,000 people, without having a significant impact on the rest of their lives.

Let's say mild non-fatal malaria costs half a life-day per day, and 'a few days' is 6 days. Then the stakes for these 10,000 people are 30,000 disability-adjusted life-days.

146,000 disability-adjusted life-days is a lot more than 30,000 disability-adjusted life-days.
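
A minimal sketch of the back-of-envelope arithmetic above, using the commenter's assumed figures (40-year lives, a ~10% health improvement, a 6-day fever, half a life-day lost per fever day):

```python
# Option 2: consistent nutrition and health care for 100 people over ~40-year lives.
people_helped = 100
days_per_life = 40 * 365                  # ~14,600 days
health_gain_per_day = 0.10                # assumed fractional improvement
option_2_life_days = people_helped * days_per_life * health_gain_per_day
print(option_2_life_days)                 # 146000.0 disability-adjusted life-days

# Option 3: prevent one mild malaria episode for each of 10,000 people.
people_spared = 10_000
fever_days = 6
loss_per_fever_day = 0.5                  # life-days lost per day of mild malaria
option_3_life_days = people_spared * fever_days * loss_per_fever_day
print(option_3_life_days)                 # 30000.0 disability-adjusted life-days
```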

[anonymous]

This is true. Still, for many people, intuitions against aggregation seem to stand up even if the number of people with mild ailments increases without limit (millions, billions, and beyond). For some empirical evidence, see http://eprints.lse.ac.uk/55883/1/__lse.ac.uk_storage_LIBRARY_Secondary_libfile_shared_repository_Content_Voorhoeve,%20A_How%20should%20we%20aggregate_Voorhoeve_How%20should%20we%20aggregate_2014.pdf

Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1/10,000,000 risk of death.

Would you also volunteer to be killed so that 10,000,000 people just like you could have $100 that they could only spend to counterfactually benefit themselves?

I think the probability here matters beyond just its effect on the expected utility, contrary, of course, to EU maximization. I'd take $100 at the cost of an additional 1/10,000,000 risk of eternal torture (or any outcome that is finitely but arbitrarily bad). On the other hand, consider the following 5 worlds:

A. Status quo with 10,000,000 people with finite lives and utilities. This world has finite utility.

B. 9,999,999 people get an extra $100 compared to world A, and the other person is tortured for eternity. This world definitely has a total utility of negative infinity.

C. The 10,000,000 people each decide to take $100 for an independent 1/10,000,000 risk of eternal torture. This world, with probability ~ 1-1/e ~ 0.63 (i.e. "probably"), has a total utility of negative infinity.

D. The 10,000,000 people together decide to take $100 for a 1/10,000,000 risk that they all are tortured for eternity (i.e. none of them are tortured, or all of them are tortured together). This world, with probability 9,999,999/10,000,000, has finite utility.

E. Only one out of the 10,000,000 people decides to take $100 for a 1/10,000,000 risk of eternal torture. This world, with probability 9,999,999/10,000,000, has finite utility.

I would say D >> E > A >>>> C >> B, despite the fact that in expected total utility, A >>>> B=C=D=E. If I were convinced this world will be reproduced infinitely many times (or e.g. 10,000,000 times) independently, I'd choose A, consistently with expected utility.

So, when I take $100 for a 1/10,000,000 risk of death, it's not because I'm maximizing expected utility; it's because I don't care about any 1/10,000,000 risk. I'm only going to live once, so I'd have to take that trade (or similar such trades) hundreds of times for it to even start to matter to me. However, I also (probably) wouldn't commit to taking this trade a million times (or a single equivalent trade, with $100,000,000 for a ~0.1 probability of eternal torture; you can adjust the cash for diminishing marginal returns). Similarly, if hundreds of people took the trade (with independent risk), I'd start to be worried, and I'd (probably) want to prevent a million people from doing it.
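
The probability figures in this comment can be checked with a short sketch (assuming independent risks, as stated):

```python
# World C: each of 10,000,000 people takes an independent 1/10,000,000 risk.
n = 10_000_000
p = 1 / n
p_at_least_one_tortured = 1 - (1 - p) ** n
print(p_at_least_one_tortured)        # ~0.632, i.e. roughly 1 - 1/e

# Worlds D and E: only a single 1/n risk is run, so with probability
# 9,999,999/10,000,000 no one is tortured.
print(1 - p)                          # 0.9999999

# Taking the trade a million times: overall risk of the bad outcome is ~0.1.
print(1 - (1 - p) ** 1_000_000)       # ~0.095
```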

It might be objected that the problem is imagining how the benefits of sparing few days of malaria to 1 billion people are aggregated, and that our feeling of repugnance derives from our failure to see that this aggregated benefit is immensely larger than the benefit of increased educational opportunities for few people. But this begs the question. The problem with ARC is exactly that to many of us the benefit of giving better education to 100 people seems worthy of giving up the tiny aggregated benefit of sparing few days of non-fatal malaria to 1 billion people.

I think he fails to do justice to this objection. It is not mere question-begging to suggest that people's intuitions fail to deal with the large numbers correctly; it is a well-known fact that people's intuitions struggle to deal with such cases! This is commonly referred to as Scope Insensitivity - it occurs even in cases where the outcome 'should' be obvious.

[anonymous]

I don't agree with the response suggested (recognising that it cites an article I co-authored). The DALY and QALY metrics imply the ARC. It seems reasonable that these metrics or ones similar are in some sense definitive of EA in global poverty and health.

Then the question is whether it is correct to aggregate small benefits. It's fair to say there is philosophical disagreement about this, but there is nevertheless (in my view) a strong case to be made that the fully aggregative view is correct. One way to approach this, probably the dominant way in moral philosophy, is to figure out the implications of philosophical views and then to choose between the various counterintuitive implications these have. E.g. you could say that the badness of minor ailments does not aggregate; then you choose between the counterintuitive implications of this vs the aggregative view. This seems to be a bad way to go about it because it starts at the wrong level.

What we should do is assess at the level of rationales. The aggregative view has a rationale, viz. (crudely) that more of a good thing is better. Clearly, it's better to cure lots of mild ailments than it is to cure one. The goodness of doing so does not diminish: curing one additional person is always as valuable no matter how many other people you have cured. If so, it follows that curing enough mild ailments must eventually be better than curing one really bad ailment. A response to this needs to criticise this rationale, not merely point out that it has a weird-seeming implication. Lots of things have weird-seeming implications, including e.g. quantum physics and evolution. Pointing out that quantum physics has counterintuitive implications should not be the primary level at which we debate the truth of quantum physics.

See this - http://spot.colorado.edu/~norcross/Comparingharms.pdf

Thanks for the link, Halstead. A very good article, but it doesn't totally cure my unease with aggregating across individuals. But I don't expect to ever find anything that is fully in line with intuitions, as I think intuitions are contradictory. :-)

This is, IMO, a pretty unpersuasive argument, at least if you are willing, like me, to bite the bullet that SUFFICIENTLY many small gains in utility could make up for a few large gains. I don't even find this particularly difficult to swallow. Indeed, I can explain away our feeling that somehow this shouldn't be true by appealing to our inclination (as a matter of practical life navigation) to round down sufficiently small hurts to zero.

Also, I would suggest that many of the examples that seem problematic are deliberately rigged so that the overt description (a world with many people with a small amount of positive utility) presents the situation one way, while the flavor text is phrased so as to trigger our empathetic/what's-it-like response as if it didn't satisfy the overt description. For instance, if we remove the flavor about it being a very highly overpopulated world and simply said to consider a universe with many, many beings each with a small amount of utility, then finding that superior no longer seems particularly troubling. It just states the principle allowing addition of utilities in the abstract. However, sneak in the flavor text that the world is very overcrowded, and the temptation is to imagine a world which is ACTIVELY UNPLEASANT to be in, i.e., one in which people have negative utility.

More generally, I find this kind of consideration far more compelling at convincing me that I have very poor intuitions for comparing the relative goodness/badness of some kinds of situations, and that I had better eschew any attempt to rely MORE on those intuitions and dive into the math. In particular, the worst response I can imagine is to say: huh, wow, I guess I'm really bad at deciding which situations are better or worse in many circumstances; indeed, one can find cases where A seems better than B, B better than C, and C better than A considered pairwise; guess I'll throw over this helpful formalism and just use my intuition directly to evaluate which states of affairs are preferable.

  1. C^^ is better than C^, which is better than C;
  2. C^^ is better than B;
  3. B is better than C and C^.

But these three rankings are inconsistent, and one of them should go. To endorse all of them means to breach transitivity. Is EA committed to rejecting transitivity? This view is very controversial, and if EA required it, this would need serious inquiry and defence.

These rankings do not seem inconsistent to me? C^^ > B > C^ > C

edit: substituted '^' for '*' due to formatting issues.
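
A quick sketch checking the suggested ordering against the three quoted rankings (the numeric scores are arbitrary; only their order matters):

```python
# Assign arbitrary scores consistent with the proposed order C^^ > B > C^ > C.
score = {"C^^": 4, "B": 3, "C^": 2, "C": 1}

ranking_1 = score["C^^"] > score["C^"] > score["C"]                 # 1. C^^ > C^ > C
ranking_2 = score["C^^"] > score["B"]                               # 2. C^^ > B
ranking_3 = score["B"] > score["C"] and score["B"] > score["C^"]    # 3. B > C and C^

print(ranking_1, ranking_2, ranking_3)  # True True True -> jointly satisfiable, no intransitivity
```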

I cannot see the inconsistency there either; the whole section seems a bit strange, as his "no death" example starts also containing death again about halfway through.

(Note your first line seems to be missing some *'s)

(Note your first line seems to be missing some *'s)

Fixed, thanks.

I also had some difficulty understanding what he was arguing here.

From my perspective as an evolutionary psychologist, I wouldn't expect us to have reliable or coherent intuitions about utility aggregation for any groups larger than about 150 people, for any time-spans beyond two generations, or for any non-human sentient beings.

This is why consequentialist thought experiments like this so often strike me as demanding the impossible of human moral intuitions -- like expecting us to be able to reconcile our 'intuitive physics' concept of 'impetus' with current models of quantum gravity.

Whenever we take our moral intuitions beyond their 'environment of evolutionary adaptedness' (EEA), there's no reason to expect they can be reconciled with serious consequentialist analysis. And even within the EEA, there's no reason to expect our moral intuitions will be utilitarian rather than selfish + nepotistic + in-groupish + a bit of virtue-signaling.

If EAs have shitty lives, many fewer people will become EAs. EAs should give up to the limit at which they can still have the same or slightly better lives than their peers, by being more efficient with their money in purchasing happiness. Modulo other considerations such as reinvesting in human capital for giving later, etc. This will also lead to greater productivity, which is often too heavily discounted in people's calculations on the basis of the introspection illusion: thinking that their future self will be better able to tank the hits from a crappier life for the sake of their values than they actually will be.
