By John Halstead, Stefan Schubert, Joseph Millum, Mark Engelbert, Hayden Wilkinson, and James Snowden. Cross-posted from the Centre for Effective Altruism blog. A direct link to the article can be found here.

Abstract

In this paper, we discuss Iason Gabriel’s recent piece on criticisms of effective altruism. Many of the criticisms rest on the notion that effective altruism can roughly be equated with utilitarianism applied to global poverty and health interventions which are supported by randomised controlled trials and disability-adjusted life year estimates. We reject this characterisation and argue that effective altruism is much broader from the point of view of ethics, cause areas, and methodology. We then enter into a detailed discussion of the specific criticisms Gabriel discusses. Our argumentation mirrors Gabriel’s, dealing with the objections that the effective altruist community neglects considerations of justice, uses a flawed methodology, and is less effective than its proponents suggest. Several of the criticisms do not succeed, but we also concede that others involve issues which require significant further study. Our conclusion is thus twofold: the critique is weaker than suggested, but it is useful insofar as it initiates a philosophical discussion about effective altruism and highlights the importance of more research on how to do the most good.

 


Comments

Upvoted. Maybe this is just typical of academic style, but when you address Gabriel you seem to attribute the points of view presented to him, whereas when I read the original paper I got the impression he was collating and clarifying some criticisms of effective altruism so they'd be coherent enough to effect change. One thing about mainstream journalism is that it's usually written for a specific type of audience the editor has in mind, even if the source claims to be seeking a general audience, and so they spin things a certain way. While criticisms of EA definitely aren't what I'd call sensationalistic, they're written in the style of a rhetorical list of likes and dislikes about the EA movement. It's taken for granted that the implied position of the author is somehow a better alternative to what EA is currently doing, as if no explanation were needed.

Gabriel fixes this by writing up the criticisms of EA in a way that lets us understand what about the movement would need to change to satisfy critics, if we were indeed to agree with them. Really, except for the pieces published in the Boston Review, I feel like other criticisms of EA were written not for EA at all, but rather as reviews of EA for other do-gooders, warning them to stay away from the movement. It's not the job of critics to solve all our problems for us, but for a movement that is at least willing to change in the face of criticism, it's frustrating that nobody takes us up on the opportunity to point out constructively what blind spots we may have.

[anonymous]

Thanks for the comment! We do go to some length to make clear that we're unsure whether Gabriel himself endorses the objections. We're pretty sure he endorses some (systemic change, counterfactuals), but less sure about the others.

Gabriel argues that the effective altruism community should heed the issue of moral disagreement

Nobody told him that MacAskill has done some of the most serious recent work on this?

Typo at the bottom of page 10 (should be "two problems" not "two problem").

This is a good paper and well done to the authors.

I think section 3 is very weak. I am not flagging this as a flaw in the argument, just as the area where I see the most room for improvement in the paper and/or the most need for follow-up research. The authors do say that more research is needed, which is good.

Some examples of what I mean when I say the argument is weak:

  • The paper says it is "reasonable to believe that AMF does very well on prioritarian, egalitarian, and sufficientarian criteria". "Reasonable to believe" is not a strong claim. No one has made a concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about, and to evaluate charities on those metrics. This could be done, but is not happening.
  • The paper says Iason "fail[s] to show that effective altruist recommendations actually do rely on utilitarianism", but the paper also fails to show that effective altruist recommendations actually do not rely on utilitarianism.
  • Etc.

Why I think more research is useful here:

  • Because when the strongest case you can make for EA to people with equality as a core moral intuition begins with "it is reasonable to believe . . .", it is very hard to make EA useful to such people. For example, when I meet people new to EA who care a lot about equality, arguing that 'if you care about minimising suffering, this 'AMF' thing comes out on top, and it is reasonable to assume that if you care about equality it could also be at the top, because it is effective and helps the poorest' carries a lot less weight than being able to say: 'hey, we funded a bunch of people who, like you, care foremost about equality, to map out their values and rank charities, and this came out on top.'

Note: cross-posting a summarised comment on this paper from a discussion on Facebook: https://www.facebook.com/groups/798404410293244/permalink/1021820764618273/?comment_id=1022125664587783

No one has made a concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about, and to evaluate charities on those metrics.

This appears to be demonstrably false, and in very strong terms, given how strong a claim you've made and how I only need to find one counterexample to prove it wrong. We have many non-utilitarian egalitarian luminaries making a concerted effort to come up with exactly the metrics that would tell us, based on egalitarian/prioritarian principles, which charities/interventions we should prioritize:

  • Adam Swift: Political theorist and sociologist specializing in liberal egalitarian ethics, family values, communitarianism, school choice, and social justice.

  • Ole Norheim: Harvard physician and medical ethics professor working on distributive theories of justice and fair priority setting in low- and high-income countries. He heads the Priority Setting in Global Health (2012-2017) research project, which aims to do exactly what you claimed nobody is working on.

  • Alex Voorhoeve: Egalitarian theorist and member of the Priority Setting in Global Health project; featured on the BBC and, unsurprisingly, has co-authored with Norheim.

  • Nir Eyal: Harvard Global Health and Social Medicine professor specializing in population-level bioethics. He is currently working on a book that defends an egalitarian consequentialist (i.e. instrumentally egalitarian) framework for evaluating questions in bioethics and political theory.

All of these folks are mentioned in the paper.

I don't want to call these individuals effective altruists without having personally seen or heard them self-identify as such, but they have all publicly pledged 10% of their lifetime income to effective charities via Giving What We Can.

So if the old adage "actions speak louder than words" still rings true, then these non-utilitarians are far "more EA" than any number of utilitarians who publicly proclaim that they are part of effective altruism but then do nothing.

And none of this should be surprising. The 2015 EA Survey shows that only 56% of respondents identify as utilitarian. The linked survey results argue that this sample accurately estimates the actual EA population, which would mean that ~44% of all EAs are non-utilitarian. That's a lot. So even if utilitarians are the largest single group, the rest of us non-utilitarian EAs obviously aren't just lounging around.

Update: Nir Eyal very much appears to self-identify as an effective altruist despite being a non-utilitarian. See his interview with Harvard EA, specifically about non-utilitarian effective altruism, and this article on effective altruism from 2015. Wikipedia even mentions him as a "leader in Effective Altruism".

[anonymous]

Hi,

  1. We reference a number of lines of evidence suggesting that donating to AMF does well on sufficientarian, prioritarian, and egalitarian criteria; see footnotes 23 and 24. Thus, we provide evidence for our conclusion that 'it is reasonable to believe that AMF does well on these criteria'. This, of course, is epistemically weaker than a claim such as 'it is certain that AMF ought to be recommended by prioritarians, egalitarians, and sufficientarians'. You seem to suggest that concluding with a weak epistemic claim is inherently problematic, but that can't be right. Surely, if the evidence provided only justifies a weak epistemic claim, making a weak epistemic claim is entirely appropriate.

  2. You seem to criticise us on the grounds that the movement has not yet provided a comprehensive algorithm mapping values onto actions. But arguing that the movement is failing is very different from arguing that the paper fails on its own terms. It is not as though we frame the paper as: "here is a comprehensive account of where you ought to give if you are an egalitarian or a prioritarian". As you say, more research is needed, but we already say this in the paper.

  3. Showing that 'Gabriel fails to show that EA recommendations rely on utilitarianism' is a different task from showing that 'EA recommendations do not rely on utilitarianism'. Showing that an argument for a proposition P fails is different from showing that not-P.

An interesting exchange, although I feel the rebuttal somewhat misrepresents Gabriel's argument regarding systemic change. A steelman version of his argument would factor in quantification bias, pointing out that, due to extreme uncertainty in the expected-value estimates for some systemic-change interventions, something like AMF would usually come out on top.

I read him as saying that the EA community would not have supported e.g. the abolitionist movement were it around then, precisely because of the difficulties in EV calculations, and I agree with him on that.

(I also think that OpenPhil does very important work in that direction)

I read him as saying that the EA community would not have supported e.g. the abolitionist movement were it around then, precisely because of the difficulties in EV calculations, and I agree with him on that.

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

Turning to current issues, ending factory farming is also a cause that likely requires large scale social change through advocacy, and lots of EAs work on that.

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

And Bentham was ahead of the curve on:

  • The abolition of slavery
  • Legal equality of the sexes
  • The first known argument for the legalization of homosexuality in England
  • Animal rights
  • The abolition of the death penalty and corporal punishment (including of children)
  • Separation of church and state
  • Freedom of speech

precisely because of the difficulties in EV calculations

The extensive work on factory farming is certainly one counterexample, but the interest in artificial intelligence may be a more powerful one on this point.

Perhaps "systemic change bias" needs to be coined, or something to that effect, to be used in further debates.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Those causes get criticized because of how hard they are to quantify. The relatively neglected thing is recognizing both strands and arguing for Goldilocks positions between 'linear, clearly evidence-backed, non-systemic charity' and 'far too radical for most of those interested in systemic change.'

Couldn't you just counter that if EA were around back then, and had just started out trying to figure out how to do the most good, it would not have supported the abolitionist movement because of difficult EV calculations and because it was spending its resources elsewhere? However, if the EA community had existed back then and had matured a bit, to the stage that something like OpenPhil existed as well (OpenPhil of course being an EA org, for those reading who don't know), then it would very likely have supported cost-effective campaigns in aid of the abolitionist movement.

The EA community, like all entities, is in flux. I don't like hearing "if it existed back then, it wouldn't have supported the abolitionist movement, and therefore it has problems", with the implicit suggestion that it is bad because it thinks in a naughty, quantification-biased way. This sounds like an unfair mischaracterization to me, especially given that you can just cherry-pick what the EA community was like at a particular time (how much it knew) and how many resources it had, specifically so that it wouldn't have supported the abolitionist movement, and then claim the reason is quantification bias.

What's better is "if EA existed back then as it existed in 2012/2050/20xy, with x resources, then it would not have supported the abolitionist movement", and now the factors of time and resources might very well be a much better explanation for why EA wouldn't have supported the abolitionist movement than quantification bias.

Consider the EA community of 2050, which would have decades' worth of accumulated knowledge on how to deal with harder-to-quantify causes.

I suspect that if the EA community of 2050 had the resources of the YMCA or United Way and existed in the 18th century, it would have supported the hell out of the abolitionist movement.

I notice this in your paper:

He also mentions that cost-effectiveness analysis ignores the significance of ‘iteration effects’ (page 12)

Gabriel uses 'iterate' in his ultra-poverty example, so I'm fairly certain the sense in which he uses it there is what he was referring to here:

Therefore, they would choose the program that supports literate men. When this pattern of reasoning is iterated many times, it leads to the systematic neglect of those at the very bottom, a trend exemplified by how EAs systematically neglect focusing on the very bottom in the first world. This is unjust (with my edits)

So it's the same with using the DALY to assess cost-effectiveness. He is concerned that if you scale up or replicate a program that is cost-effective according to DALY calculations, you would ignore iteration effects whereby a subset of those receiving the treatment might be systematically neglected, and that this goes against principles of justice and equality. Therefore, using cost-effectiveness as a means of deciding what is good or which charity to fund is on morally shaky ground (according to Gabriel). This is how I understood him.

[anonymous]

Thanks for this. I have two comments. Firstly, I'm not sure he's making a point about justice and equality in the 'quantification bias' section. If his criticism of DALYs works, then it works on straightforward consequentialist grounds: DALYs are the wrong metric of welfare. (On this, see our footnote 41.)

Secondly, the claim about iteration effects is neither necessary nor sufficient to get to his conclusion. If the DALY metric inappropriately ignores hope, then it doesn't really matter whether a decision about healthcare resource distribution on the basis of DALYs is made once or is iterated. Either way, DALYs would ignore an important component of welfare.

Put this way, I change my mind and agree it is unclear. However, to make the paper stronger, I would have included something akin to what you just wrote, to make it clear why you think Gabriel's use of "iteration effects" is unclear and not the same as his usage in the 'priority' section.

I'm not sure how important clarifying something like this is for philosophical argumentation, but for me, this was the one nagging kink in what is otherwise fast becoming one of my favourite "EA-defense" papers.

[anonymous]

Thanks for the feedback. From memory, I think at the time we reasoned that since iteration didn't do any work in his argument, that couldn't be what he meant by it.
