By John Halstead, Stefan Schubert, Joseph Millum, Mark Engelbert, Hayden Wilkinson, and James Snowden. Cross-posted from the Centre for Effective Altruism blog. A direct link to the article can be found here.
Abstract
In this paper, we discuss Iason Gabriel’s recent piece on criticisms of effective altruism. Many of the criticisms rest on the notion that effective altruism can roughly be equated with utilitarianism applied to global poverty and health interventions which are supported by randomised controlled trials and disability-adjusted life year estimates. We reject this characterisation and argue that effective altruism is much broader from the point of view of ethics, cause areas, and methodology. We then enter into a detailed discussion of the specific criticisms Gabriel discusses. Our argumentation mirrors Gabriel’s, dealing with the objections that the effective altruist community neglects considerations of justice, uses a flawed methodology, and is less effective than its proponents suggest. Several of the criticisms do not succeed, but we also concede that others involve issues which require significant further study. Our conclusion is thus twofold: the critique is weaker than suggested, but it is useful insofar as it initiates a philosophical discussion about effective altruism and highlights the importance of more research on how to do the most good.
Couldn't you just counter that if EA had existed back then, and had only just started trying to figure out how to do the most good, it would not have supported the abolitionist movement, because of difficult expected-value calculations and because its resources were committed elsewhere? However, if the EA community had existed back then and had matured to the stage where something like OpenPhil existed as well (OpenPhil being an EA org, for those reading who don't know), then it would very likely have supported cost-effective campaigns in aid of the abolitionist movement.
The EA community, like all communities, is in flux. I don't like the argument "If it had existed back then, it wouldn't have supported the abolitionist movement, and therefore it has problems" — with the implicit suggestion that it is bad because it thinks in a naughty, quantification-biased way. This sounds like an unfair mischaracterisation to me, especially since you can cherry-pick what the EA community was like at a particular time (how much it knew, how many resources it had) specifically so that it wouldn't have supported the abolitionist movement, and then claim the reason is quantification bias.
A better framing is: "if EA as it existed in 2012/2050/20xy, with x resources, had existed back then, it would not have supported the abolitionist movement." Now the factors of time and resources may well be a much better explanation for why EA wouldn't have supported the abolitionist movement than quantification bias.
Consider the EA community of 2050, which would have decades of accumulated knowledge on how to deal with harder-to-quantify causes.
I suspect that if the EA community of 2050 had the resources of the YMCA or United Way and existed in the 18th century, it would have supported the hell out of the abolitionist movement.