
Hilary Greaves and Theron Pummer have put together an excellent collection of essays on effective altruism, which will be coming out soon. 

Effective altruism is still widely misunderstood in academia, so I took the opportunity to write up my thoughts on how effective altruism should be defined and why, and to respond to some of the most common misconceptions about effective altruism. I hope that having a precise definition will also help guard against future dilution or drift of the concept, or confusion regarding what effective altruism is about. You can find the essay (with some typos that will be corrected) here. Below I’ve put together an abridged version, highlighting the points that I’d expect to be most interesting for the Forum audience and trying to cut out some philosophical jargon; for a full discussion, though, the essay is better.


The definition of effective altruism

I suggest two principal desiderata for the definition. The first is to match the actual practice of those who would currently describe themselves as engaging in effective altruism. The second is to ensure that the concept has as much public value as possible. This means, for example, we want the concept to be broad enough to be endorsable by or useful to many different moral views, but still determinate enough to enable users of the concept to do more to improve the world than they otherwise would have done. This, of course, is a tricky balancing act.

My proposal for a definition (which makes CEA’s definition a little more rigorous) is as follows:

Effective altruism is:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.

(i) refers to effective altruism as an intellectual project (or ‘research field’); (ii) refers to effective altruism as a practical project (or ‘social movement’).

The definition is:

  • Non-normative. Effective altruism consists of two projects, rather than a set of normative claims.
  • Maximising. The point of these projects is to do as much good as possible with the resources that are dedicated towards it.
  • Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on careful rigorous argument and theoretical models as well as data.
  • Tentatively impartial and welfarist. As a tentative hypothesis or a first approximation, doing good is about promoting wellbeing, with everyone’s wellbeing counting equally. More precisely: for any two worlds A and B with all and only the same individuals, of finite number, if there is a one-to-one mapping of individuals from A to B such that every individual in A has the same wellbeing as their counterpart in B, then A and B are equally good.[1] (A symbolic rendering of this condition follows the list.)
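One way to put that last condition symbolically (a sketch only; the symbols $I$, $w_A$, $w_B$, $\sigma$, and $\approx$ are my own notation, not the essay’s): let $I$ be the finite set of individuals who exist in both worlds $A$ and $B$, and let $w_A(i)$ and $w_B(i)$ be individual $i$’s wellbeing in each world. Then the condition says

$$
\bigl(\exists\ \text{a bijection}\ \sigma : I \to I\ \text{such that}\ \forall i \in I,\ w_A(i) = w_B(\sigma(i))\bigr) \;\Longrightarrow\; A \approx B,
$$

where $A \approx B$ means that $A$ and $B$ are equally good. Because $\sigma$ may permute individuals, this builds in impartiality: swapping who gets which wellbeing level makes no difference to how good the world is.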

The ideas that EA is about maximising and about being science-aligned (understood broadly) are uncontroversial. The two more controversial aspects of the definition are that it is non-normative, and that it is tentatively impartial and welfarist. 


Effective Altruism as non-normative

The definition could have been normative by making claims about how much one is required to sacrifice: for example, it could have stated that everyone is required to use as much of their resources as possible in whatever way will do the most good; or it could have stated some more limited obligation to sacrifice, such as that everyone is required to use at least 10% of their time or money in whatever way will do the most good.

There are four reasons why I think the definition shouldn’t be normative: 

(i) a normative definition was unpopular among leaders of the community; in a survey of such leaders in 2015, 80% of respondents stated that they thought the definition should not include a sacrifice component, and only 12.5% thought it should.

(ii) the normative position is endorsed only by a subset of the community; the 2017 survey of 1,843 members of the effective altruism community included the question, ‘Do you think of Effective Altruism more as an “opportunity” or an “obligation”?’ In response, 56.5% chose ‘moral duty’ or ‘obligation’, and 37.7% chose ‘opportunity’ (there was no option that year to choose ‘both’).

(iii) the non-normative definition is far more ecumenical among moral views. Most plausible moral views would agree that there is some reason to promote the good, and that wellbeing is of some value, and therefore that the question of how one can do the most to promote welfarist value with a given unit of resources needs to be resolved as one aspect of answering the question of how to live a morally good life. In contrast, any sort of claim about our obligations to maximise the good will be much more controversial, particularly if we try to make a general statement covering people of very different income levels and personal situations. 

(iv) finally, a non-normative definition focuses attention on the most distinctive aspect of effective altruism: the open question of how we can use resources to improve the world as much as possible. This question is much more neglected, and arguably more important, than the question of how much and in what form altruism is required of one.


Effective Altruism as tentatively impartial and welfarist

The second controversial decision is about what we count as ‘the good’ that effective altruism is trying to promote; my proposed definition is tentatively impartial and welfarist. 

There is a wide spectrum of alternatives that I could have gone with.  On the broad end of the spectrum, we could define effective altruism as the attempt to do the most good, according to whatever view of the good the individual in question adheres to. On the narrow end of the spectrum, we could define effective altruism as the attempt to do the most good on one very particular understanding of the good, such as total hedonistic utilitarianism. Either choice faces severe problems. If we allow any view of the good to count, then white supremacists could count as practicing effective altruism, which is a conclusion that we do not want. If we restrict ourselves to one particular view of the good, then we lose any claim to ecumenicism, and we also misrepresent the effective altruism community itself, which has vibrant disagreement over what good outcomes consist in.

My preferred solution is tentative impartial welfarism, defined above. This excludes partialist views on which, for example, the wellbeing of one’s co-nationals counts for more than that of foreigners, and excludes non-welfarist views on which, for example, biodiversity or art has intrinsic value. But it includes utilitarianism, prioritarianism, sufficientarianism, egalitarianism, different views of population ethics, different accounts of wellbeing (including views on which being able to enjoy art and a flourishing natural environment is partially constitutive of a good life), and different views of how to make comparisons of wellbeing across different species.

This welfarism is ‘tentative’, however, insofar as it is taken to be merely a working assumption. The ultimate aim of the effective altruist project is to do as much good as possible; the current focus on wellbeing rests on the idea that, given the current state of the world and our incredible opportunity to benefit others, the best ways of promoting welfarist value are broadly the same as the best ways of promoting the good. If that view changed and those in the effective altruism community were convinced that the best way to do good might well involve promoting non-welfarist goods, then I’d think we should revise the definition to simply talk about ‘doing good’ rather than ‘benefiting others’.

I believe that this understanding is supported by the views of EA leaders. In the 2015 survey of EA leaders referred to earlier, 52.5% of respondents were in favour of the definition including welfarism and impartiality, with 25% against. So the inclusion of impartial welfarism has broad support, though not as strong support as other aspects of the definition. And when we look at the leading EA organisations, they are firmly focused on promoting wellbeing rather than promoting non-welfarist sources of value.

What’s more, this restriction does little to reduce effective altruism’s ecumenicism: wellbeing is part of the good on most or all plausible moral views. Effective altruism is not claiming to be a complete account of the moral life. But, for any view that takes us to have reasons to promote the good, and that says wellbeing is part of the good, the project of working out how we can best promote wellbeing will be important and relevant.



[1] Note that, read literally, the use of ‘benefit others’ in CEA’s definition would rule out some welfarist views, such as the view on which one can do good by creating good lives but that this does not involve benefiting those who would otherwise not exist. In this case, philosophical precision was sacrificed for readability.

Comments (22)

I think this is an excellent definition.

Broadly, I understand it as:

(1) science of applying ethics given limited resources (tentatively impartial welfarist ethics)

(2) deployment of (1).

The omission of normativity has a fifth benefit of clarifying the difference from consequentialism.

What does it mean to be "pro-science"? In other words, what might a potential welfarist, maximizing, impartial, and non-normative movement that doesn't meet this criterion look like?

I ask because I don't have a clear picture of a definition that would be both informative and uncontroversial. For instance, the mainstream scientific community was largely dismissive of SIAI/MIRI for many years; would "proto-EAs" who supported them at that time be considered pro-science? I assume that excluding MIRI does indeed count as controversial, but then I don't have a clear picture of what activities/causes being "pro-science" would exclude.


edit: Why was this downvoted?

As a scientist, I consider science a way of learning about the world, and not what a particular group of people say. I think the article is fairly explicit about taking a similar definition of "science-aligned":

(i) the use of evidence and careful reasoning to work out...

(...)

  • Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on careful rigorous argument and theoretical models as well as data.

There is usually a vast body of existing relevant work on a topic across various fields of research. Trying to seriously engage with existing work is part of being scientific, and the opinions or consensus of researchers in the field are a form of data one should not ignore. You can disagree after serious consideration without being unscientific. Simply coming to your own conclusions without engaging with existing work, or acting based on emotion or gut feelings acquired without ever thinking about them critically, would be unscientific.

A part of being scientific is also being open to and trying to learn from critiques of your work. It is true that scientists often make bad critiques for bad (unscientific) reasons, and it can take quite a lot of effort to understand the social and historical reasons behind consensus opinions in particular fields on particular issues. I don't think most EAs would think having a certain degree of support from a particular group of scientists is the relevant criterion.

A possible reason for the downvote is that your initial question 'What does it mean to be "pro-science"?' is explicitly answered in the article, and it's not immediately clear that you are acknowledging that and really asking, 'Isn't everything science-aligned under this definition?'

My quick take on why this was downvoted: someone may have glanced at it quickly and assumed you were being negative towards MIRI or EA.

I think by "science-aligned", the post means using the principles and lessons of the scientific method and similar tools, rather than agreeing with "the majority of scientists" or similar.

The mainstream scientific community also seems likely to be skeptical of EA, but that doesn't mean EA would therefore have to be similarly skeptical of itself.

That said, whether one is actually following the scientific method (and similar tools) in some practices, especially ones that aren't backed by many other communities, can of course be up for debate.

Thanks for the post, I really like the attempt to use survey data to ensure that the definition reflects the views of the leaders and members of the EA community.

I agree that the maximizing nature of effective altruism is an important part of its public value. EA has made most of its strides in my mind because it wasn't satisfied with merely providing a non-zero amount of help to people. Although we often use examples like Play Pumps that were probably net negative, the founders of GiveWell would have had a much easier time if they were just trying to find net positive charities.

However, I'm not sure that maximizing is as clearly uncontroversial as you believe. I would guess that if surveys asked about it, leaders would be fairly united behind it, but it would get something in the range of 50-75% support from the community at large.

I can do an informal poll of this group and report back.

I'd also be interested in a discussion of the limits to maximizing. For example, if an EA is already working on something in the 80th percentile of effectiveness, do they find it compelling to switch to something in the 90th percentile?

My informal poll of the Effective Altruism Polls group asked

Does your working definition of effective altruism define it as "maximizing", at least in large part?

And got 30 votes Yes and 3 votes No. There are various problems with the informality of the poll, but I'm updating towards this being less controversial than I thought.

I was one of the non-maximizers in the poll. To expand on what I wrote there, I hold the following interpretation of this bit of Will's article:

(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.

When I see "the use of the findings from (i)", I don't take that to necessarily mean "acting to maximize good with a high level of certainty". Instead, I interpret it as "using what we understand about maximizing good in order to make decisions that help you do more good, even if you are not actively trying to maximize your own impact".

To use an example:

Let's say that someone donates 10% of their income to AMF, because they read GiveWell's research and believe that AMF is among the best opportunities in global health/development.

Let's say that this person also hasn't carried out explicit expected-value calculations. If you ask them whether they think AMF is the best option for doing good, they say:

"Probably not. There are hundreds of plausible ways I could donate my money, and the odds that I've chosen the right one are quite low. But I don't want to spend a lot of time trying to combine my values and the available evidence to find a better option, because I don't think that I'll increase my certainty/impact enough for that use of time to feel valuable."

I still consider this person to be "practicing effective altruism". There are probably ways they could do more good even accounting for the time/energy/happiness costs of learning more. One could think of them as slightly "lazy" in their approach. Even so, they used evidence and reasoning to evaluate GiveWell's AMF writeup and determine that AMF was an unusually good option, compared to other options they knew about. They then acted on this finding to try improving the world through a donation. This feels EA-aligned to me, even if they aren't actually trying to "maximize" their impact.

There is a spectrum of "maximization", ranging from people who follow EA charity evaluators' suggestions without doing any original research to people who conduct hundreds of hours of research per year and are constantly seeking better options. I think that people in EA cover the full spectrum, so my definition of "EA" doesn't emphasize "maximizing" -- it's about "using evidence and reason to do better, even if you stop trying to do better after a certain point, as long as you aren't committing grave epistemic sins along the way".

...or something like that. I haven't yet worked this out rigorously.

After more thought about definitions, I've come to believe that the presumption of authority can be a bit misrepresentative.

I'm all for coming up with and encouraging definitions of Effective Altruism and other important topics, but the phrase "The definition of effective altruism" can be seen to presuppose authority and unanimity.

I'm sure that even now that this definition has been proposed, alternative definitions will still be used.

Of course if there were to be one authority on the topic it would be William MacAskill, but I think that even if there were only one main authority, the use of pragmatic alternative definitions could only be discouraged. It would be difficult to call them incorrect or invalid. Dictionaries typically follow use, not create it.

Also, to be clear, I have this general issue with a very great deal of literature, so it's not like I'm blaming this piece because it's particularly bad, but rather, I'm pointing it out because this piece is particularly important.

Maybe there could be a name like "The academic definition...", "The technical definition", or "The definition according to the official CEA Ontology". Sadly these still use "The", which I'm hesitant about, but they are at least narrower.

I've been in the EA community since 2012, and I entered it taking to heart the intentional stance of 'doing the most good'. Back then, a greater proportion of the community wanted EA to primarily be about a culture of effective, personal, charitable giving. The operative word of that phrase is 'personal': even though there are foundations behind the EA movement, like the Open Philanthropy Project, whose endowments are greater than the rest of the EA community combined might ever hope to earn to give, for various reasons a lot of EAs still think it's important that EA emphasize a culture of personal giving. I understand and respect that stance, and respect its continued presence in EA. I wouldn't even mind if it became a much bigger part of EA once again. This is a culture within EA that frames effective altruism as more of an obligation. Yet personally I believe EA is more effective, and does more good, by pursuing good in a more diverse array of ways than personal giving alone. I am glad EA has evolved in that direction, and so I think it's fitting that this definition of EA reflects that.

I've been thinking more that we may want to split up "Effective Altruism" into a few different areas. The main EA community should have an easy enough time realizing what is relevant, but this could help organize things for other communities.

As mentioned in this piece, the community's take on EA may be different from what we may want for academics. In that case one option would be to distill the main academic-friendly parts of EA into a new term in order to interface with the academic world.

I've been thinking more that we may want to split up "Effective Altruism" into a few different areas. The main EA community should have an easy enough time realizing what is relevant, but this could help organize things for other communities.

People have talked about "splitting up" EA in the past to streamline things, while other people worry that doing so might needlessly balkanize the community. My own past observation of attempts to 'split up' EA into specialized compartments is that, more than being good or bad, they didn't have much consequence at all. So I wouldn't recommend that more EAs make another uncritical attempt at doing so, if for no other reason than that it strikes me as a waste of time and effort.

As mentioned in this piece, the community's take on EA may be different from what we may want for academics. In that case one option would be to distill the main academic-friendly parts of EA into a new term in order to interface with the academic world.

The heuristic I use to think about this is to leave the management of the relationship between the EA community and "Group X" to members of the EA community who are part of Group X. That heuristic could break down in some places, but it seems to have worked okay so far for different industry groups. For EA to think of 'academia' as an industry like 'the software industry' is probably not the most accurate thing to do. I just think the heuristic fits because EAs in academia will, presumably, know how to navigate academia on behalf of EA better than the rest of us will.

I think what has worked best is for different kinds of academics in EA to lead the effort to build relationships with their respective specializations, within both the public and private sectors (there is also the non-profit sector, but that is something EA is basically built out of to begin with). To streamline this process, I've created different Facebook groups for networking and discussions for EAs in different respective profession/career streams, as part of an EA careers public resource sheet. It is a public resource, so please feel free to share and use it however you like.

So I wouldn't recommend that more EAs make another uncritical attempt at doing so, if for no other reason than that it strikes me as a waste of time and effort.

I could imagine that making a spin-off could be pretty simple; it's keeping all the parts integrated that could take a lot of time and effort. While this may not have been worth it yet, if at some point in the future others estimate the costs of keeping things uniform to be high, spin-offs seem pretty reasonable to me.

The heuristic I use to think about this is to leave the management of the relationship between the EA community and "Group X" to members of the EA community who are part of Group X.

In general I agree, though I could imagine many situations where people from CEA or similar may want to be somewhat involved to make sure things don't go wrong.

In this case, I'd assume that William MacAskill is in a really good position to appeal to much of academia. I didn't mean "absolutely all" of academia before, sorry if that wasn't clear.

Thanks for the spreadsheet by the way. How have those groups been going? It seems like an interesting project.

This is similar to how I describe effective altruism to those whom I introduce to the idea. I'm not in academia, and so I mostly introduce it to people who aren't intellectuals. However, I can trace some of the features of your more rigorous definition in the one I've been using lately. It's: " 'effective altruism' is a community and movement focused on using science, evidence, and reason to try solving the world's biggest/most important problems". It's kind of clunky, and it's imperfect, but it's what I've replaced "to do the most good" with, which, stated generically, presents the understandable problems you went over above.

The EA literature is missing an elevator pitch.
As a first approximation, I'd suggest:

EA is a diligent approach to charity and ethical goals

Typical interests:

• Global poverty
• Animal suffering
• Existential risks
• Earning to give

Characteristics:

• Focuses on consequences
• Maximises the good done per unit of time/money
• Impartial to location and future generations
• Attentive to opportunity costs
• Often measures benefits
• ITN framework (sketched quantitatively just below this list):
— Importance: e.g. number of lives saved, amount of suffering decreased
— Tractability: additional resources go a long way towards addressing the issue
— Neglectedness: few other people are working on the problem
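One common way of making the ITN factors quantitative (a sketch roughly following the 80,000 Hours formulation; the exact choice of units is a modelling assumption, not something the list above specifies) is to factor the marginal cost-effectiveness of working on a problem so that the intermediate units cancel:

$$
\underbrace{\frac{\text{good done}}{\text{extra dollar}}}_{\text{cost-effectiveness}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{Neglectedness}}
$$

Multiplying the three factors estimates how much good an additional dollar (or hour) does on that problem, which is why, all else equal, a larger, more tractable, and more neglected problem scores higher.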

https://kungfuhobbit.medium.com/effective-altruism-64ec03b00636

I've recognized the tension between determining which efforts have maximal benefit and the time it takes to reach those conclusions. It's caused a lot of paralysis in my actually taking action. How do active EAs factor this in, especially when acting independently? (Though I'm hoping an EA community gets going in Detroit soon.) I'm thinking of the phrase:

"don't let perfect be the enemy of good".

I've developed an exclusion criterion for entryists into EA whom EA, as a community, would by definition see as bad actors, e.g. white supremacists. One set of philosophical debates within EA, and with other communities, is over how far, and how fast, the circle of moral concern should grow. But this common denominator seems to imply a baseline agreement, common to all of EA, that we would be opposed to people who seek to rapidly and dramatically shrink the circle of moral concern of the current human generation. So, to the extent someone:

1. shrinks the circle of moral concern;

2. does so to a great degree/magnitude;

3. does so very swiftly;

EA as a community should be wary of uncritically tolerating them as members of the community.

Hm... I appreciate what you may be getting at, but I think that post itself doesn't exactly say maximizing is bad, but rather that the specific thing one does to maximize probably isn't exactly the best possible thing (though it could still be the best possible guess).

In many areas maximizing as a general heuristic is pretty great. I wouldn't mind maximizing income and happiness within reasonable limits. But maximization can of course be dangerous, as is true for many decision functions.

To say it's usually a bad idea would be to assume a reference class of possible things to maximize, which seems hard for me to visualize.

I think it would be a good idea to be more explicit that other considerations besides those from (i) can inform how we do (ii), since otherwise we're committed to consequentialism.

Also, I'm being a bit pedantic, but if the maximizing course(s) of action are ruled out for nonconsequentialist reasons, then since (i) only cares about maximization, we won't necessarily have information ranking the options that aren't (near) maximal (we might devote most of our research to decisions we suspect might be maximal, to the neglect of others), and (ii) won't necessarily be informed by the value of outcomes.

Related to this post, and relying on it, I wrote a (more informal) post detailing why to avoid normative claims in EA, which I claim explains at least an implication of the above post, if not something Will was suggesting.

Is there a way to read the finalised (instead of penultimate) article without purchasing the book? Perhaps, Will, you have a PDF copy you own?
