EAs are not perfect utilitarians

This post starts with an obvious claim - EAs, like all humans, are not perfect utilitarians. We are a diverse group with a broad range of motivations that include, but are not limited to, helping other people. This concept has been echoed in many previous posts on this forum. However, sometimes I feel we do not internalize the implications of this as much as we should.

One important implication of realizing this is that people do things in the charity and EA world that are not purely for ethical reasons. If someone makes a donation that does not seem fully utilitarian, it’s not always just because of a different but justifiable epistemic worldview. Sometimes it's just satisfying another drive (e.g. prestige, loyalty, curiosity). People have talked about satisfying utils and warm fuzzies with different donations, and this is a great understanding to have. I would like to see the EA community take this understanding a step further, acknowledge different motivations, and figure out how to work within our very human limitations.

Here are a few examples of situations that I think are more strongly explained by differing motivations than by differing world understandings. Of course, certain situations are fully explained by one or the other; these are tendencies on average in the EA community:

  • A donor gives a high percentage of their income to a charity they personally are the closest to.

  • A charity ranks themselves as higher impact than an external reviewer would.

  • An EA applies rigour inconsistently to one cause vs. another.

  • An EA does not do something hard but ethical-seeming (e.g. vegetarianism, the 10% pledge, frugality).

  • An EA holds money they intend to donate later in a savings account instead of putting it into a donor-advised fund.

  • An EA organization dismisses criticism that others in the movement think has some merit.

  • The Gates Foundation makes a large donation to something that does not seem utilitarian from many different perspectives.


I think these are generally more explained by differing drives than by different worldviews, and that has a big effect on how seriously to take them. To use a specific example from my own life: I have two friends who eat meat. Both are familiar with the ethical arguments around the issue. One I would describe as a “guilty meat eater”; he eats meat but thinks it’s ethically wrong. He feels he does not have the willpower to change his diet. My second friend also eats meat, but he claims that it's ethically neutral. However, others and I, including omnivores who know him, suspect this view is largely shaped by how much he likes meat.


In general, I think the first perspective is a lot better for the world, as the “guilty meat eater” does not discourage other potential vegetarians from making the switch, whereas my second friend often argues with vegetarians, trying to convince them that eating meat is ethical (and in one case has succeeded). I am sure many people have had experiences like this on various ethical issues.


Applying this to the broader EA context: I often see EAs thinking very hard to come up with complex utilitarian explanations for why certain actors take the actions they do, instead of considering the much simpler explanation of multiple drives. Not every donation or career choice is made for utilitarian reasons. Realising this makes me more cautious about people choosing the “most fun/easiest/most prestigious” option in cases where justifying it requires a very complex or abnormal perspective. I think this can explain behaviour like not donating to the same charities year after year (a novelty drive), working more in causes related to one's other interests, doing considerably less in the way of action/donations/diet changes due to worries about burnout, and many other behaviours.


If we treated perspectives that fit a multiple-drives theory well with a little more caution, I think the EA movement would reach higher-impact conclusions in the long run.

Comments (5)

Comment author: vollmer 06 February 2017 12:31:44PM 2 points [-]

Given the diversity of ethical views in the EA movement, I think "EAs are not entirely altruistic" would be a more appropriate title than "EAs are not perfect utilitarians".

Comment author: saraedward 30 January 2017 09:41:10AM 2 points [-]

good article

Comment author: Evan_Gaensbauer 30 January 2017 09:23:42PM 1 point [-]

If effective altruists aren't perfect utilitarians because they're human, and humans can't be perfect utilitarians because they're human, maybe the problem is effective altruists trying to be perfect utilitarians despite their inability to do so, and that's why they make mistakes. What do you think of that?

Comment author: Vidur_Kapur  (EA Profile) 31 January 2017 04:48:12PM *  3 points [-]

I don't think this gets us very far. You're making a utilitarian argument (or certainly an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is what a perfect utilitarian would do given the information that they have about their own limits - they're human, as you put it. For someone such as myself who believes that utilitarianism is likely to be objectively true, therefore, I already know not to be a perfectionist.

Ultimately, Singer put it best: do the most good that you can do.

Comment author: DavidNash 30 January 2017 02:34:17PM *  1 point [-]

I think the easiest way I have of explaining why people don't do the things that others expect of them (where others may assume laziness or immorality) is to first ask those others why they themselves don't donate 1% more, give one more hour, or spend more time researching.

This probably goes together with people having different weighting for different causes.