This post starts with an obvious claim: EAs, like all humans, are not perfect utilitarians. We are a diverse group with a broad range of motivations that include, but are not limited to, helping other people. This idea has been echoed in many previous posts on this forum, but sometimes I feel we do not internalize its implications as much as we should.
One important implication is that people in the charity and EA world do things that are not purely for ethical reasons. If someone makes a donation that does not seem fully utilitarian, it's not always because of a different but justifiable epistemic worldview; sometimes it's just satisfying another drive (e.g. prestige, loyalty, curiosity). People have talked about satisfying utils and warm fuzzies with different donations, and this is a great understanding to have. I would like to see the EA community take this understanding a step further: acknowledge our different motivations and figure out how to work within our very human limitations.
Here are a few examples of situations that I think are more often explained by different motivations than by different world understandings. Of course, there are certain situations where they are fully explained by one or the other, but on average in the EA community:
A donor gives a high percentage of their income to the charity they are personally closest to.
A charity ranks itself as higher impact than an external reviewer would.
An EA applies rigour inconsistently to one cause versus another.
An EA does not do something hard but ethical-seeming (e.g. vegetarianism, the 10% pledge, frugality).
An EA holds money they intend to donate later in a savings account instead of putting it into a donor-advised fund.
An EA organization dismisses criticism that others in the movement think has some merit.
The Gates Foundation makes a large donation to something that does not seem utilitarian from many different perspectives.
I think these are generally better explained by differing drives than by differing worldviews, and that has a big effect on how seriously to take them. To use a specific example from my own life: I have two friends who eat meat. Both are familiar with the ethical arguments around the issue. One I would describe as a “guilty meat eater”: he eats meat but thinks it’s ethically wrong, and feels he does not have the willpower to change his diet. My second friend also eats meat, but claims that doing so is ethically neutral. However, others and I, including omnivores who know him, suspect this view is largely shaped by how much he likes meat.
In general I think the first perspective is a lot better for the world: the “guilty meat eater” does not discourage other potential vegetarians from making the switch, whereas my second friend often argues with vegetarians, trying to convince them that eating meat is ethical (and in one case has succeeded). I am sure many people have had experiences like this on various ethical issues.
Applying this to the broader EA context: I often see EAs thinking very hard to come up with complex utilitarian explanations for why certain actors take the actions they do, instead of considering the much simpler explanation of multiple drives. Not every donation or career choice is made for utilitarian reasons. Realising this creates more of a cautionary feeling towards people choosing the “funnest/easiest/most prestigious” option on the basis of a very complex or unusual perspective. I think this can explain behaviours like not donating to the same charities year after year (novelty drive), working in causes related to one's other interests, doing considerably fewer actions/donations/diet changes due to worries about burnout, and many others.
If we treated perspectives that fit a multiple-drives theory well with a little more caution, I think the EA movement would reach higher-impact conclusions in the long run.