In many ways, most EAs are extraordinarily smart, but in one way they are naive. The most well-known EAs have stated that the goal of EA is to minimize suffering. I can't explain this well at all, but I'm certain that is neither the cause nor the effect of altruism as I understand it.
Consider The Giver. Consider a world where everyone is high on opiates all the time: there is no suffering, but no beauty either. Would you disturb it?
Considering this, my immediate reaction is to restate the goal of EA as maximizing the difference between happiness and suffering. This still seems naive. Happiness and suffering are so interwoven that I'm not sure it can be done. The disappointment of being rejected by a girl may help you come to terms with reality. The empty feeling in the pit of your stomach when your fantasy world crumbles motivates you to find something more fulfilling.
It's difficult to say. Maybe one of you can restate it more plainly. This isn't an argument against EA. It's an argument that while we probably do agree on which actions are altruistic, the criteria used to explain them are overly simplified.
I don't know if there is much to be gained by having criteria to explain altruism, but I am tired of "reducing suffering." I prefer to think of it as doing what I can to positively impact the world, and using EA to maximize that positive impact where possible. Because altruism isn't always as simple as deciding where to send your money.
As a ‘well-known’ EA, I would say that you can reasonably take EA to have one of two goals: a) to ‘do the most good’ (leaving ‘goodness’ undefined); b) to promote the wellbeing of all (accepting that EA is about altruism in that it’s always ultimately about the lives of sentient creatures, but not committing to a specific view of what wellbeing consists in). I prefer the latter definition for various reasons; I think it’s a more honest representation of how EAs behave and what they believe. As the term is currently used, though, either is reasonable. Although reducing suffering is an important component of EA under either framing, under neither is the goal simply to minimize suffering, and I don’t think that Peter Singer, Toby Ord, or Holden Karnofsky (etc.) would object to me saying that they don’t regard it as the only goal either.
Hi Will. I would be very interested to hear the various reasons you have for preferring the latter definition. I prefer the first of the two definitions that you give, primarily because it makes fewer assumptions about what it means to do good, and I have a strong intuition that EA benefits from being open to all forms of doing good.