In many ways, most EAs are extraordinarily smart, but in one way EAs are naive. The most well-known EAs have stated that the goal of EA is to minimize suffering. I can't articulate this well, but I'm certain that minimizing suffering is neither the cause nor the effect of altruism as I understand it.
Consider The Giver. Consider a world where everyone was high on opiates all the time. There is no suffering or beauty. Would you disturb it?
Considering this, my immediate reaction is to restate the goal of EA as maximizing the difference between happiness and suffering. This still seems naive. Happiness and suffering are so interwoven that I'm not sure this can be done. The disappointment of being rejected by a girl may help you come to terms with reality. The empty feeling in the pit of your stomach when your fantasy world crumbles motivates you to find something more fulfilling.
It's difficult to say. Maybe one of you can restate it more plainly. This isn't an argument against EA. It's an argument that, while we probably do agree on which actions are altruistic, the criteria used to explain it are oversimplified.
I don't know if there is much to be gained by having criteria to explain altruism, but I am tired of "reducing suffering." I like to think about it more as doing what I can to positively impact the world--and using EA to maximize that positivity where possible. Because altruism isn't always as simple as where to send your money.
That was in reference to both humanity and the EA movement, but it's trivially true for the EA movement itself.
Assuming they have any kind of directed impact whatsoever, most of them want to reduce extinction risk to get humanity to the stars.
We all know what that means for the total amount of future suffering. And yes, there will be some additional "flourishing" or pleasure/happiness/wellbeing, but it will not be optimized. It will not outweigh all the torture-level suffering.
People like Toby Ord may use happiness as a rationalization to cause more suffering, but most of them never actually endorse optimizing for it. People in EA generally gain status by decrying the technically optimal solutions to this particular optimization problem. There are exceptions, of course, like Michael Dickens above. But I'm not even convinced they're doing their own values a favor by endorsing the EA movement at this point.
What do you think of the effort to end factory farming? Or Tomasik et al's work on wild animal suffering? Do you think these increase rather than decrease suffering?