If you set out to maximise the welfare of people alive today, treating them all equally, you’ll end up doing some pretty weird things. Who’d have thought “doing the most good” boiled down to handing out cash to poor Kenyan farmers?
When people see effective altruists focused on cash transfers, distributing bed-nets and cheap medicine - and claiming they’re doing the most good - there’s a common reaction:
This looks naive and narrow: sure, these interventions help the immediate beneficiaries, but they hardly look like they’re solving the world’s greatest problems.
It looks like they’ve made the mistakes of ignoring small probabilities of big upsides, focusing only on concrete outcomes, and ignoring the historical record (in which science, technology and better government are some of the main drivers of progress). It also looks like they’ve completely discounted common-sense do-gooding, which is not mainly focused on global health.
Now suppose you care about both the welfare of people today *and* helping people in the future. If you care about the future, you’ll want to make investments in technology and economic growth that will pay off later. You’ll also want to make sure society is in a position to navigate unpredictable future challenges. This will mean better global institutions, smarter leaders, more social science, and so on. And it’s hard to know which of these are most pressing.1
Overall, this menu of global priorities looks much closer to common-sense efforts to make a difference. In this way, long-run focused effective altruism ends up looking more common-sense than efforts just focused on helping present generations.2
Long-run focused effective altruism is often seen as even less common-sense than the short-run focused version. But it doesn’t have to be that way. Long-run focused effective altruism only becomes unintuitive when taken to an extreme and combined with further non-common-sense beliefs, such as the belief that reducing existential risk is the best way to aid the future, and within that, the belief that artificial intelligence is the most pressing existential risk.3
Because long-run focused effective altruism is associated with these further weird positions, it’s often downplayed when speaking to new people, in favor of short-run effective altruism (malaria nets and so on). I propose that it will be better, especially for people who are already engaged with making a difference, to introduce them first to *moderate long-run focused effective altruism* rather than the short-run focused version. It’s more intuitive and reasonable sounding.
The reason this doesn’t happen already, I think, is that people aren’t sure how to explain moderate long-run focused effective altruism - it’s much easier to say “malaria nets” and direct someone to (traditional) GiveWell. But in the last year it has become much easier to explain. When it comes to picking causes, emphasise that effective altruists take a strategic approach. Yes, they consider their personal passions, but they also try to work on causes that are important, tractable and neglected. Explain that the most important causes are the ones that do the most to build a flourishing society now and in the long run. Then give several examples: yes, there’s global health (especially good on tractability), but there’s also global catastrophic risks (good on importance and neglectedness); scientific research, penal reform, and much else. Link them to the Open Philanthropy Project, 80,000 Hours and the Copenhagen Consensus.
In conclusion, short-run effective altruism is often favored as the more intuitive introduction for newcomers, because long-run focused effective altruism is associated with further weird positions. However, a more moderate and uncertain long-run focused effective altruism is actually the most reasonable-sounding position.
* * *
1 Of course, interventions which maximise short-run welfare might *also* happen to be the best way to help the long-run future, but that’s a topic for a different day.
2 It also looks more common-sense because it involves less certainty. It’s very hard to know what the long-run effects of our actions are, so long-run focused effective altruism tends to work with a broader range of causes than the short-run focused version.
3 In fact, even if you believe both of these things, once the low-hanging fruit in friendly AI research and similar areas is picked, you’re going to then focus on common-sense causes like international collaboration.
Moderate long-run EA doesn't look close to having fully formed ideas to me, and therefore it seems to me a strange way to introduce people to EA more generally.
I don't understand this. Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career?
Who knows how to make economies grow?
What is a "better" global institution, and is there any EA writing on plans to make any such institutions better? (I don't mean this to come across as entirely critical -- I can imagine someone being a bureaucrat or diplomat at the next WTO round or something. I just haven't seen any concrete ideas floated in this direction. Is there a corner of EA websites that I'm completely oblivious to? A Facebook thread that I missed (quite plausible)?)
I have even less idea of how you plan to make better politicians win elections.
More social science I can at least understand: more policy-relevant knowledge --> hopefully better policy-making.
Underlying some of what you write is, I think, the idea that political lobbying or activism (?) could be highly effective. Or maybe going into the public service to craft policy. And that might well be right, and it would perhaps put this wing of EA, should it develop, comfortably within the sort of common-sense ideas that you say it would. (I say "perhaps" because the most prominent policy idea I see in EA discussions -- I might be biased because I agree with and read a lot of it -- is open borders, which is decidedly not mainstream.)
But overall I just don't see where this hypothetical introduction to EA is going to go, at least until the Open Philanthropy Project has a few years under its belt.
To clarify, I was defining the different forms of EA more along the lines of 'how they evaluate impact', rather than which specific projects they think are best.
Short-run focused EA focuses on evaluating short-run effects. Long-run focused EA also tries to take account of long-run effects.
Extreme long-run EA combines a focus on long-run effects with other unintuitive positions such as a focus on specific xrisks. Moderate long-run EA doesn't.
The point of moderate long-run EA is that it's much less clear which interventions are best by these standards.