If you set out to maximise the welfare of people alive today, treating them all equally, you’ll end up doing some pretty weird things. Who’d have thought “doing the most good” boiled down to handing out cash to poor Kenyan farmers?
When people see effective altruists focused on cash transfers, distributing bed-nets and cheap medicine - and claiming they’re doing the most good - there’s a common reaction:
This looks naive and narrow: sure, these interventions help the immediate beneficiaries, but they hardly look like they’re solving the world’s greatest problems.
It looks like they’ve made the mistakes of ignoring small probabilities of big upsides, focusing only on concrete outcomes, and ignoring the historical record (in which science, technology and better government are some of the main drivers of progress). It also looks like they’ve completely discounted common-sense do-gooding, which is not mainly focused on global health.
Now suppose you care about both the welfare of people today *and* helping people in the future. If you care about the future, you’ll want to make investments in technology and economic growth that will pay off later. You’ll also want to make sure society is in a position to navigate unpredictable future challenges. This will mean better global institutions, smarter leaders, more social science, and so on. And it’s hard to know which of these are most pressing.1
Overall, this menu of global priorities looks much closer to common-sense efforts to make a difference. In this way, long-run focused effective altruism ends up looking more common-sense than efforts just focused on helping present generations.2
Long-run focused effective altruism is often seen as even less common-sense than the short-run focused version. But it doesn’t have to be that way. Long-run focused effective altruism only becomes unintuitive when taken to an extreme and combined with further non-common-sense beliefs, such as the belief that reducing existential risk is the best way to aid the future, and within that, the belief that artificial intelligence is the most pressing existential risk.3
Because long-run focused effective altruism is associated with these further weird positions, it’s often downplayed when speaking to new people, in favor of short-run effective altruism (malaria nets and so on). I propose that it will be better, especially for people who are already engaged with making a difference, to introduce them first to *moderate long-run focused effective altruism* rather than the short-run focused version. It’s more intuitive and reasonable sounding.
The reason this doesn’t happen already, I think, is that people aren’t sure how to explain moderate long-run focused effective altruism - it’s much easier to say “malaria nets” and direct someone to (traditional) GiveWell. But in the last year it has become much easier to explain. When it comes to picking causes, emphasise that effective altruists take a strategic approach. Yes, they consider their personal passions, but they also try to work on causes that are important, tractable and neglected. Explain that the most important causes are the ones that do the most to build a flourishing society now and in the long run. Then give several examples: yes, there’s global health (especially good on tractability), but there’s also global catastrophic risks (good on importance and neglectedness), scientific research, penal reform, and much else. Link them to the Open Philanthropy Project, 80,000 Hours and the Copenhagen Consensus.
In conclusion, short-run effective altruism is often favored as the more intuitive introduction for newcomers, because the long-run focused version is associated with further weird positions. However, a more moderate and uncertain long-run focused effective altruism is actually the most reasonable-sounding position.
* * *
1 Of course, interventions which maximise short-run welfare might *also* happen to be the best way to help the long-run future, but that’s a topic for a different day.
2 It also looks more common-sense because it involves less certainty. It’s very hard to know what the long-run effects of our actions are, so long-run focused effective altruism tends to work with a broader range of causes than the short-run focused version.
3 In fact, even if you believe both of these things, once the low-hanging fruit of friendly AI research etc. is used up, you’re going to then focus on common-sense causes like international collaboration.
What's the difference between moderate and long term EA? I'm guessing x-risk would be long term and perhaps some kind of research, medium term?
As your second footnote suggested, I think that the short- vs. long-term debate really boils down to a low- vs. high-risk one. Just as people have different tastes for risk in their financial investments, so too in their philanthropic investments. I don't think anyone considers long-term goals such as developing technology or reducing x-risk unimportant; they're simply unpredictable (high risk), whereas GiveWell-type charities don't fix systemic problems but are more “safe.”
Related to #2 is diversification. People diversify their financial investments because low risk doesn't yield much while high yield is also high risk, so they hold various levels of risk in their portfolio (and even within the same risk level, it's lower risk to have multiple investments, of course). This makes sense even if someone wanted to hold only high-risk (long-term) donations in their philanthropic portfolio: they may feel that putting all their money on, for instance, fighting corruption, is a long shot, so may take comfort in giving half their donations to promoting morals. In this case, the risk level is still high, but it gives the donor the psychological comfort of having TWO chances of improving the world, rather than just one!
You associate long-term EA with weirdness, and I agree that the public would see AI, and some other forms of x-risk, that way, but there are so many other long-term, high-impact pursuits that are not weird: research on behavioural economics; designing technologies that help the poor, along with their distribution and marketing systems; decreasing corruption, including political reform; restructuring our economic and monetary systems to make them more fair and egalitarian; promoting more moral or sustainable lifestyles like veganism; lobbying; green tech; medical tech. I don't think people would find any of these weird.
As someone on the forum stated earlier, people tend to be more motivated to make the world better than to deal with sad things like extreme poverty. I think that's probably true. When promoting EA, would it not be ideal to have a little “something for everybody”? I.e. for those with bleeding hearts, immediate measures for helping the global poor; for tech-oriented people, developing high-impact technologies; for “save the world from injustice” types, combating corruption. It's unacceptable to me that someone would reject philanthropy or direct EA altogether because the person teaching her about it was dismissive of her place on the risk-type-duration spectrum. We should be empowering everyone, not trying to get them to conform to a specific form of EA!
See my reply to pappubahry above. The distinction is between (i) short-run EA, (ii) moderate long-run EA and (iii) extreme long-run EA - not short vs. medium vs. long. I agree this is confusing, sorry!
Also, I don't think the distinction boils down to high-risk vs. low-risk. It's more about what kinds of evidence you use, and maybe some questions about values too.