If you set out to maximise the welfare of people alive today, treating them all equally, you’ll end up doing some pretty weird things. Who’d have thought “doing the most good” boiled down to handing out cash to poor Kenyan farmers?
When people see effective altruists focused on cash transfers, distributing bed-nets and cheap medicine - and claiming they’re doing the most good - there’s a common reaction:
This looks naive and narrow: sure, these interventions help their immediate beneficiaries, but they hardly look like they’re solving the world’s greatest problems.
It looks like effective altruists have made the mistakes of ignoring small probabilities of big upsides, focusing only on concrete outcomes, and disregarding the historical record (in which science, technology and better government are some of the main drivers of progress). It also looks like they’ve completely discounted common-sense do-gooding, which is not mainly focused on global health.
Now suppose you care about both the welfare of people today *and* helping people in the future. If you care about the future, you’ll want to make investments in technology and economic growth that will pay off later. You’ll also want to make sure society is in a position to navigate unpredictable future challenges. This will mean better global institutions, smarter leaders, more social science, and so on. And it’s hard to know which of these are most pressing.1
Overall, this menu of global priorities looks much closer to common-sense efforts to make a difference. In this way, long-run focused effective altruism ends up looking more common-sense than efforts just focused on helping present generations.2
Long-run focused effective altruism is often seen as even less common-sense than the short-run focused version. But it doesn’t have to be that way. Long-run focused effective altruism only becomes unintuitive when taken to an extreme and combined with further non-common-sense beliefs, such as the belief that reducing existential risk is the best way to aid the future, and within that, the belief that artificial intelligence is the most pressing existential risk.3
Because long-run focused effective altruism is associated with these further weird positions, advocates often downplay it when speaking to new people, in favor of short-run effective altruism (malaria nets and so on). I propose that it will be better, especially for people who are already engaged with making a difference, to introduce them first to *moderate long-run focused effective altruism* rather than the short-run focused version. It sounds more intuitive and reasonable.
The reason this doesn’t happen already, I think, is that people aren’t sure how to explain moderate long-run focused effective altruism - it’s much easier to say “malaria nets” and direct someone to (traditional) GiveWell. But in the last year it has become much easier to explain. When it comes to picking causes, emphasise that effective altruists take a strategic approach. Yes, they consider their personal passions, but they also try to work on causes that are important, tractable and neglected. Explain that the most important causes are the ones that do the most to build a flourishing society now and in the long-run. Then give several examples: yes, there’s global health (especially good on tractability), but there’s also global catastrophic risks (good on importance and neglectedness), scientific research, penal reform, and much else. Link them to the Open Philanthropy Project, 80,000 Hours and the Copenhagen Consensus.
In conclusion, short-run effective altruism is often favored as more intuitive and better for introducing new people, because the long-run focused version is associated with further weird positions. But a more moderate and uncertain long-run focused effective altruism is actually the most reasonable-sounding position.
* * *
1 Of course, interventions which maximise short-run welfare might *also* happen to be the best way to help the long-run future, but that’s a topic for a different day.
2 It also looks more common-sense because it involves less certainty. It’s very hard to know what the long-run effects of our actions are, so long-run focused effective altruism tends to work with a broader range of causes than the short-run focused version.
3 In fact, even if you believe both of these things, once the low-hanging fruit in friendly AI research and the like is picked, you’re going to focus on common-sense causes like international collaboration.
"Long-run focused effective altruism is often seen as even less common-sense than the short-run focused version."
I'd say that it is less 'common sense' as such, in terms of principles, although I agree that taking into account factors like economic/technological growth and the sustainability of civilization might lead to recommending some interventions that are more broadly supported. That would be something of a coincidence, and there may also be very outlandish recommendations.
On the intuitiveness of principles, there are many factors that separately contribute to people's intuitions.
A big part of 'common sense' for many people is a focus on their own communities, and a corresponding neglect of physically and socially distant foreigners. Things like nuclear disarmament or scientific research will be less counterintuitive to many rich-country citizens because of the visible impact on the welfare of their own communities.
A different but related angle is mutualism: cooperating on Prisoner's Dilemma, contributing to public goods in a way that benefits everyone, versus one-sided transfers. The costs of cutting carbon emissions, or boosting scientific research, could be allocated around the world such that everyone wins. For transfers and health aid to the poorest the mechanisms for mutual benefit are weaker and harder to implement (although possible). Immigration with taxes and transfers to approach Pareto-improvement may fit in better with the mutualistic framework.
Intuitions about sustainability over time and high average standards of living are more common than a linear concern with population size (whether intrinsic, for a given standard of living, or instrumental).
In some cases these will tend to coincide with a long-run welfare view, and in other cases with a short-run welfare view.