This is a considered personal view; it discusses my understanding of a consensus among many core members of the movement.

Longtermism is a big new thing in effective altruism - so big, it seems to crowd out discussion of other topics. This can lead to understandable feelings of neglect and favoritism on the part of dedicated proponents of other priorities. It doesn’t mean that other areas of effective altruism are less important[1] or are being discarded. But as I recall from my childhood growing up with a younger brother, older siblings don’t always appreciate the situation, even when they understand that different people, and different cause areas, have different needs at different times.

As almost everyone I’m aware of in effective altruism has made clear, there is tremendous value in improving global health and welfare, in reducing animal suffering, and in reducing extreme risks - all areas which effective altruism has long prioritized. And despite reasonable claims that each area deserves to be prioritized over the others, there are fundamental and unresolvable debates about the relative importance of different areas. Given that, moral and epistemic uncertainties should lead us to be modest about any conclusions. And according to at least some views about moral uncertainty, that means we should balance priorities across cause areas which cannot be directly compared.

Within each area of concern, of course, there are tremendous inequities and misallocations which Effective Altruism has only begun to address. Americans spend $60 billion per year on their pets, while at most a few hundred million dollars goes to EA-oriented animal welfare (in contrast to the $500 billion spent on animal agriculture!). Similarly, $10 trillion is spent on healthcare globally, but that overwhelmingly goes to rich countries spending on their own citizens, rather than to buying QALYs in the poorest countries, where quality of life can be improved most cheaply and the most vulnerable helped. There is a tremendous amount that can be done! But just as animal welfare is relatively neglected compared to spending on healthcare, existential risk reduction is neglected relative to both, and longtermist areas other than near-term existential risk are barely getting any funding at all, especially outside of EA circles.

Neglectedness is one critical part of explaining what is happening - because some areas are relatively neglected, there are far larger opportunities for impact there. Existential risk reduction was, historically, the focus of far less spending than the other cause areas. Within EA this was presumably due to less perceived and/or actual tractability; globally, it was more likely due to the same poor prioritization we see elsewhere. But as people have made stronger arguments for both importance and tractability, and have found clear ways to actually address the problems, these risks have become the focus of far more effort within EA. Because of these changes, organizations in this space are also less mature, and the area needs more attention to determine what is most effective. And that goes even more for longtermism.

But these have not and will not displace other causes. Thankfully, we can keep putting money into GiveDirectly, Deworm the World, and various anti-malaria campaigns - and we have! And so have many, many non-EA donors: in 2021, GiveDirectly gave $10m/month, and USAID, the US government foreign aid program, has embraced the strategy, as did the UK’s DFID. Not only have we started chipping away at the highest-leverage places to give, but their neglectedness has been decreasing. Similarly, Effective Altruists have donated tens of millions to reduce the burden of malaria, but the world has spent $4.3 billion. Recently, the Gates Foundation committed $2b to ending malaria. This is what winning looks like - convincing even our critics that their money should be going to effective causes, reducing neglectedness and taking up all of the most tractable opportunities.

Preventing future pandemics is another key EA cause area - and one that the rest of the world is getting on board with. The tragic emergence and mishandling of COVID-19 made it clear that the things we’ve been advocating for a decade are, in fact, critical. And pushing for the most effective approaches is still an ongoing need. But EA money alone won’t get us there, so in addition to direct investments in technologies, much of the focus of EA organizations in this area is on engaging government stakeholders and raising the profile of pandemic prevention rather than response. That requires a different sort of engagement than global poverty does. Issues like this were at least part of why many Effective Altruist organizations started pushing for career changes and direct work in neglected areas, rather than donations, which were more critical for enhancing global public health and poverty reduction when EA was more money-constrained.

But this brings us to the proverbial elephant that most EAs think is very rapidly approaching the room: future artificial intelligence. There is widespread agreement, even from critics of EA, that machine learning and AI are worrying, and should be slowed or not used. The ways to address both near-term misalignment and the different “long term” risks (i.e., well within my natural lifetime) are poorly understood and potentially critical. Maybe the risks are overstated, and both our critics and we are overreacting. But trying to figure out how to effectively address these risks is absolutely a priority of Effective Altruism, and has been almost since its inception.

There is also the actual cause area of longtermism, which is far less well developed, and where we are fundamentally unsure about a wide variety of questions. Unlike existential risk reduction, which is a longstanding EA cause area and does not depend on even the weakest version of longtermism, the case for affecting the long-term future is still being debated - and because of that, it is getting lots of attention. Many people are unclear about the distinction between existential risks and longtermism, a problem which should be addressed.

But the “baby” cause area is growing up in a very, very different environment than more established causes, and because EA is much better funded, more money is going to all of the causes. GiveWell, which remains focused on interventions in EA cause areas with robust evidence of effectiveness, rather than risk reduction interventions, distributed $500 million last year, and plans to distribute $1 billion yearly by 2025. On the other hand, the FTX Foundation, which is more focused on risk reduction and future growth, is planning on giving at least $100m this year, and potentially far more. Most of this is focused on existential risks, but a portion goes to epistemic institutions for the long term, moral priorities, and other key questions for longtermism.

And after all of that, there is still room for Cause X, and plenty of funding to start organizations aiming to have an outsized impact on everything from mental health, to climate, to institutional decision making. Effective Altruism is expanding in many directions. Some of these have tremendous growth potential, and could absorb tens or hundreds of millions of dollars. And if you have other ideas, the gate is wide open - promising new ideas have been funded recently, even if the bar for entry is high.

In summary, my view is that longtermism isn’t displacing other cause areas, and shouldn't be - but longtermism, and the earlier priority of global catastrophic and existential risk reduction, are now receiving much more attention and funding. It makes sense that this is jarring, and it does reflect a change in strategy for parts of EA. However, given cause neutrality, we should expect changes in focus if and as the tractability and neglectedness of different areas change - so I think that concern about EA abandoning its previous goals is misplaced.

Effective Altruism isn’t a family - but like a family, members won’t always agree, and unfortunately, precisely because everyone cares so much, fights can get vicious. But unlike families, effective altruism is a movement that comes together voluntarily to accomplish goals, bound by shared purposes. There is a lot of good to do in the world, and we should continue discussing how to allocate humanity’s thankfully abundant resources to do it.

[1] As noted later, I don’t think importance is obviously lower, as that depends on debated ethical claims - but as explained below, I do think that they are currently less neglected and less tractable per unit of attention, if not per dollar invested.

I would like to thank Matthew van der Merwe, and a couple other commenters for looking at an earlier draft of this post.


Comments

I really enjoyed this post. In addition to being well-written and a nice read, it's packed full of great links supporting and contextualizing your thoughts. Given how much has been written about related topics recently, I was happy you chose to make those connections explicitly. I feel like it helped me understand where you were positioning your own arguments in the wider conversation.

"each area deserves to be prioritized over the others"

How do you interpret this? I interpret it as "cause area X is prioritized if marginal resources (e.g. the next funding dollar) are allocated to X".

"...despite reasonable claims that each area deserves to be prioritized over the others..."

I am saying that there are sets of reasonable moral views which would suggest each EA focus area is a near-complete priority. As (non-exhaustive) examples for each: a person-affecting view + animals having moral weight -> prioritize animal suffering reduction, while a person-affecting view + species-based moral views -> global health and welfare. (And even given moral uncertainty, the weight you assign to different moral views in, say, a moral parliament can lead to each. If you weight by current human moral views, you would likely arrive at global health and welfare, whereas if you weight equally by the expressed preferences of living beings, shrimp welfare is probably dominant.)
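To make the parliament point concrete, here is a minimal sketch in Python. Everything in it - the views, seat weights, cause areas, and scores - is invented purely for illustration, and the normalized weighted average below is just the crudest possible aggregation rule (real moral parliament proposals involve bargaining among delegates, not simple averaging):

```python
# Toy "moral parliament": each moral view gets seats (a weight), and
# each view scores the cause areas. The parliament's budget split is
# the seat-weighted average score, normalized to sum to 1.
# All views, weights, and scores are invented purely for illustration.

views = {
    # view name: (seats, {cause area: score under that view})
    "person-affecting + animals have moral weight": (0.3, {"animal welfare": 0.9, "global health": 0.5, "x-risk": 0.1}),
    "person-affecting + species-based":             (0.4, {"animal welfare": 0.1, "global health": 0.9, "x-risk": 0.2}),
    "total view, long-term focused":                (0.3, {"animal welfare": 0.2, "global health": 0.3, "x-risk": 0.9}),
}

def parliament_split(views):
    """Weight each view's scores by its seats, then normalize into budget shares."""
    totals = {}
    for seats, scores in views.values():
        for cause, score in scores.items():
            totals[cause] = totals.get(cause, 0.0) + seats * score
    grand_total = sum(totals.values())
    return {cause: weight / grand_total for cause, weight in totals.items()}

for cause, share in parliament_split(views).items():
    print(f"{cause}: {share:.0%}")
```

Change the seat weights - say, from current human moral views toward the expressed preferences of all living beings - and the dominant cause area changes, which is exactly the point above.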

would suggest each EA focus area is a near-complete priority.

Sorry, I'm asking how you're defining "prioritize".

I agree with your definition - highest priority according to each group would be about the marginal dollar allocation.

As an aside, I would note that portfolio allocation is a slightly different problem from marginal dollar allocation - if we have more money than can be invested in the top priority, we should invest in more than one thing. And if we are at all (morally) risk averse, or accept any of several approaches to dealing with moral uncertainty, there are benefits to diversification as well - so even for the marginal dollar, more than one thing should be prioritized, i.e. the next dollar should be split.

Continuing the aside: yes, you might split the marginal dollar because of uncertainty, like playing a mixed strategy. Alternatively, you might have strongly diminishing returns, so that you go all-in on one intervention until its marginal EV drops below that of the next best intervention, at which point you switch to funding that one; this also results in diversification. (A toy sketch of that greedy logic follows.)
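Here is a minimal sketch of that greedy allocation rule, with made-up interventions and made-up diminishing-returns curves (none of the names or numbers come from the discussion above):

```python
# Toy model of greedy allocation under diminishing returns: repeatedly
# give the next funding chunk to whichever intervention currently has
# the highest marginal expected value. The marginal value curves are
# invented for illustration (value per dollar falls as funding grows).

def marginal_ev(base, decay):
    """Return a function giving the marginal EV after `funded` dollars."""
    return lambda funded: base / (1.0 + funded / decay)

interventions = {
    "intervention A": marginal_ev(base=10.0, decay=50.0),
    "intervention B": marginal_ev(base=6.0, decay=200.0),
}

def allocate(budget, chunk=10.0):
    funded = {name: 0.0 for name in interventions}
    spent = 0.0
    while spent < budget:
        # Fund whichever intervention has the highest current marginal EV.
        best = max(interventions, key=lambda n: interventions[n](funded[n]))
        funded[best] += chunk
        spent += chunk
    return funded

print(allocate(budget=300.0))
# The first few chunks all go to A; once A's marginal EV falls below
# B's, funding starts alternating between them - diminishing returns
# alone produce a diversified portfolio.
```

With these curves, the switch happens once intervention A has absorbed enough funding that its marginal EV drops below B's starting value - after which the greedy rule naturally splits further dollars between the two, as described.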
