Whilst Googling around for something entirely unrelated, I stumbled on a discussion paper published in January 2023 about Effective Altruism that argues Global Health & Wellbeing is basically a facade to draw people into the far more controversial core of longtermism. I couldn't find anything posted about it elsewhere on the forum, so I'll try to summarise it here.

The paper argues that there is a sharp distinction between what it calls public-facing EA and core EA. The former cares about global health and wellbeing (GH&W), whereas the latter cares about x-risks, animal welfare and "helping elites get advanced degrees" (which I'll just refer to as core topics). There are several further distinctions between public-facing EA and core EA, e.g. about impartiality and the importance of evidence and reason. Based on quotes from a variety of posts by a variety of influential people within EA, the author argues that for the core audience, GH&W is just a facade that gets the broader public to perceive EA as 'good', whilst the core members work on much more controversial topics, such as transhumanism, that go against many of the principles put forward by GH&W research and positions. The author seems to claim that this was done on purpose, and that GH&W merely exists as a means to "convert more recruits" to the controversial transhumanist core that EA is today. This substantial gap between GH&W and the core topics creates an identity crisis between people who genuinely believe that EA is about GH&W and people who have been won over to the core topics. The author says that these distinctions have always existed, but have been deliberately hidden behind nice-sounding GH&W topics by a few core members (such as Yudkowsky, Alexander, Todd, Ord and MacAskill), since a transhumanist agenda would be too controversial for the public, even though, on the author's telling, it has been EA's goal all along.

To quote the final paragraph of the paper:

The ‘EA’ that academics write about is a mirage, albeit one invoked as shorthand for a very real phenomenon, i.e., the elevation of RCTs and quantitative evaluation methods in the aid and development sector. [...] Rather, my point is that these articles and the arguments they make—sophisticated and valuable as they are—are not about EA: they are about the Singer-solution to global poverty, effective giving, and about the role of RCTs and quantitative evaluation methods in development practice. EA is an entirely different project, and the magnitude and implications of that project cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front-cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.

Comments

I skimmed through the article; thanks for sharing!

Some quick thoughts:

"community-members are fully aware that EA is not actually an open-ended question but a set of conclusions and specific cause areas"

  • The cited evidence here is one user claiming this is the case; I think they are wrong. For example, if there were a dental hygiene intervention that could help, let's say, a hundred million individuals and government / other philanthropic aid were not addressing this, I would expect a CE-incubated charity to jump on it immediately.
  • There are other places where the author makes what I would consider sweeping generalizations or erroneous inferences. For instance:
    • "...given the high level of control leading organizations like the Centre for Effective Altruism (CEA) exercise over how EA is presented to outsiders" — The evidence cited here is mostly the guides that CEA has produced, but I don't see how that translates to a "high level of control." EAs and EA organizations don't have to adhere to what CEA suggests.
    • "The general consensus seems to be that re-emphasizing a norm of donating to global poverty and animal welfare charities provides reputational benefits..." — upvotes on a comment ≠ general consensus.
  • Table 1, especially the Cause neutrality section, seems to draw a dividing line where one doesn't exist.
  • The author acknowledges in the Methodology section that they didn't participate in EA events or groups and mainly used internet forums to guide their qualitative study. I think this is the critical drawback of the study. Some of the most exciting things happen in EA groups and at conferences, and I think the conclusions presented would be vastly different if the qualitative study had included this data.
  • I don't know what convinced the article's author that there is some highly coordinated approach to funnel people into the "real parts of EA." If this were true (and here's my tongue-in-cheek remark), I would suggest these core people not spend >50% of the money on global health, as there could be cheaper ways of maintaining this supposed illusion.

Overall, I like the background research done by the author, but I think the author's takeaways are inaccurate and forced. At least to me, the conclusion is reminiscent of the discourse around conspiracy theories such as the deep state or the "plandemic," where there is always a secret group, a "they," advancing their agenda while puppeteering tens of thousands of others.

Much more straightforward explanations exist, which aren't entertained in this study.

EA is more centralized than most other movements, and it would be ideal to have several big donors with different priorities and worldviews. However, EA is also functionally diverse and consists of some ten thousand people (and growing), each of whom is a stakeholder in this endeavor and who will collectively define the movement's future.

I think the strategic ambiguity that the paper identifies is inherent to EA. The central concept of EA is so broad - "maximize the good using your limited resources" - that it can be combined with different assumptions to reach vastly different conclusions. For example, if you add assumptions like "influencing the long-term future is intractable and/or not valuable", you might reach the conclusion that the best thing to do with your limited resources is to mitigate global poverty through GiveWell-recommended charities or promoting economic growth. But if you tack on assumptions like "influencing the long-term future is tractable and paramount" and "the best way to improve the future is to reduce x-risk", then you get the x-risk and AI safety agenda.

This makes it challenging, and often awkward, to talk about what EA focuses on and why. But it's important to avoid describing EA in a way that implies it only supports either GH&W or the longtermist agenda. The paper cites the section of the EA Hub guide for EA groups that addresses this pitfall.

That’s a pretty impressive and thorough piece of research, regardless of whether you agree with the conclusions. I think one of its central points — that x-risk/longtermism has always been a core part of the movement — is correct. Some recent critiques have overemphasised the degree to which EA has shifted toward these areas in the last few years. It was always, if not front and centre, ‘hiding in plain sight’. And there was criticism of EA for focusing on x-risk from very early on, though it was mostly drowned out by criticism of EA’s global health work, which now seems less controversial, along with some of the farmed animal welfare work being done.

For someone who disagrees empirically with estimates of existential risk, or who holds a person-affecting view of population ethics, the idea that EA is a front for longtermism is a legitimate criticism to make: even more resources could be directed toward global health if it weren’t for these other cause areas. A bit less reasonably, people who hold non-utilitarian beliefs might even suspect that EA is just a rebranding of ‘total utilitarianism’ (with the ‘total’ part slowly becoming more prominent over time).

At the same time, EAs still do a lot in the global health space (where a majority of EA funding is still directed), so the movement is in a sense being condemned because it has actually noticed these problems (see the Copenhagen Interpretation of Ethics).

This isn’t to say that the paper itself is criticising EA (it seems to be more of a qualitative study of the movement).

I don't know, but this critique feels about five years too late. There was a time when the focus of many within EA on longtermist issues wasn't as upfront, but there's been a sustained effort to be more open about this, and anyone who's done the intro course will know that longtermism is a big focus of EA.

I'd love to know if anyone thinks that there are parts of this critique that hold up today. There very well might be, as I've only read the summary above and not the original paper.

I think it holds up. I wrote a highly upvoted post a month ago, driven by similar concerns, about organisations being transparent about their scope.

As far as I understand, the paper doesn't disagree with this, and an explanation for it is given in the conclusion:

Communication strategies such as the ‘funnel model’ have facilitated the enduring perception amongst the broader public, academics and journalists that ‘EA’ is synonymous with ‘public-facing EA’. As a result, many people are confused by EA’s seemingly sudden shift toward ‘longtermism’, particularly AI/x-risk; however, this ‘shift’ merely represents a shift in EA’s communication strategy to more openly present the movement’s core aims.

Interesting. Seems from my perspective to be a shift towards AI, followed by a delayed update on EA’s new position, followed by further shifts towards AI.

FYI, this paper seems to have a really good list of EA organisations in it. That may well come in handy!

I put the whole list in a spreadsheet for ease of use, in case anyone wants to access it in a way that is a bit more editable than a PDF: https://docs.google.com/spreadsheets/d/1KDcDVpTKylk3qP3CqLFSscWmH01AkW4LLNwjOOcWpF8/edit?usp=sharing

I also thought that it was a fairly good (and concise) history of EA. I have been reading EA material for a few years now, but I hadn't seen such a clear tracing of the movement's history before.
