
This is the first of two posts attempting to make EA strategy discussions more productive. The second post examines EA movement course corrections and where you might disagree.

Summary

  • Following an influx of funding, media attention, and influence, the EA movement is speeding along an exciting, yet perilous, trajectory.
  • A lot of the EA community’s future impact rests on this uncertain growth going well (and thereby avoiding movement collapse scenarios). 
  • Yet, discussions or critiques of EA’s trajectory rarely feel action-guiding. Even when critiques propose course corrections that are tempting to agree with (e.g., EA should be bigger!), proposed course corrections to make EA more like X often don’t rigorously engage with the downsides of being more like X, or the opportunity cost of not being like Y. Proposals to make EA more like X also often leave me with only a vague understanding of what X looks like and how we get from here to X.
  • In hopes of making discussions of the EA community’s trajectory more productive (and to clarify my own thinking on the matter), I will lay out a series of posts that provide an overview of: 
    • (1) Important ways in which the EA movement could fail
    • (2) “Domains” in which EA could make course corrections (e.g., more cause-area tailored outreach, new professional networks and events, etc.)
    • (3) Key considerations that inform course corrections (i.e., places to disagree about course corrections)
    • (4) Next steps to help guide the EA movement through exciting and perilous times
  • This is the first post in this series: Ways in which EA could fail. Consider it an attempt at bounding the later discussions of strategy updates. 

Ways in which EA could fail

The EA movement could collapse. Many movements before us have, and we’re not that special. But other movements, like abolitionism, have achieved lasting social change. In hopes of being the type of movement that does so well it doesn’t need to exist anymore, this section outlines many of the ways EA could fail.

In this post, I’ll define “failure” as the EA/EA-adjacent ecosystem achieving substantially less impact (let’s say, 30% less) than it could have along some other trajectory. Note that this is a pretty broad definition. Depending on your worldview and person-affecting ethical views, failure could look more like millions or billions of people alive in the near future suffering in ways that could have been prevented – or failure could look more like an existential catastrophe that permanently stops sentient life from achieving its full potential.

Implicit in this definition of failure is a statement about the influence EA already has, or could grow to have: I think our ideas and resources are powerful enough to seriously influence how many beings suffer today and how many beings live beautiful lives in the future, such that there’s a massive difference between the most good a flourishing EA ecosystem could achieve (i.e., upper bound) and the good, or possibly even harm,[1] a collapsed EA ecosystem leaves us with (i.e., lower bound). 

In order to land closer to the heights of positive impact, let’s think concretely about the worst cases that we must collectively avoid. In what ways might the EA movement fail?

I identify four clusters of failures:

  1. Reputation failures: failures that result in EA losing a substantial amount of possible impact because the EA community develops a negative reputation.
  2. Resource failures: failures that result in EA losing a substantial amount of possible impact because the community becomes preventably constrained by financial, human, or infrastructure resources.
  3. Rigor failures: failures that result in EA losing a substantial amount of possible impact because EA enthusiasts aren’t thinking clearly about prioritizing problems and their solutions.
  4. Reservation failures: failures that result in EA missing a substantial amount of possible impact because we are too risk-averse, or just not ambitious enough.

 

But note a few caveats before I elaborate on these different failure clusters:

  • This taxonomy is imperfect. Some failures could fit into multiple categories, or don’t fit cleanly into any. I discuss other ways to group failures in the footnotes.[2] 
  • Causes of failures are likely to chain into each other across different clusters of failures. For example, diluted epistemic norms (which I categorize as rigor failure) could lead to risky unilateral moves that result in a PR scandal and media firestorm (which I categorize as a reputation failure). In turn, this could lead to a loss of people and funding support (which I categorize as a resource failure). Failures are a messy business – don’t let a nice 4R alliteration taxonomy fool you.   
  • There are subtleties within failure modes. Some of the failures I discuss below, like EA becoming politicized, could be broken down further into different consequences that plausibly have different impacts (e.g., the consequences of EA becoming disliked by the US political left vs. the right are probably very different). Similarly, internal disenchantment and negative press are already happening at some level, so these causes of ‘failure’ are clearly a matter of degree.
  • Not all failure modes below are equally bad or equally likely. Considering both the badness and likelihood of different failures is an important next step in prioritizing which failures to course-correct away from. 
  • Forgive me for vaguely using “EA.” For the sake of simplicity – and because I really am gesturing at the whole movement – I often invoke the mysterious, definitely-not-an-agent “EA” in reference to some action. But try to avoid doing this!
  • EA could still be on its optimal trajectory despite some evidence of ‘failure’. For example, I expect at least some parts of the optimal version of the EA movement just aren’t everyone's vibe and some people will feel disenchanted by them. While I classify disenchantment as a failure mode below, the fact that we might still expect to see disenchantment in the optimal version of EA means evidence of the beginning of a failure doesn’t necessarily mean EA needs to course-correct away from that failure.[3]

Reputation failures

Reputation failures result in EA losing a substantial amount of possible impact because the community develops a negative reputation, either internally, externally, or both. 

Causes of reputation failure, with examples:

  • Media firestorm: Some version of the meme that “EA is just a bunch of white billionaires claiming to do good so they can feel good about themselves” catches on and lots of media outlets jump on the bandwagon. Now typical college students think EA is lame and politicians don’t want to touch EA.
  • Internal disenchantment[4]: Engaged EAs start feeling disenchanted or misrepresented by the overall aesthetic (e.g., elitist) or actions (e.g., free-spending) of the EA movement and distance themselves from the community. Now many cool people are missing out on valuable coordination and communication, and this process may be self-reinforcing since those who remain in EA are the most hardcore.
  • Politicization (of EA itself or core ideas)[5]: Wild animal welfare or speciesism becomes seen as just another woke idea, and honest intellectual discussion around the topic becomes increasingly difficult.
  • Infighting[6]: Disagreements about funding and relative attention between global health and wellbeing vs. longtermist work boil over into a messy, public fracture of EA. Or this leads to internal disenchantment.
  • Risky unilateral project: Someone carelessly takes on a risky project, say middle school outreach, that has clear downside risks that have caused others to steer clear of it. The project blows up and EA’s reputation is worse for it.
  • Too demanding or totalizing: EA develops a public reputation as too hardcore and demanding, and people who could have a great deal of impact, like politicians, feel they can’t be ‘kinda EA.’ Related to concerns about EA as an identity.
  • Scandal: An instance (or multiple instances) of sexual harassment in EA social circles cripples EA’s reputation.

Resource failures

Resource failures result in EA losing a substantial amount of possible impact because the community becomes preventably constrained by financial, human, or infrastructure resources.

Causes of resource failure, with examples:

  • Running out of money:
    (1) Sam Bankman-Fried’s and other top EA donors’ net worth plummets because of cryptocurrency or US stock market crashes.
    EDIT 11/11/22: This aged annoyingly well. I expect we’ll also see the ripple of a resource failure into a reputation failure, although it’s as of now unclear to me how bad the reputation damage could get.
    (2) EA doesn’t keep up with the insanely profitable monetary gains from narrow AI and can’t keep pace with a vital compute run-up for AGI.
  • No new talent (or evaporating old talent): For whatever reason (probably reputation collapse or infrastructure constraints), EA can’t attract – or keep – the quantity and quality of talented individuals it urgently needs (above and beyond the typical talent constraint we see today).
  • Inadequate infrastructure: EA fails to build the scalable infrastructure needed to coordinate many people pushing in a coherent direction.

Rigor failures

Rigor failures result in EA losing a substantial amount of possible impact because EA enthusiasts aren’t thinking clearly about prioritizing problems and their solutions.

Causes of rigor failure, with examples:

  • Dilution of truth-seeking and maximizing norms:
    (1) EA doubles in size multiple years in a row, and now more than half the conversations people have at EAGx-style events are with people brand new to EA norms. Gradually, rigorous truth-seeking norms lose their footing.
    (2) EA splits along cause areas or philosophical differences, and the community starts to focus on naively optimizing within certain cause areas and misses new opportunities.
  • Funding and status-seeking muddle intellectual rigor[7]: EAs implicitly follow incentives to do what gets them in-group clout and larger paychecks (e.g., embrace longtermism) in a way that weakens the general quality of thought.
  • Excessive deference culture[8]: New EAs start assuming that the old, wise EAs before them already have the answers, and we build our epistemics and prioritization schemes on shaky foundations.
  • Bad actors: The meme that EA has a lot of money to give if you know the right things to say spreads outside the community, and bad actors degrade the trust networks that underlie existing coordination.
  • Excessive nepotism: Getting into key EA decision-making spaces becomes too much a function of who you know, in a way that selects for qualities other than those of the best decision-makers.
  • Insufficient intellectual diversity: Many people throughout history have landed at different answers for what good means, yet EA uncritically concludes it has found the right answer and pushes towards it without maintaining option value. And then we realize that we missed the mark (or, more likely, we never realize, because moral philosophy literally has the worst feedback loops).
  • Echo chambers: EAs become siloed in their own worldviews and shared models and underweight the importance of existing power structures or things like systemic cascading risks.
  • Goodharting: EA decision-makers begin to optimize for legible metrics that don’t actually track our overarching goals.
  • Poor feedback loops: Longtermist work focuses too much on preparing for future success without getting feedback from the real world about whether we’re preparing in the right ways.
  • Out-of-touch leadership[9]: The EAs closest to leadership become isolated from the rest of the community. They lose a source of outside feedback and a check on their epistemics.

Reservation failures

Reservation failures result in EA missing a substantial amount of possible impact because we are too risk-averse, or just not ambitious enough. While reservation failures are less likely to lead to movement collapse, they are arguably more likely to increase the likelihood of an existential catastrophe – or just the preventable suffering of millions, billions, or trillions.

Reservation failures are closely related to invisible impact loss.[10] 

Causes of reservation failure, with examples:

  • Not enough focus on scalability[11]: Too many EAs twiddle around with small-scale projects, lacking the entrepreneurial mindset to deploy all the resources at our disposal (or acquire much more).
  • Unwillingness to trade reputation for impact: EAs don’t take an action, such as a global talent search, because it might harm our reputation (in this case, via elitism critiques), even if these are the types of projects we need to solve the most pressing problems.
  • Overlooking the serious possibility of short transformative AI timelines: EAs don’t take the non-negligible chance of transformative AI arriving around 2030 seriously, and we’re caught without a robust emergency plan when things start getting crazy and dangerous.
  • Spending aversion: EAs are overly wary of deploying large amounts of money because it’s in tension with the movement’s frugal foundations.
  • Not joining the adult table: EAs cautiously hold off on bringing their ideas to spaces like US politics, intergovernmental organizations (e.g., UN), or other powerful institutions because we think “we’re not ready yet.”
  • Bureaucracy: As EA institutions mature, they start looking more like traditional orgs and become bogged down by red tape, an overemphasis on legibility (e.g., in grants), and/or the “bureaucrat’s curse” where everyone needs to sign off on everything.[12]

Next steps re: failure modes

  • Analyze the relative badness and likelihood of different failure modes to decide which to prioritize.
  • Analyze how different failure modes could play out. I picture many “failures” of EA still leaving behind a core group of people committed to maximal impartial altruism. But the damage to branding, coordination, and resources might well vary depending on what caused the failures, and I expect this to have a big influence on how much any group could accomplish. 
  • Identify what warning signs we might expect (or are already seeing) for each failure mode and some action threshold. I expect this action threshold to vary depending on people’s idealized vision for EA’s trajectory, which will be the focus of the next posts.
  • Identify more EA failure modes or debate the ones I listed: What did I miss? 
  • Improve the taxonomy: Would refactoring the failure modes make them more action-guiding?
  • Propose ways to make cause areas more robust to EA brand collapse. Depending on how concerned one is about the reputation of the EA brand, there may be arguments for preemptively making work in different areas (e.g., AI safety) not too tightly coupled to EA.

Coming soon 

UPDATE: The second post on EA movement course corrections and where you might disagree is live.

The next post, or posts, will identify:

  1.  “Domains” in which EA could make course corrections (e.g., more cause-area tailored outreach, new professional networks and events, etc.).
  2. Key considerations that inform course corrections (i.e., places to disagree about course corrections).
  3. More next-step proposals to help guide the EA movement through this exciting and perilous period of growth.

I hope to have these next sections out within the next two weeks.

  1. ^

    Examples of ways EA could have a net-negative impact include directing finite altruistic talent in the wrong direction, permanently tainting ideas like doing good with an emphasis on rationality, or discouraging others from tackling an important problem because we give the impression we’re on it when we’re really not.

  2. ^

    Other failure mode taxonomies include:

    (1) Sequestration, Attrition, Dilution, and Distraction
    (2) Failures from pushing too hard, not pushing hard enough, or pushing in the wrong direction. 
    (3) Failures where the recognizable EA collapses before an existential catastrophe vs. failures where EA is still intact but fails to prevent an existential catastrophe 
    (4) Failures of commission vs. omission: causing harm vs. squandering an opportunity.

  3. ^

     You can imagine that this makes it really difficult for CEA and other EA thought leaders to know when to course-correct upon criticism, especially when you factor in an uncertain switching cost one incurs trying to coordinate and implement a trajectory change.

  4. ^

     See Leaning into EA Disillusionment for more discussion 

  5. ^
  6. ^

     Linch accurately notes in comments that the consequences of infighting are more complicated than a mere reputation failure.

    See also: Will MacAskill’s writing on resentment and the book How Change Happens by Leslie Crutchfield, which notes effective management of infighting as a consistent attribute of the most effective movements in recent US history (Chapter 5).

  7. ^

     See also: Will MacAskill’s writing on the current funding situation harming quality of thought

  8. ^

     See the epistemic deference forum tag for more discussion 

  9. ^

     Idea borrowed from this post on movement collapse scenarios

  10. ^

     For what it’s worth, I think reservation failures are the most overlooked cluster of failures. I think it’s just much easier to criticize ostensibly bad things EA does than all the invisible impact EA loses by being overly cautious or doing some things that turn people off. However, reservation failures are a delicate matter, because they are often – although not always – associated with downside risks that require serious consideration.

  11. ^

     See Will MacAskill’s writing on “Risks of Omission” for more discussion of scalability: 

    It seems to me to be more likely that we’ll fail by not being ambitious enough; by failing to take advantage of the situation we’re in, and simply not being able to use the resources we have for good ends.

  12. ^

     Credit to Nick Beckstead for "Bureaucrat’s curse"

Comments

I think two other plausible ways include large-scale global catastrophic risks (which are not necessarily existential[1]) and government persecution for actual or perceived wrongdoing (which is correlated with bad press but not the same thing).

Also, I'd be interested in separating out infighting from "reputation failures." While some of the causal pathways for infighting leading to breakage include mainly PR/media stuff, some of it could look more like a (confusing to the outside) implosion, akin to what's happening within many leftist nonprofits.

  1. ^

    Which means I'd prefer it if EA survives even if I personally won't

This was very well put. These scenarios have always seemed to me to be the most likely ones to take down EA.

I agree large-scale catastrophic failures are an important consideration. Originally I thought all global catastrophic risks would be downstream of some reservation failure (i.e., EA didn't do enough), but now I think this categorization overestimates EA's capabilities at the moment (i.e., global catastrophic risks might occur despite the realistic ideal EA movement's best efforts).

In some sense I think large-scale catastrophic risks aren't super action-guiding because we fall victim to them despite our best efforts, which is why I didn't include them. But now I counter my own point: Large-scale catastrophic risks could be action-guiding in that they indicate the importance of thinking about things like EA coordination recovery post-catastrophe.

I'm now considering adding a fifth cluster of failure: uncaring universe failures. Failures in which EA becomes crippled by something like a global catastrophic risk despite our best efforts. (I could also call them ruthless universe failures if I really care about my Rs.)

  • Agreement Upvote: Yeah do that
  • Disagreement Downvote: Nah

Another example of an uncaring universe failure, given slow AGI timelines, is if either a) the West loses Great Power conflicts or b) gets outcompeted by other powers, AND c) EA does not manage to find much traction in other cultures.

(Note that you can also frame this as a diversity failure)

Phenomenal post. Nice categorization, super clear and compelling.

Thank you for leaving this comment :)

Another failure mode I couldn’t easily fit into the taxonomy that might warrant a new category:

Competency failures - EAs are just ineffective at achieving things in the world due to lack of skills (e.g., comms, politics, org running) or bad judgement. Maybe this could be classed as a resource failure (for failing to attract people with certain skills) or a rigor failure (for failing to develop them/learn from others). Will try to think of a title beginning with R…

Minor points:

  • I was also considering something like value failures (EAs have the wrong moral theories/values), but that could probably be classified as a failure of rigor.
  • +1 to separating internal strife and reputation risks.

EAs cautiously hold off on bringing their ideas to spaces like US politics, intergovernmental organizations (e.g., UN), or other powerful institutions because we think “we’re not ready yet.”

In my experience, the people near that area are much, much more likely to overestimate how ready they are than to underestimate. Powerful institutions tend to stay powerful through extraordinary violence, paranoia, and nontransparency.

Was going to ask if you had integrity failure or failure by capture, but I think what I had in mind there already overlaps to a large extent with what you have under rigor failure.

I think these are largely but not entirely covered by Linch's comment's suggestions, which provide some pretty helpful evocative examples.

I do agree that capture or rupturing can happen in a broad variety of ways though.

Great post Michel. Thank you for sharing it. I look forward to your next one. 

One other area I think might be interesting to explore in reputational risk is one that plagues the for-profit and NFP worlds – apathy – or the “so-what” factor. It’s an issue/risk I’ve seen time and again that always seems to reflect on a brand or product's perceived relevance, or lack thereof – which in turn speaks to the product or brand's perceived lack of value or utility.

In my experience it’s usually caused by organisational rigidity, its lack of genuine interest in, or understanding of, the ‘consumer’, a stagnant monoculture – like your point on “insufficient diversity” – or an inability to access the creative thinking needed for innovation or product adaptation.

Sorry, I know this is a ‘movement’, but I can’t help but think of it in product terms.

Nice post! I'm looking forward to the next posts.

One question: I'm wondering why you say EAs not taking a specific action because it might harm our reputation (e.g., coming across as too elitist) is a risk (under "Unwillingness to trade reputation for impact"), while at the same time saying that coming across as too elitist is a risk (under "Internal disenchantment"). To me, these things seem in conflict with each other.

Yup, that seems like a fair critique. The taxonomies are messy and I would expect examples to overlap without respect for categorization. (I like thinking of the failure mode categorization more as overlapping clusters than discrete bins.) 

I care more about the higher-level causes of failure within some cluster of failures, and "unwillingness to trade reputation for impact" still seems sufficiently different to me from "internal disenchantment," even if I'd expect certain actions to move the needle on both.

I think the thing to do about these failure modes is come up with policies, in writing, meant to prevent or otherwise address each one. Policies can be for individuals or groups. I wrote about this concept recently at https://forum.effectivealtruism.org/posts/7urvvbJgPyrJoGXq4/fallibilism-bias-and-the-rule-of-law
