
A new framing of EA’s collective impact

A current theme in the EA community is a focus on maximizing the collective impact of the movement, rather than just the impact of each individual.

But what does it mean to maximize the impact of the movement? I suggest that the mission to ‘do the most good’, in the context of our collective potential, closely approximates ‘optimizing the Earth’.

 

Why ‘do the most good’ is roughly equivalent to ‘optimize the Earth’:

- Optimize – make as good as possible, based on evidence and reason, acknowledging the potential for ongoing improvement

- Earth – home to every human with capacity for impact, and every known sentient being. The seat of our collective impact on the world beyond Earth.

 

Why would aiming to optimize the Earth enable us to do more good?

1) The Earth optimization framing of the EA mission gives focus to what it means to maximize collective impact. It points to a unified outcome of our collective work, in a way that 'do the most good' does not. With an Earth optimization mindset, we can rationally consider:

  i. What are the meta-outcomes of an optimized Earth?
  ii. What lead metrics should we measure to track progress towards an optimized Earth?
  iii. What are the most impactful causes for optimizing the complex system of Earth?
  iv. How should we prioritize the best combination of causes, given that progress on some causes affects the expected impact of others?

2) Optimizing the Earth hints at what may be possible decades from now, beyond the current reality of the nascent EA movement. It may help the EA movement to set very long-term goals, and to back-cast the roadmap to achieve them.

3) Optimizing the Earth may include increasing or maximizing the marginal impact of every individual, not just those who currently self-identify as EAs. It invites us to consider a more expansive vision of our potential collective impact.

4) Aiming to optimize the Earth will help us to identify new priority cause areas that have strategic relevance in bringing about the best world possible, but which may have limited immediate/direct impact in and of themselves.

5) An Earth optimization methodology will give context to current priority cause areas, and help us to evaluate their strategic relevance and relative urgency in the mission of maximizing our collective impact. This may result in non-trivial adjustments to which causes are deemed highest priority.

6) The Earth optimization framing of the EA mission requires us to consider not only our marginal impact but also our collective, total, global impact – something that has not been much of a focus in the EA community until now. This challenges us to develop our cause prioritization methodology to account for how different causes, when pursued in concert, can be more than the sum of their parts.

7) Earth optimization can be approached with methodological rigor, using complexity theory and systems science. By modeling the Earth as a complex system, we may be able to develop a 'general theory of cause prioritization', not only to prioritize top cause areas, but also to evaluate and optimize the impact of any actor in the system of Earth.
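As a rough illustration of points 6 and 7, here is a minimal toy sketch in Python. The cause names, numbers, and the pairwise-interaction model are all hypothetical simplifications of my own, not an established methodology; the point is only to show how explicitly modeling interactions between causes could change which portfolio of causes comes out on top:

```python
# Toy model: causes have standalone impacts plus pairwise interaction
# effects, so a portfolio of causes can be worth more (or less) than
# the sum of its parts. All names and numbers are hypothetical.
from itertools import combinations

impact = {"global_health": 10.0, "ai_safety": 8.0, "institutions": 4.0}

# Positive interaction: the pair reinforces each other when pursued
# together; negative: they compete for the same resources.
interaction = {
    ("ai_safety", "institutions"): 6.0,
    ("global_health", "institutions"): 3.0,
    ("ai_safety", "global_health"): -1.0,
}

def portfolio_value(causes):
    """Standalone impacts plus pairwise interactions (a crude
    second-order approximation of a complex system)."""
    total = sum(impact[c] for c in causes)
    for pair in combinations(sorted(causes), 2):
        total += interaction.get(pair, 0.0)
    return total

# With a budget of two causes, compare every portfolio exhaustively.
for portfolio in combinations(sorted(impact), 2):
    print(portfolio, portfolio_value(portfolio))
# ai_safety + institutions scores 18, beating global_health + ai_safety
# (17), even though global_health has the highest standalone impact.
```

Once interactions are modeled explicitly, the ranking of portfolios can diverge from the ranking of individual causes, which is exactly the gap a 'general theory of cause prioritization' would need to address.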

 

This is post 1 of 3.

Post 2: "Effective Altruism Paradigm vs Systems Change Paradigm"
Post 3: "5 Types of Systems Change Causes with the Potential for Exceptionally High Impact"

Comments

It's obviously the case that "do the most good" is equivalent to "optimize the Earth". HPMOR readers will remember: "World domination is such an ugly phrase. I prefer to call it world optimisation."

But given that they're equivalent, I don't see that changing the label offers any benefits. For example, the theoretical framework linked to "do the most good" already gives us a way to think about how to choose causes while taking into account inter-cause spillovers (corresponding to 1(iv)).

> the theoretical framework linked to "do the most good" already gives us a way to think about how to choose causes while taking into account inter-cause spillovers

I think impact 'spill-overs' between causes are a good representation of how most EAs think about the relationship between causes and impact. However, I see this as an inaccurate representation of what's actually going on, and I suspect it leads to a substantial misallocation of resources.

I suspect that long-term flow-through effects typically outweigh the immediate observable impact of working on any given cause (because flow-through effects accumulate indefinitely over time). 'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.
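As a toy illustration of that claim (all numbers entirely hypothetical): a small benefit that compounds can eventually dwarf a much larger one-off benefit.

```python
# Hypothetical comparison: a one-off gain vs. a small effect that
# compounds over time (a stand-in for long-term flow-through effects).
one_off_gain = 100.0      # large immediate, non-compounding benefit
seed, growth = 1.0, 0.05  # small benefit compounding at 5%/year

flow_through = sum(seed * (1 + growth) ** year for year in range(100))
print(round(flow_through))  # ~2610 over 100 years, dwarfing 100
```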

I see 'Earth optimization' as a useful concept to help us develop our cause prioritisation methodology to better account for the inherent complexity of the world we aim to improve and for long-run flow-through effects, and thus to help us allocate our resources more effectively as individuals and as a movement.

'Spillover' is a common term in economics, and I'm using it interchangeably with externalities/'how causes affect other causes'.

> 'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.

Spillovers can be simple or complex; nothing in the definition says they have to be "neatly attributed". But you're right, long-term flow-through effects can be massive. They're also incredibly difficult to estimate. If you're able to improve on our ability to estimate them, using complexity theory, then more power to you.

It could be a useful framing. "Optimize" may imply, to some people, making something already good great: making the countries with the highest HDI even better, or helping emerging economies become high income, rather than helping the worst-off countries catch up to the happier ones. It could be viewed as helping a happy person become super happy rather than helping a sad person become happy. I know this narrow form of altruism isn't your intention; I'm just saying that "optimize" does have this connotation. I personally prefer "maximally benefit/improve the world." It's almost the same as your expression but without the make-good-even-better connotation.

I think EAs have always thought about the impact of collective action, but it's just really hard, or even impossible, to estimate how your personal efforts will further collective action and to compare that to more predictable forms of altruism.

I like this phrasing, but maybe not for the reason you propose.

"Doing the most good" leaves implicit what is good, but still uses a referent ("good") that everyone thinks they know what it means. I think this issue is made even clearer if we talk about "optimizing Earth" instead since optimization must always be optimizing for something. That is, optimization is inherently measured and is about maximization/minimization of some measure. Even when we try to have a generic notion of optimal we still really mean something like effective or efficient as in optimizing for effectiveness or optimizing for efficiency.

But if EA is about optimizing Earth or doing the most good, we must still tackle the problem of what is worth optimizing for and what is good. You mention impact, which sounds to me like some combination of effectiveness and productivity multiplied by effect size; yet when we are this vague, EA becomes more of a productivity movement and less of a good-doing movement, whatever we may think good is. The trouble is that this exposes the hollowness of the ethical content in the message: it makes it unclear what things would not benefit from being part of EA.

To take a repugnant example, if I thought maximizing suffering were good, would I still be part of EA since I want to optimize the Earth (for suffering)?

The best attempt at dealing with this issue has, for me, been Brian Tomasik's writing on moral multiplicity and compromise.
