Comment author: Kelly_Witwicki 01 November 2017 12:09:10AM *  0 points [-]

I think we score quite a bit worse on "feeling" than most altruistically-driven communities and individuals, men included.

[Edit: Point being: yes, we're lacking in feeling, but "thinking vs. feeling" is not a tradeoff we have to make to increase our A (or our gender parity, which isn't an inherent problem but is tightly related to our problems). EA's whole purpose is to combine both, and we should aim to recruit people who score high on both, not just one or the other. Sorry for the excessive edits.]

Comment author: Michael_Wiebe 01 November 2017 09:52:46PM 0 points [-]

My understanding of Myers-Briggs is that 'thinking' and 'feeling' are mutually exclusive, at least on average, in the sense that being more thinking-oriented means being less feeling-oriented. The E vs. A framing is different: it seems you could have people who score high on both. Is there any personality research on this?

Comment author: Michael_Wiebe 01 November 2017 09:22:41PM 2 points [-]

In all likelihood men just hide their emotions better than women

I think citing this article weakens your overall argument. The study has n=30 and is likely more of the same low-quality, non-preregistered social psychology research that is driving the replication crisis. Your argument is strong enough on its own (think of examples of men being snarky, insulting others, engaging in pissing contests) without needing to cite a flimsy study. Otherwise, people start questioning whether your other citations are trustworthy.
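
For intuition, here's a rough power calculation (my own sketch, not from the article; it assumes a two-group comparison with 30 subjects per group and a 'small' true effect of d = 0.3):

```python
# Hypothetical power check: how likely is a study of this size to detect
# a small true effect? Assumes two independent groups of 30 and d = 0.3.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.3, nobs1=30, alpha=0.05)
print(f"power = {power:.2f}")  # ~0.20, far below the conventional 0.80
```

A study that underpowered will usually find nothing, and when it does find something, the estimate is likely inflated; that is exactly the pattern driving the replication crisis.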

Comment author: Michael_Wiebe 31 October 2017 10:12:03PM 2 points [-]

Is it true that men score higher than women in 'thinking' vs 'feeling'? If so, the EA community (being dominated by men) might be structured in ways that appeal to 'thinkers' and deter 'feelers'. To reduce the gender gap in EA, we would have to make the community be more appealing to 'feelers' (if women are indeed disproportionately 'feelers').

Comment author: oliverbramford 23 October 2017 03:54:28PM *  0 points [-]

Systems change causes are inherently complex and thus often appear highly intractable at first. However, detailed systems analysis can sometimes identify a set of viable (and perhaps novel) approaches that are much more tractable than expected.

For example, the system of animal agriculture and animal product consumption is pretty complex, but ACE has done a great job of identifying charities that are working very effectively on different aspects of that system (cultured meat, corporate advocacy, promoting veganism, etc.).

Analysing a complex system in detail sheds new light on what's broken and why, and can highlight novel and neglected solutions (e.g. cultured meat) that make changing the system far more tractable.


changing the political system is highly intractable

The political system is very complex, but we don't yet know how tractable it is. We are currently researching this at EA Geneva/Geneva Macro Labs. If we find a political systems change strategy that is even moderately tractable, I suspect it would be worth pursuing, given the magnitude of the likely flow-through effects. If we change the political system to better prioritise policies, this would make changing many other important systems (economic, defence, education, etc.) far more tractable.

Comment author: Michael_Wiebe 23 October 2017 04:37:06PM 0 points [-]

For example, the system of animal agriculture and animal product consumption is pretty complex, but ACE has done a great job

But they didn't use complex systems theory, did they? They just used the regular EA framework of impact/tractability/neglectedness.

Comment author: oliverbramford 23 October 2017 03:14:55PM *  0 points [-]

while taking into account externalities (as EAs do)

I think the current EA methodology for taking impact externalities into account is incomplete. I am not aware of any way to reliably quantify flow-through effects, or to quantify how a particular cause area indirectly affects the impact of other cause areas.

The concept of total impact, if somehow integrated into our cause prioritisation methodology, may help us to account for impact externalities more accurately. I concede that total impact may be too simplistic a concept...

For what it's worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability-weighted theory of change for the system as a whole.


It seems you're trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues.

I do not mean to say that EA focuses on small issues and systems change focuses on big issues. Rather, I see EA as having a robust (but incomplete) cause prioritisation methodology, and systems change as having a methodology that accounts well for complexity (but neglects cause prioritisation in the context of the Earth system as a whole).


This is pretty mystical.

On reflection, I think that conducting systems change projects in appropriate phases, with clear expectations for each phase, is a viable way to synthesise EA and systems change approaches and cultures. Specifically, a substantial research phase would typically be required to understand the system before one can know which interventions to prioritise.

Comment author: Michael_Wiebe 23 October 2017 04:28:38PM 0 points [-]

For what it's worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability-weighted theory of change for the system as a whole.

I'd be interested in seeing this. Do you have anything written up?

Comment author: oliverbramford 23 October 2017 11:02:55AM *  0 points [-]

the theoretical framework linked to "do the most good" already gives us a way to think about how to choose causes while taking into account inter-cause spillovers

I think impact 'spill-overs' between causes are a good representation of how most EAs think about the relationship between causes and impact. However, I see this as an inaccurate representation of what's actually going on, and I suspect it leads to a substantial misallocation of resources.

I suspect that long-term flow-through effects typically outweigh the immediate observable impact of working on any given cause (because flow-through effects accumulate indefinitely over time). 'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.
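
As a stylised illustration of why accumulation matters (my own numbers and notation, not from the original posts): suppose working on a cause yields an immediate impact I, plus a flow-through effect of gI in every subsequent year. Discounting at rate r per year, the total is

```latex
I + \sum_{t=1}^{\infty} \frac{g I}{(1+r)^{t}} = I \left( 1 + \frac{g}{r} \right)
```

so whenever g > r, the accumulated flow-through term alone exceeds the immediate impact.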

I see 'Earth optimization' as a useful concept to help us develop our cause prioritisation methodology to better account for the inherent complexity of the world we aim to improve and for long-run flow-through effects, and thus to help us allocate our resources more effectively, as individuals and as a movement.

Comment author: Michael_Wiebe 23 October 2017 04:21:18PM 0 points [-]

'Spillover' is a common term in economics, and I'm using it interchangeably with externalities/'how causes affect other causes'.

'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.

Spillovers can be simple or complex; nothing in the definition says they have to be "neatly attributed". But you're right that long-term flow-through effects can be massive. They're also incredibly difficult to estimate. If you're able to improve our ability to estimate them using complexity theory, then more power to you.

Comment author: MichaelPlant 23 October 2017 03:53:09PM 5 points [-]

Hello. I'll reply here even though my comment covers all three posts. My general worry is that you've drawn a distinction without a difference. I can see that 'optimise Earth' and 'do the most good' are different sentences, but I'm still not really sure what the practical differences between them are supposed to be. As a test: what would a 'do the most good-er' and an 'Earth optimiser' disagree about?

On your EA paradigm vs. systems change paradigm, I fear you've specified EA in a particular, not very generous, way just to make your point. For instance, why can't an EA say "we should maximise our collective impact"? I agree that EAs should be thinking about co-operation and the full impact of their actions, but EAs tend to be aware of this already. I don't think I know anyone who really wants to maximise their personal impact at the expense of total impact (whatever exactly that means).

Then on this post, you seem to be presenting this as if no EA has ever thought about systemic change. But that's not true. EAs have often thought about systemic changes; they just haven't been (that) convinced by them. If you want to change people's minds on these issues, I would strongly encourage you to write up one of these areas as a cause profile and compare it to existing ones: e.g., how does it score on scale, tractability, and neglectedness? I generally much prefer it when people give (at least somewhat) worked-out arguments for why New Cause X is better than Old Cause Y, rather than saying only "ah, but what about New Cause X? Have you considered that?" Advocates of New Cause X tend to be able to do the best job of making the case for that cause, so it makes sense for them to try.

Comment author: Michael_Wiebe 23 October 2017 04:07:37PM 2 points [-]

And if your disagreement is with the scale/tractability/neglectedness framework, then argue against that directly.

Comment author: Michael_Wiebe 22 October 2017 06:20:47PM 2 points [-]

In general, a cause needs to score high on each of impact, tractability, and neglectedness to be worthwhile; because the factors effectively multiply, getting two out of three is no better than zero out of three. You've listed causes with high impact, but they're generally not tractable. For example, changing the political system is highly intractable.
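
To make the 'two out of three' point concrete, here's a toy model (my own illustration, not a standard EA formula) that treats the three factors as multiplicative:

```python
# Toy multiplicative scoring: a zero on any one factor drives the overall
# score to zero, regardless of how high the other two are.

def cause_score(impact, tractability, neglectedness):
    """Each factor scaled to [0, 1]; the overall score is their product."""
    return impact * tractability * neglectedness

print(cause_score(0.9, 0.9, 0.9))  # strong on all three: 0.729
print(cause_score(0.9, 0.0, 0.9))  # high impact, zero tractability: 0.0
print(cause_score(0.0, 0.0, 0.0))  # zero out of three: also 0.0
```

Under this model, scoring high on two factors and zero on the third is exactly as worthless as scoring zero on all three.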

Overall, I think that EA has already incorporated the key insights from systems change, and there's no need to treat it as separate from EA.

Comment author: Michael_Wiebe 22 October 2017 05:48:25PM 4 points [-]

I think the marginal vs. total distinction is confused. Maximizing personal impact, while taking into account externalities (as EAs do), will be equivalent to maximizing collective impact.
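
To put that slightly more formally (my own sketch; the notation is not from the original posts): let T(x_1, ..., x_n) be collective impact, where x_i is the action of individual i. If 'personal impact' counts externalities, it is i's full marginal contribution to T:

```latex
P_i(x_i) = T(x_i, x_{-i}) - T(0, x_{-i}), \qquad
\arg\max_{x_i} P_i(x_i) = \arg\max_{x_i} T(x_i, x_{-i})
```

Since T(0, x_{-i}) does not depend on x_i, maximizing personal impact (externalities included) and maximizing collective impact recommend the same action, holding others' actions fixed.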

An Effective Altruist, by focusing on impact at the margin, may ask questions such as: What impact will my next $100 donation make in this charity vs that charity?

It seems you're trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues. But this is a strawman. Even if an individual makes a $100 donation, the cause they're donating to can still target a systemic issue. In any case, there are now EAs making enormous donations: "What if you were in a position to give away billions of dollars to improve the world? What would you do with it?"

This approach invites sustained collective tolerance of deep uncertainty, in order to make space for new cultural norms to emerge. Linear, black-and-white thinking risks compromising this creative process before desirable novel realities have fully formed in a self-sustaining way.

This is pretty mystical.