
At the most recent Effective Altruism Global in San Francisco, I presented CFAR's Double Crux technique for resolving disagreements. 

For the "practice" part of the talk, I handed out a series of prompts on EA topics, to generate disagreements to explore. Several people liked the prompts a lot and asked that I make them generally available.

So here they are! Feel free to use these at EA events, as starting points for discussions or Double Cruxes.

(For Double Crux practice at least, I recommend giving each person time to consider their own answers before talking with their partner. I find that when I don't do this, participants are often influenced by each other's beliefs and the conversation is less fruitful.)

Prompts:

  • You have a million dollars to donate to the organization or charity of your choice. To which organization do you give it?
  • If you could make one change to EA culture on a single dimension (more X, or less Y), what change would you make? (Another framing on this question: what annoys you about EA or EAs?)
  • Should EA branding ever lie or mislead in some way, in order to get people to donate to effective charities? (For instance: exaggerating the cheapness with which a human life can be saved might get people more excited about donating to the Against Malaria Foundation and similar.)
  • You’re in charge of outreach for EA. You have to choose one demographic to focus on for introducing EA concepts to, and bringing into the movement. What single demographic do you prioritize?
  • Are there any causes that should not be included as a part of EA?
  • Should EA be trying to grow as quickly as possible?

This is the printout that I typically use when teaching Double Crux to EAs, which includes instructions and some additional binary, non-EA questions.

Comments

I think the double crux game can be good for dispute resolution. But I think generating disagreement even in a sandbox environment can be counterproductive. It's similar to how a public debate seems, on its face, like it can better resolve a dispute, but if one party isn't willing to debate entirely in good faith, they can ruin the debate to the point that it shouldn't have happened in the first place. Even if a disagreement isn't socially bad in that it will persist as a conflict after a failed double crux game, it could limit effective altruists to black-and-white thinking after the fact. This lends itself to an absence of the creative problem-solving EA needs.

Perhaps even more than collaborative truth-seeking, the EA community needs individual EAs to learn to think for themselves more, to generate possible solutions that the community's core can't come up with on its own. There are a lot of EAs with spare time on their hands that could be put to better use, but who don't have something to put it towards. I think starting independent projects can be a valuable use of that time. Here are some of these questions reframed to prompt effective altruists to generate creative solutions.

Imagine you've been given discretion of 10% of the Open Philanthropy Project's annual grantmaking budget. How would you distribute it?

How would you solve what you see as the biggest cultural problem in EA?

Under what conditions do you think the EA movement would be justified in deliberately deceiving or misleading the public?

How should EA address our outreach blindspots?

At what rate should EA be growing? How should that be managed?

These questions are reframed to be more challenging, but that's my goal. I think many individual EAs should be challenged to generate less confused models on these topics, and deliberation like double crux should start from there, between those models. Especially if participants start from a place of ignorance of current thinking on these issues in EA[1], I don't think either side of a double crux game will generate an excellent but controversial hypothesis worth challenging in the span of only a couple of minutes.

The examples in the questions provided are open questions in EA that EA organizations themselves don't have good answers to, and I'm sure they'd appreciate additional thinking and support building off their ideas. These aren't binary questions with just one of two possible solutions. I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should. There's no problem with the double crux game itself, but maybe EAs should learn it without using EA examples.

[1] This sounds callous, but I think it's a common coordination problem we need to fix. It isn't hard to fall into, as it's actually quite easy to miss important theoretical developments that make the rounds among EA orgs but aren't broadcast to the broader movement.

I like these modified questions.

The reason why the original formulations are what they are is to get out of the trap of everyone agreeing that "good things are good", and to draw out specific disagreements.

The intention is that each of these has some sort of crisp "yes or no" or "we should or shouldn't prioritize X". But also the crisp "yes or no" is rooted in a detailed, and potentially original, model.

I strongly agree that more EAs doing independent thinking is really important, and I'm very interested in interventions that push in that direction. In my capacity as a CFAR instructor and curriculum developer, figuring out ways to do this is close to my main goal.

I think many individual EAs should be challenged to generate less confused models on these topics, and deliberation like double crux should start from there, between those models.

Strongly agree.

I don't think either side of a double crux game will generate an excellent but controversial hypothesis worth challenging in the span of only a couple of minutes.

I think this misses the point a little. People at EAG have some implicit model that they're operating from, even if it isn't well-considered. The point of the exercise in this context is not to get all the way to the correct belief, but rather for participants to engage with what they think and what would cause them to change their mind.

This kind of Double Crux exercise is part of the de-confusion and model-building process.

I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should.

I mostly teach Double Crux and related techniques at CFAR workshops (the mainline, and specialty / alumni workshops). I've taught it at EAG 4 times (twice in 2017), and I can only observe a few participants in a session. So my n is small, and I'm very unsure.

But it seems to me that using EA examples mostly has the effect of fleshing out understanding of other EAs' views, more than flattening and simplifying them. People are sometimes surprised by what their partner's cruxes are, at least (which suggests places where a straw model is getting updated).

But, participants could also be coming away with too much of an either-or perspective on these questions.

Yeah, reading your comments has assuaged my concerns: based on your observations, the sign of the consequences of double-cruxing on EA example questions seems more unclear than clearly negative, and likely slightly positive. In general it seems like a neat exercise that is interesting, but just doesn't provide enough time to leave EAs with an impression of these issues much stronger than the one they came in with. I am still thinking of making a Google Form with my version of the questions and posing them to EAs, to see what kind of responses are generated, as an (uncontrolled) experiment. I'll let you know if I do so.

You’re in charge of outreach for EA. You have to choose one demographic to focus on for introducing EA concepts to, and bringing into the movement. What single demographic do you prioritize?

What sort of discussions does this question generate? Do people mostly discuss demographics that are currently overrepresented or underrepresented in EA? If there’s a significant amount of discussion around how and why EA needs more of groups that are already overrepresented, it probably wouldn’t feel very welcoming to someone from an underrepresented demographic. You may want to consider tweaking it to something like “What underrepresented demographic do you think EA most needs more of on the margins?”

FWIW, I have similar concerns that people might interpret the question about lying/misleading as suggesting EA doesn’t have a strong norm against lying.

What sort of discussions does this question generate?

Here are demographics that I've heard people list.

  • AI researchers (because of relevance to x-risk)
  • Teachers (for spreading the movement)
  • Hedge fund people (who are rich and analytical)
  • Startup founders (who are ambitious and agenty)
  • Young people / college students (because they're the only people that can be sold on weird ideas like EA)
  • Ops people (because 80k and CEA said that's what EA needs)

All of these have very different implications about what is most important on the margin in EA.

Aside from Ops people, I’d guess the other five groups are already strongly overrepresented in EA. This exercise may be sending an unintended message that “EA wants more of the same”, and I suspect you could tweak the question to convey “EA values diverse perspectives” without sacrificing any quality in the discussion. Over the long-term, you’ll get much better discussions because they’ll incorporate a broader set of perspectives.

I'm not sure I follow. The question asks what the participants think is most important, which may or may not be diversity of perspectives. At least some people think that diversity of perspectives is a misguided goal that erodes core values.

Are you saying that this implies that "EA wants more of the same" because some new EA (call him Alex) will be paired with a partner (Barbra) who gives one of the above answers, and then Alex will presume that what Barbra said was the "party line" or "the EA answer" or "what everyone thinks"?

EA skews young, white, male, and quantitative. Imagine you’re someone who doesn’t fit that profile but has EA values, and is trying to decide “is EA for me?” You go to EA Global (where the audience is not very diverse) and attend a Double Crux workshop. If most of the people talk about prioritizing adding AI researchers and hedge fund people (fields that skew young, male, and quanty), it might not feel very welcoming.

Basically, I think the question is framed so that it produces a negative externality for the community. And you could probably tweak the framing to produce a positive externality for the community, so I’d suggest considering that option unless there’s a compelling reason to favor the current framing. People can have a valuable discussion about which new perspectives would be helpful to add, even if they don’t think increasing diversity of perspectives is EA’s most important priority.

I made different points, but in this comment I'm generally concerned that doing something like this at big EA events could publicly misrepresent and oversimplify a lot of the issues EA deals with.

These would be fun questions to chat over at an EA party. :)