James_Winterford

5 karma · Joined Oct 2017

Comments (4)

Summary: changes in people's "values" aren't the same as changes in their involvement in EA, and this analysis treats the two as the same thing; also, some observations from my own friend group on value changes vs. retention.

Judging by the wording in "The Data" section, it sounds like no distinction was made between "lowered involvement in EA accompanied by a change in preferences" and "lowered involvement in EA while remaining equally altruistic."

I can think of three people I've known who were previously EAs (by your six-month involvement definition) and then left, but who remained as altruistic as before, and two more who really liked the core ideas but bounced off within the first month and remained as altruistic as before. There are two I know (both met the six-month involvement measure) who left and ended up being less altruistic afterwards.

All of which is really irrelevant, since you'd need a much more systematic data-collection effort to reach any serious conclusions about 'value drift'; the point stands that changes in people's "values" aren't the same as changes in their involvement in EA. I'm sure there's non-EA literature on values and value change that you'd benefit from engaging with.

(The two who became less altruistic were riding a surge of hope related to transhumanism; when that hope faded, they left. The other five left over some mix of disliking the social atmosphere of EA and thinking it ineffective at reaching its stated goals. These are very different kinds of reasons to leave EA! I put scare quotes around "values" and "value drift" because I find it more informative to look at what actions people take rather than at what values are important to them.)

The most efficient point of intervention on this issue is for confident insiders to point out when a behavior has unintended consequences or is otherwise problematic.

The post mentions this. It's hard to get stable, non-superficial buy-in for it from the relevant parties; everyone wants to talk the talk. But when you do get it, the effect is very different from what you get by hiring another diversity & inclusion officer.

I know of a few Fortune 500 companies that take the idea that this stuff affects their bottom line seriously enough that people in positions of power act on it; EA, by contrast, seems more like a social club.

I don’t see many people who want to figure out how much of a problem there is and then apply, e.g., utilitarianism to decide what to do about it. That would count as acting seriously.

I like Michael's distinction between the style and core of an argument. I'm editing this paragraph to clarify the way in which I'm using a few words. When I talk about whether an argument is actually combative or collaborative, I mean to indicate whether it is more effective at goal-oriented problem-solving or at achieving political ends. By politics, I mean something like "social maneuvers taken to redistribute credit, affirmation, etc. in a way that is expected to yield selfish benefit". For example, questioning the validity of sources would be combative if the basic points of an argument held regardless of the validity of those sources.

Claims like “EA would attract many additional high-quality people if women were respected” or “social justice policing would discourage many good people from joining EA” are, while true, basically all combative, and the framing of effectiveness just helps people self-deceive into thinking they’re motivated by impact or truth. They’re using a collaborative style (the style of caring about impact/truth) to do a combative thing (politics, in the broad sense of the word).

Ultimately, I can spin the observation that these things are combative into a stylistically collaborative but actually combative argument for my own agenda, so everything I’m saying is suspect. To illustrate: the EA phrase “politics is the mindkiller” is typically used combatively in this way, and I could do something similar here. “Politics is the mindkiller” is itself the mindkiller, but recognizing this won’t solve the problem, in the same way that recognizing that politics is the "mindkiller" doesn’t.

People can smell this, and they’d be right to distrust your movement’s ability to think clearly about impact if you’re using claims of impact and clearer thinking to justify your own politics. People who are bright enough to figure this out are typically the ones I'd want to be working with.

Yeah, you all have a problem with how you treat women and other minority groups. Kelly did a lot of work to point out a real phenomenon, and I don’t see anyone taking her very seriously. You let people who want to disparage women get away with doing so by using a collaborative “impact and truth” discussion style to achieve combative, political aims. That’s just where the social balance of power lies in EA. People would use “impact and inclusivity” as a collaborative style to achieve political aims if the balance of power were flipped. Plausibly there’s an intermediate sweet spot where this happens less overall, though shifting the balance of power to such a spot is never a complete solution. I suspect a better approach would be to get rid of the politics first; that would make it easier to make progress on inclusivity.

The norm of letting people stylize politics with talk of impact and truth is deeply ingrained in EA. If you want to think clearly about impact and truth, it’s best to work outside EA’s social edifice. That feels like a shame, but it isn’t too bad if you take the view that good people will eventually be drawn to you if you’re doing excellent work. That was GiveWell's strategy, and it worked.