Comment author: tylermjohn 26 October 2017 03:49:22PM 1 point [-]

Thanks so much for this thoughtful and well-researched write-up, Kelly. The changes you recommend seem extremely promising and it's very helpful to have all of these recommendations in one place.

I think there are some additional reasons, beyond those stated in this post, that increase the value of making EA a more diverse and inclusive community. First, if the EA movement genuinely aspires to cause-neutrality, then we should care about benefits that accrue to others regardless of who those people are and independent of the causal route to those benefits. As such, we should also care about the benefits that becoming a diverse and inclusive movement would have for women, people of color, and disabled and trans people in and outside of the community. If, as you argue and as is antecedently quite plausible, the EA movement is engaging in the very same discriminatory practices in our movement-building as people tend to engage in everywhere else, then we are artificially boosting the prestige, visibility, and perceived status of white, cis, straight, able-bodied men; we are creating a community that is less sensitive to stereotype threat and to micro- and macroaggressions than it otherwise could be; and we are lending legitimacy to stereotypes and to business and nonprofit models which arbitrarily exclude many people. All of this harms or reduces the status and power of women, people of color, and disabled and trans people and furthers discrimination against them - a real and significant cost to organizing in this way.

Second, even if one thinks that this effect size will be very small compared to the good that the EA movement is doing (which is less obvious than EAs sometimes assume without argument), 1) these are still pure benefits, which strengthens the case for improving the EA community in the respects you argue for, and 2) if the EA community fails to become more diverse and inclusive, we'll suffer reputation costs in the media, in academia, among progressives, and in the nonprofit world for being an exclusionary community. This would come at a significant cost to our potential to build a large and sustainable movement and to create strong, elite networks and ties. And at this point, this worry is very far from a mere hypothetical.

I think we have our work cut out for us if we want to build a better reputation with the world outside of our (presently rather small) community, and that the courses of action you recommend will go quite a long way to getting us there.

Comment author: KelseyPiper 26 October 2017 06:28:46PM 25 points [-]

I just want to quickly call attention to one point: "these are still pure benefits" seems like a mistaken way of thinking about this - or perhaps I'm just misinterpreting you. To me "pure benefits" suggests something costless, or where the costs are so trivial they should be discarded in analysis, and I think that really underestimates the labor that goes into building inclusive communities. Researching and compiling these recommendations took work, and implementing them will take a lot of work. Mentoring people can have wonderful returns, but it requires significant commitments of time, energy, and often other resources. Writing up community standards about conduct tends to be emotionally exhausting work which demands weeks of time and effort from productive and deeply involved community members who are necessarily sidelining other EA projects in order to do it.

None of this is to say 'it isn't worth it'. I expect that some of these things have great returns to the health, epistemic standards, and resiliency of the community, as well as, like you mentioned, good returns for the reputation of EA (though from my experience in social justice communities, there will be articles criticizing any movement for failures of intersectionality, and the presence of those articles isn't very strong evidence that a movement is doing something unusually wrong). My goal is not to say 'this is too much work' but simply 'this is work' - because if we don't acknowledge that it requires work, then work probably will not get done (or will not be acknowledged and appreciated).

Once we acknowledge that these are suggestions which require varying amounts of time, energy and access to resources, and that they impose varying degrees of mental load, then we can start figuring out which ones are good priorities for people with limited amounts of all of the above. I've seen a lot of social justice communities suffer because they're unable to do this kind of prioritization and accordingly impose excessively high costs on members and lose good people who have limited resources.

So I think it's a bad idea to think in terms of 'pure benefit'. Here, like everywhere else, if we want to do the most good we need to keep in mind that not all actions are equally good or equally cheap so we can prioritize the effective and cheap ones.

I'm also curious why you think the magnitude of the current EA movement's contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans. To be clear about where I'm coming from, I think the most important thing the EA community can do is be a community that fosters fast progress on the most important things in the world. Obviously, this will include being a community that takes contributions seriously regardless of their origins and elicits contributions from everyone with good ideas, without making any of them feel excluded because of their background. But that makes diversity an instrumental goal, a thing that will make us better at figuring out how to improve the world and acting on the evidence. From your phrasing, I think you might believe that harmful societal structures in the western world are one of the things we can most effectively fix? Have you expanded on that anywhere, or is there anyone else who has argued for that who you can point me to?


Giving What We Can Pledge thoughts

The Center for Effective Altruism commissioned me to write an essay on Giving What We Can’s pledge to give 10% of one's income to the most effective charities. They did not ask that the essay have any particular contents, but I expect that I would not have agreed to write... Read More
Comment author: KelseyPiper 07 April 2016 02:50:53AM 4 points [-]

All your posts on cause prioritization have been really valuable for me, but I think this is my favorite so far. It clearly motivates what you're doing and the circumstances under which we'll end up being forced to do it. It compares the result you got from a formal mathematical model to the result you got when you tried to use your intuitions and informal reasoning on the same problem, which both helps sanity-check the model and helps make the case that it's a useful endeavor for people who already have estimates arrived at through informal reasoning. And it spells out the framework explicitly enough that other people can use it.

I'm curious why you don't think the specific numbers can be trusted. My instincts are that the cage-free numbers are dubious as to how much they improve animal lives, that your choice of prior will affect the result so much it's probably worth having a table of results given different reasonable priors, and that "the value of chickens relative to humans" is the wrong way to think about how good averting chicken suffering is compared to making people happier or saving their lives (chickens are far less valuable to me than humans, but chicken-torture is probably nearly as morally bad as human-torture; I am not sure that the things which make torture bad vary between chickens and people). Are those the numbers you wanted us to flag as dubious, or were you thinking of different ones?

Comment author: Bernadette_Young 19 August 2015 08:57:19AM 6 points [-]

I think anybody wanting to raise a potentially divisive or negative discussion should think carefully about how likely a given discussion is to be self-defeating, or to yield negative results that outweigh the benefits.

The setting matters a lot here: if you post on Facebook, the discussion gets published in lots of people's feeds in a manner that posters don't control (I find 'likes' on comments I make in the EA FB group from friends I know are not members of that group). Also, Facebook's policy of only allowing 'upvoting' means that the degree to which people's statements are well or badly received is not well reflected. Finally, the listing of threads by order of most recent comment keeps pile-ons at the top of the current discussion.

(This also creates an important asymmetry: those who don't care about the discussion being damaging are more likely to continue it, while those who disagree might avoid voicing their disagreement in the hopes that the thread will die away.)

This forum doesn't suffer any of those drawbacks, so I believe it is a better arena for raising these issues for discussion if you reasonably believe there is something important at stake.

Comment author: KelseyPiper 21 August 2015 04:47:53AM *  3 points [-]

I really agree here. Other factors that make Facebook conversations particularly inflammatory include Facebook's lack of threading (you can't easily see who a person is responding to, or whether the tone of a response is appropriate to the original post), the way Facebook comment threads rapidly stack up with hundreds of comments, some only tangentially related to the original post, and the wide variance in moderation schemes. I've been disillusioned by some of the conversations on Facebook, but this comment made me more optimistic that this is a platform issue, not a problem with open discussion of EA concerns.

Comment author: tomstocker 19 August 2015 08:49:13AM 3 points [-]

Those quotes aren't real quotes, right? I recognize the offensive one about 'autistic white nerds' from a weird article by one of the Vox founders, but I would have bet against many of the others having been said by anyone.

Comment author: KelseyPiper 19 August 2015 09:05:11PM 4 points [-]

No, sorry, they are not. And not all of these are pitfalls I've witnessed specifically in EA outreach - the atheist/skeptic community and campus conservative/libertarian groups are where I watched a lot of these mistakes get made.


Pitfalls in Diversity Outreach

(This is an adaptation of a post on my blog.) EA is one of several movements I have seen which have tried to address the problem of a lack of diversity, either demographic diversity (i.e. too many men, too many white people) or ideological diversity (too many programmers, too many... Read More
Comment author: riceissa 05 July 2015 03:48:19AM 3 points [-]

"Check whether your school explicitly desires or opposes affiliation with an outside organization."

Do you happen to know why Stanford didn't like the affiliation with external organizations?

Comment author: KelseyPiper 05 July 2015 07:07:40PM 2 points [-]

I think they were concerned that the Stanford brand name would be used for publicity and /or fundraising by organizations outside their control.

Comment author: Marcus_A_Davis 14 April 2015 05:12:15PM 5 points [-]

This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.

Also, "What is the first thing you would do if appointed dictator of the United States?" likely just entered my favorite questions to ask anyone in ice-breaker scenarios, many of which have nothing to do with EA.

Comment author: KelseyPiper 16 April 2015 05:45:10AM 3 points [-]

The question we've had the most success with for a regular/weekly meetup is "what is something interesting you've learned/read/thought about recently". The advantage to keeping it consistent is that people know what to expect; this question also avoids most of the disadvantages of keeping the question consistent (namely that people repeat themselves and get bored). It also tends to provoke fascinating answers.


Meetup : Stanford THINK

Discussion article for the meetup: Stanford THINK
WHEN: 05 October 2014 04:30:00PM (-0700)
WHERE: Stanford University, Old Union rm 121
Stanford's EA group meets here weekly; please stop by!