Comment author: e102 13 November 2017 04:25:36AM *  5 points [-]

As I understand it, there are two arguments in this article:

  • Sexual violence is bad for individuals.
  • Reducing sexual violence substantially is unlikely to be too difficult/costly.
  • Conclusion: We should generally look to evaluate/fund/spend time on solutions to sexual violence.

and

  • Sexual violence reduces EA's impact
  • Preventing sexual violence in EA is unlikely to be too difficult/costly
  • Conclusion: We should spend more effort on reducing sexual violence in EA because it will increase our effectiveness.

Sexual Violence in the world

On funding/spending time on sexual violence reduction programs generally: we all agree that sexual violence is bad. The question is whether there are cost-effective ways to tackle it. Your statistics indicate that rape has a 1 in 208 chance of leading to death. Let's adjust that figure for the suffering rape causes even when non-fatal and say that 100 rapes are as bad as 1 death. We can currently save a life or equivalent for roughly $1,700 (GiveWell's deworming analysis). Assuming you agree with my rape-to-death badness ratio, that would imply that a rape prevention program would have to prevent 100 rapes per $1,700, or one rape per $17, with a high degree of certainty, to be competitive with our current best option. While I don't think that is impossible, I also don't think there's any strong evidence in the article that this is the case.
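For concreteness, here is a minimal back-of-the-envelope sketch of that comparison. Both inputs are the assumptions stated in this comment (the $1,700 benchmark and the 100:1 rape-to-death badness ratio), not established figures:

```python
# Back-of-the-envelope comparison, using the assumptions stated above:
# both values are the commenter's assumptions, not established estimates.
cost_per_life_equivalent = 1700    # dollars per life saved or equivalent (assumed GiveWell-style benchmark)
rapes_per_death_equivalent = 100   # assumed badness ratio: 100 rapes ~ as bad as 1 death

# To match the benchmark, a program must avert ~100 rapes per $1,700 spent,
# i.e. one rape per $17 or less.
max_cost_per_rape_prevented = cost_per_life_equivalent / rapes_per_death_equivalent
print(max_cost_per_rape_prevented)  # -> 17.0 dollars per rape prevented
```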

As for the more meta-level claim that the EA community should devote more resources/time to research in the area: I agree that while a lot of attention is given to the issue, very little evaluation of program effectiveness is currently being done. I agree that this means there is likely a great deal of low-hanging fruit for EA in terms of redirecting funding to more effective interventions. I'm just not sure that sexual violence is a better investment of our time or attention than other problems such as ethnic violence/warfare, drugs, crime, environmental damage, mental health, AI, etc.

Sexual Violence within EA

On reducing sexual violence in the EA community. I think there are a few major issues with your analysis:

  • You assume that EAs are about as likely to experience sexual violence as the population norm. I'm not sure this is justified, but others have already commented on this so I won't repeat it here.
  • An extreme focus on sexual violence prevention within EA (sting operations, consent training, profiling, etc.) may repel potential members if it creates a perception that sexual violence is a significant problem in the community or that EA is dominated by the far left.
  • Your policy recommendations contain a number of suggestions that seem likely to be ineffective, legally dangerous and morally dubious.
    • 3: Sting operations. These expose anyone participating in them to massive liability. By running one, you are at the very least knowingly putting another person in a situation where you suspect sexual assault is likely. You are either recording someone without their consent, which is a crime in some jurisdictions, or not recording and hence having no evidence even if the sting is successful. You are also creating significant reputational damage for the employee/person in question once you have an operation in which a significant number of other employees and superiors conspire against them around the shared belief that they are a sex offender. At the very least this opens you up to civil liability for libel/defamation/harassment at work, and it may well constitute criminal harassment depending on the jurisdiction. On top of the legal risks, these kinds of operations in an NGO could have a severely negative reputational effect.
    • 5 & 7: Robust sex offender detection strategy / minimising bad attitudes. We can take into account behavioural risk factors such as whether the person believes rape myths, and then tweak a probability further using personality research ("male patriarchal values [66]", "men's acceptance of traditional sex roles"). This is profiling and, while possibly effective, is morally dubious. If being introverted increases the risk of sexual assault, does that mean we should avoid hiring introverts or letting them into EA? What if devout Muslims/Christians/Xs have an increased rate of sexual assault? What about race? What about political opinions, gender, age, sex, IQ, nationality, etc.? A general moral principle I stand by is that we should treat people as individuals and judge them by their own actions rather than by those of others who share traits with them. Discrimination based on group-level risks violates this principle and hence is morally unacceptable to me in all except the most extreme situations. Admittedly, whether you feel the same way depends on your moral intuitions, which may well differ from mine.
Comment author: casebash 18 November 2017 11:19:37PM 0 points [-]

"Let's adjust that figure for the suffering rape causes even when non-fatal and say that 100 rapes are as bad as 1 death." - that seems like an unrealistically low figure given that rape can lead to trauma that takes years to get over or derail someone's life.

Comment author: casebash 17 November 2017 07:12:44AM 2 points [-]

I'm not sure how useful this data is given that there are major distribution effects, i.e. if I distribute the survey through Less Wrong, I'll find a lot of people who first heard of the movement through Less Wrong, etc.

Comment author: kastrel  (EA Profile) 14 November 2017 10:47:53AM 3 points [-]

Thanks! I really didn't want it to be boring and dry, and I'm not on here a lot so I thought having a face to put to the blog would help.

How thorough you need to be absolutely depends on what you're working on - obviously if you're writing a literature review for publication you need to do a bit more due diligence than if you're just looking for the next thing to read. I would recommend Semantic Scholar as a more finely-tuned alternative to Google Scholar while still having a lot of free content.

Comment author: casebash 15 November 2017 03:24:41AM 0 points [-]

"I would recommend Semantic Scholar as a more finely-tuned alternative to Google Scholar while still having a lot of free content" - any specific ways in which it works better?

Comment author: casebash 01 November 2017 03:45:15AM *  6 points [-]

I've heard quite a few people say that they were wary about this kind of public outreach because they thought it might politicise the issue and do more harm than good. I'm not saying that this is my position, but what are your thoughts on stopping this from happening?

Further, it isn't clear from the above what kind of political action you intend to push for.

Comment author: xccf 29 October 2017 12:00:44AM 0 points [-]

Do you know of any spaces that don't have the problem one way or the other?

Comment author: casebash 29 October 2017 03:37:27AM *  1 point [-]

I would say that EA/Less Wrong are better in that any controversial claim you make is likely to be torn to shreds.

Comment author: ateabug 28 October 2017 10:35:43PM *  2 points [-]

I'd like to point out that the main post is written in a somewhat "culture war"-y style, which is why it has attracted so many comments/criticisms (within 3 days it already has more comments than any other thread on these forums, ever, as far as I can tell). Here's a somewhat similar thread that makes some good suggestions about diversity without getting too much into politics: http://effective-altruism.com/ea/mp/pitfalls_in_diversity_outreach/ (also take a look at the top comment).

Comment author: casebash 29 October 2017 12:17:45AM 2 points [-]

Yeah, the original post was much more culture war-y, but fortunately Kelly edited it to make it less so.

Comment author: Jon_Behar 28 October 2017 02:24:05PM 25 points [-]

Opinions mine, not my employer’s.

Very important article Kelly, thanks for writing! I don’t agree with 100% of your diagnoses or prescriptions (honestly I rolled my eyes at some of them), but absolutely share your concern that a lack of gender and racial diversity is hurting EA. I’d also add age diversity to the mix, and in my experience (which I doubt is unique) this issue interacts with the gender and racial issues in a problematic way.

Back in my 20s, I would have brushed off and rationalized away your diversity concerns. At that time, I was the type of person over-represented in EA: young, male, studied econ at an elite school, working as a hedge fund quant in an explicitly hyper-rational and confrontational work environment, maximum “thinker” assessment on the Myers-Briggs thinker vs. feeler spectrum, etc. Many (probably “most”, or even “almost all”) of my friends and co-workers fit the same description. And I placed a very high value on my opinion, and the opinions of people like me.

Now I’m pushing 40, and I’m still a quanty, thinker vs. feeler guy with a blunt communication style. But I’ve acquired a valuable perspective on just how stupid really smart 20-somethings can be. When you work at a place that hires lots of people who fit the same profile year after year, certain patterns become obvious. You see the first-year analyst class making the same mistakes each year, and realize they’re the same mistakes you and your cohorts made when you were first-year analysts. You see that some people, with impeccable backgrounds/resumes, simply aren’t very good at their jobs for a variety of reasons. It turns out that even really, really smart people mess up in very systematic ways. For instance, the type of people overrepresented in EA (myself included) generally aren’t that great at being humble (probably because of all the good grades and accomplishments). They also undervalue people skills - until I was lucky enough to meet an enormously talented salesperson and watch him build and nurture relationships that were critical to landing many multibillion-dollar accounts, I thought the marketers were just people who couldn’t hack the math to do real finance work. I’m sure I still carry this bias to some degree.

When I was younger, I would have fallen in the “sure EA is homogeneous, but can you prove that’s a problem?” camp. With another ~15 years of perspective, I think that gets the burden of proof backwards. We’ve already experienced some of the negatives - remember when an EA journalist went to EA Global and felt a big part of the story was EA naiveté? We know the EA community and its leadership disproportionately represent populations who systematically lack humility (the “best and brightest”), experience (the young), and access to alternative perspectives (the women, people of color, people who remember the 70s, etc. who are mission-aligned but think EA is too much work to interact with). That’s a lot of red flags (and FWIW most of my background is in risk management).

So now I’ve come around to the view that the EA community should seek out low cost ways to improve diversity (e.g. limiting jargon), and at least weigh the costs of changes that could significantly improve diversity (e.g. a community diversity officer). And if people want to argue that the lack of diversity in EA isn’t a problem, I think the burden of proof is clearly on them.

I’m amazed and inspired by all the young EAs who want to make the world a better place - I spent my time in college getting drunk at my frat, not reading 80,000 Hours. The last thing I want to do is discourage any of them. And I’m still kind of young and plenty dumb. So please just consider this a perspective to consider, and an endorsement of the principle of considering different perspectives.

Comment author: casebash 29 October 2017 12:06:57AM 3 points [-]

"I think that gets the burden of proof backwards" - I agree that claiming that there are some ways in which we could improve diversity is really an anti-prediction. On the other hand for any specific that we should do X, the burden of proof is on the person who wants us to do it.

Comment author: thebestwecan 28 October 2017 02:11:28PM *  0 points [-]

Yeah, I don't think downvotes are usually the best way of addressing bad arguments, in the sense of someone making a logical error, being mistaken about an assumption, missing some evidence, etc. In this thread, I think that's leading to dogpiling, groupthink, and hostility in a way that outweighs the benefit downvoting provides by flagging bad arguments when thoughtful people don't have time to flag them via a thoughtful comment.

I think downvotes are mostly just good for bad comments in the sense that someone is purposefully lying, relying on personal attacks instead of evidence, or otherwise not abiding by basic norms of civil discourse. In these cases, I don't think the downvoting comes off as nearly as hostile.

If you agree with that, then we must just disagree on whether examples (like my downvoted comment above) are bad arguments or bad comments. I think the community does pretty often downvote stuff it shouldn't.

Comment author: casebash 28 October 2017 11:58:51PM *  0 points [-]

Hmm, part of the problem is that downvotes are overloaded. They can either indicate:

  • This is a bad comment OR
  • This is a bad policy

I don't think that people think it is a bad comment, they just think it is a bad policy.

Comment author: casebash 28 October 2017 01:56:39PM *  1 point [-]

"The best way to find this out is from people who don’t want to be involved with EAA and are critical of it" - there is a real significant problem here in that what people say they care about often isn't very indicative of actions. Like anyone strongly aligned with social justice will be strongly pushed by their world view to say both that they dislike the lack of diversity initiatives and that they would be more likely to become involved if they were put in place, but this is independent of any effect on behaviour.

Comment author: xccf 28 October 2017 01:41:12AM *  5 points [-]

I dearly hope we never become one of those parts of the internet.

Me too. However, I'm not entirely clear what incentive gradient you are referring to.

But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There's a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.

As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it's very much something to be aware of.

Full disclosure: I'm not much of a paper scrutinizer. And the way I've been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan's blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn't.

I'm not even sure it would be useful for me to do that--the best scrutinizer is someone who feels motivated to disprove a paper's conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn't.

I don't know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who's an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]

As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is the investment necessary to develop a well-justified opinion is high--I don't think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.

Comment author: casebash 28 October 2017 08:34:22AM 1 point [-]

I actually tend to observe the other effect in most intellectual spaces. Any liberal-supporting result will get a free pass and be repeated over and over again, while any conservative-leaning claim will be torn to shreds. Of course, you'll see the opposite if you hang around the 50% of people who voted for Trump, but not many of them are in the EA community.
