
Kelly_Witwicki comments on Why & How to Make Progress on Diversity & Inclusion in EA - Effective Altruism Forum


Comments (229)


Comment author: Kelly_Witwicki 27 October 2017 10:07:48PM -1 points

Thanks, clarified.

Comment author: Buck 27 October 2017 11:25:04PM 21 points

Even after clarification, your sentence is misleading. The true thing you could say is "Among outsiders to projects, women are more likely to have their contributions accepted than men. Both men and women are less likely to have their contributions accepted when their genders are revealed; the effect was measured to be a percentage point different between the genders and may or may not be statistically significant. There are also major differences between the contribution patterns of men and women."

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.

Comment author: Buck 27 October 2017 11:37:58PM *  18 points

This is similar to an issue going on in another thread, where people feel you're cherry-picking results rather than sampling randomly in a way that will paint an accurate picture. Perhaps this dialogue can help to explain the concerns that others have expressed:

Person One: Here are 5 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.

Person Two: Actually, if you do a comprehensive survey of the literature, you'll find 3 studies showing that coffee causes cancer, 17 showing no effect, and 3 showing that coffee prevents cancer. On balance there's no stronger evidence that coffee causes cancer than that it prevents it, and in fact it probably has no effect.

Person One: Thanks for the correction! [Edits post to say: "Here are 3 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption."]

Person Two: I mean... that's technically true, but I don't feel the problem is solved.

Comment author: xccf 28 October 2017 12:16:57AM 3 points

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument. I can understand how it might be frustrating for people to tell you you need to up your paper scrutinizing game while you are busy trying to respond to an entire thread full of people expressing disagreement.

Comment author: ClaireZabel 28 October 2017 12:47:00AM 17 points

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.

I dearly hope we never become one of those parts of the internet.

And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.

Comment author: xccf 28 October 2017 01:41:12AM *  6 points

I dearly hope we never become one of those parts of the internet.

Me too. However, I'm not entirely clear what incentive gradient you are referring to.

But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There's a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.

As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it's very much something to be aware of.

Full disclosure: I'm not much of a paper scrutinizer. And the way I've been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan's blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn't.

I'm not even sure it would be useful for me to do that--the best scrutinizer is someone who feels motivated to disprove a paper's conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn't.

I don't know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who's an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]

As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is the investment necessary to develop a well-justified opinion is high--I don't think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.

Comment author: ClaireZabel 28 October 2017 02:19:14AM *  9 points

The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you've audited in the past, or someone who's made sound predictions in the past). You can totally decide not to engage with an issue because it's not worth the time.

But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.

the best scrutinizer is someone who feels motivated to disprove a paper's conclusion

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.

Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 as Right, and 31 as Centre-Right. My contention is that this is not about the conclusions being liberal. It's about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone's beliefs, because it is terrible and does not track the truth. And we don't need writings like that, regardless of whose conclusions they happen to support.

Comment author: ClaireZabel 28 October 2017 02:32:31AM *  2 points

[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.

(see e.g. this and this).

Comment author: xccf 28 October 2017 02:49:53AM 1 point

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

How does "this should be obvious" compare to average social science reporting on the epistemic hygiene scale?

Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper's conclusion predicts flaw discovery better. I don't think the result of such an experiment is obvious.

Comment author: ClaireZabel 28 October 2017 04:17:22AM 1 point

Flaws aren't the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.

Comment author: casebash 28 October 2017 08:34:22AM 2 points

I actually tend to observe the opposite effect in most intellectual spaces. Any liberal-supporting result will get a free pass and be repeated over and over again, while any conservative-leaning claim will be torn to shreds. Of course, you'll see the reverse if you hang around the roughly 50% of people who voted for Trump, but not many of them are in the EA community.

Comment author: xccf 29 October 2017 12:00:44AM 0 points

Do you know of any spaces that don't have the problem one way or the other?

Comment author: casebash 29 October 2017 03:37:27AM *  2 points

I would say that EA/Less Wrong are better in that any controversial claim you make is likely to be torn to shreds.

Comment author: Buck 28 October 2017 12:35:03AM *  5 points

I am disinclined to be sympathetic when someone's problem is that they posted so many bad arguments all at once that they're finding it hard to respond to all the objections.

Comment author: Gregory_Lewis 28 October 2017 02:03:33AM *  9 points

Regarding the terrible incentive gradients mentioned by Claire above, I think discussion is more irenic if people resist, insofar as possible, imputing bad epistemic practices to particular people, and even try to avoid identifying the individual with the view or practice one takes to be mistaken, even though they do in fact advocate it.

As a concrete example (far from alone, and selected not because it is 'particularly bad', but rather because it comes from a particularly virtuous discussant) the passage up-thread seems to include object level claims on the epistemic merits of a certain practice, but also implies an adverse judgement about the epistemic virtue of the person it is replying to:

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study. [my emphasis]

The 'you-locutions' do the work of imputing, and so invite subsequent discussion about the epistemic virtue of the person being replied to (e.g. "Give them a break, this mistake is understandable given some other factors"/ "No, this is a black mark against them as a thinker, and the other factors are not adequate excuse").

Although working out the epistemic virtue of others can be a topic with important practical applications (but see discussion by Askell and others above about 'buzz talk'), the midst of a generally acrimonious discussion on a contentious topic is not the best venue. I think a better approach is a rewording that avoids the additional implications:

I think there's a pattern of using social science data which is better avoided. Suppose one initially takes a set of studies to support P. Others suggest studies X, Y and Z (members of this set) do not support P after all. If one agrees with this, it seems better to clearly report a correction along the lines of "I took these 5 studies to support P, but I now understand 3 of these 5 do not support P", rather than offering additions to the set of studies that support P.

The former allows us to forecast how persuasive additional studies are (i.e. if all of the studies initially taken to support P do not in fact support P on further investigation, we may expect similar investigation to reveal the same about the new studies offered). Rhetorically, it may be more persuasive to sceptics of P, as it may allay worries that sympathy to P is tilting the scales in favour of reporting studies that prima facie support P.

The rewording can take longer (and a better writer than I could no doubt manage it more concisely), but even if so, I expect the other benefits to outweigh the cost.