Gregory_Lewis comments on Why & How to Make Progress on Diversity & Inclusion in EA - Effective Altruism Forum




Comment author: Gregory_Lewis · 28 October 2017 02:03:33AM · 9 points

Regarding the terrible incentive gradients mentioned by Claire above, I think discussion is more irenic if people resist, insofar as possible, imputing bad epistemic practices to particular people, and even try to avoid identifying an individual with the view or practice they take to be mistaken, even when that individual does in fact advocate it.

As a concrete example (far from the only one, and selected not because it is 'particularly bad' but because it comes from a particularly virtuous discussant), the passage up-thread makes object-level claims about the epistemic merits of a certain practice, but also implies an adverse judgement about the epistemic virtue of the person it replies to:

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study. [my emphasis]

The 'you-locutions' do the work of imputing, and so invite subsequent discussion about the epistemic virtue of the person being replied to (e.g. "Give them a break, this mistake is understandable given some other factors"/ "No, this is a black mark against them as a thinker, and the other factors are not adequate excuse").

Although working out the epistemic virtue of others can have important practical applications (but see the discussion by Askell and others above about 'buzz talk'), the midst of a generally acrimonious discussion on a contentious topic is not the best venue for it. I think a better approach is a rewording that avoids the additional implications:

I think there's a pattern of using social science data which is better avoided. Suppose one initially takes a set of studies to support P. Others suggest studies X, Y and Z (members of this set) do not support P after all. If one agrees with this, it seems better to clearly report a correction along the lines of "I took these 5 studies to support P, but I now understand 3 of these 5 do not support P", rather than offering additions to the set of studies that support P.

The former allows us to forecast how persuasive additional studies will be (i.e. if all of the studies initially taken to support P turn out, on further investigation, not to support P, we may expect similar investigation to reveal the same about the new studies offered). Rhetorically, it may also be more persuasive to sceptics of P, as it allays the worry that sympathy to P is tilting the scales in favour of reporting studies that prima facie support P.

The rewording may take longer (though a better writer than I could likely word it more concisely), but even so I expect the other benefits to outweigh that cost.