ClaireZabel comments on Why & How to Make Progress on Diversity & Inclusion in EA - Effective Altruism Forum

Comments (229)

Comment author: ClaireZabel 28 October 2017 12:47:00AM 17 points

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.

I dearly hope we never become one of those parts of the internet.

And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a terribly unproductive place, and by pushing back against steps down that doomy path.

Comment author: xccf 28 October 2017 01:41:12AM *  6 points

I dearly hope we never become one of those parts of the internet.

Me too. However, I'm not entirely clear on what incentive gradient you are referring to.

But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There's a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.

As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it's very much something to be aware of.

Full disclosure: I'm not much of a paper scrutinizer. And the way I've been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan's blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn't.

I'm not even sure it would be useful for me to do that--the best scrutinizer is someone who feels motivated to disprove a paper's conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn't.

I don't know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who's an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]

As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is that the investment necessary to develop a well-justified opinion is high, so I don't think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.

Comment author: ClaireZabel 28 October 2017 02:19:14AM *  9 points

The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you've audited in the past, or someone who's made sound predictions in the past). You can totally decide not to engage with an issue because it's not worth the time.

But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.

the best scrutinizer is someone who feels motivated to disprove a paper's conclusion

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.

Yet EAs are mostly liberal. The 2017 EA Survey had 309 respondents identifying as Left, 373 as Centre Left, 4 as Right, and 31 as Centre Right. My contention is that this is not about the conclusions being liberal. It's about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone's beliefs, because it is terrible and does not track the truth. And we don't need writings like that, regardless of whose conclusions they happen to support.

Comment author: ClaireZabel 28 October 2017 02:32:31AM *  2 points

[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.

(see e.g. this and this).

Comment author: xccf 28 October 2017 02:49:53AM 1 point

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

How does "this should be obvious" compare to average social science reporting on the epistemic hygiene scale?

Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper's conclusion predicts flaw discovery better. I don't think the result of such an experiment is obvious.

Comment author: ClaireZabel 28 October 2017 04:17:22AM 1 point

Flaws aren't the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.

Comment author: casebash 28 October 2017 08:34:22AM 2 points

I actually tend to observe the opposite effect in most intellectual spaces. Any liberal-supporting result will get a free pass and be repeated over and over again, while any conservative-leaning claim will be torn to shreds. Of course, you'll see the reverse if you hang around the 50% of people who voted for Trump, but not many of them are in the EA community.

Comment author: xccf 29 October 2017 12:00:44AM 0 points

Do you know of any spaces that don't have the problem one way or the other?

Comment author: casebash 29 October 2017 03:37:27AM *  2 points

I would say that EA/Less Wrong are better in that any controversial claim you make is likely to be torn to shreds, regardless of which direction it leans.