Comment author: rohinmshah (EA Profile) 30 October 2017 12:45:33AM 1 point

As one data point, I did not have this association with "impressions" vs. "beliefs", even though I do in fact distinguish between these two kinds of credences and often report both (usually with a long clunky explanation since I don't know of good terminology for it).

Comment author: ClaireZabel 30 October 2017 12:52:40AM 1 point

I'm not sure where I picked it up, though I'm pretty sure it was somewhere in the rationalist community.

E.g. from "What epistemic hygiene norms should there be?":

Explicitly separate “individual impressions” (impressions based only on evidence you've verified yourself) from “beliefs” (which include evidence from others’ impressions)

Comment author: ClaireZabel 29 October 2017 10:43:21PM 16 points

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you think they are, and I think it's both rare and not inordinately difficult to adjust for common biases such that in certain cases a less-informed individual can often beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises her credence to 0.6, but Adam doesn't. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam's credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general the nigh-intractable challenge of trying to infer cases of double counting from the patterns of 'all things considered' evidence.

One can rectify this by distinguishing 'credence by my lights' versus 'credence all things considered'. So one can say "Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc." Ironically, one's personal 'inside view' of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one's all-things-considered modest view is usually for private consumption.
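To make the arithmetic in the quoted example concrete, here is a minimal sketch, assuming simple averaging as the aggregation rule; the names and numbers come from the quote, and everything else is purely illustrative:

```python
# Illustrative sketch of the double-counting problem described above.
# Inside views ("credence by my lights"): Adam 0.8, Beatrice 0.4, Charlie 0.4.
inside_views = {"Adam": 0.8, "Beatrice": 0.4, "Charlie": 0.4}

def average(values):
    values = list(values)
    return sum(values) / len(values)

# Beatrice defers: she averages her inside view with Adam's stated credence.
beatrice_reported = average([inside_views["Beatrice"], inside_views["Adam"]])  # ~0.6

# Charlie aggregates what he can see: his own inside view, Adam's credence, and
# Beatrice's already-modest report, so Adam's 0.8 gets one and a half helpings.
charlie_modest = average(
    [inside_views["Charlie"], inside_views["Adam"], beatrice_reported]
)  # ~0.6

# The undistorted aggregate counts each inside view exactly once.
correct_aggregate = average(inside_views.values())  # (0.8 + 0.4 + 0.4) / 3 ~ 0.53

print(beatrice_reported, charlie_modest, correct_aggregate)
```

The 0.6 Charlie arrives at looks like a group consensus, but it overweights Adam; publicly reporting inside views keeps the inputs to everyone's aggregation independent, which is the point of the 'by my lights' / 'all things considered' distinction.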

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And knowing that other people rarely make this distinction, it's hard for everyone to know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because those terms are less clunky. Just fyi.

(views my own, not my employer's)

Comment author: xccf 28 October 2017 02:49:53AM 1 point

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

How does "this should be obvious" compare to average social science reporting on the epistemic hygiene scale?

Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper's conclusion predicts flaw discovery better. I don't think the result of such an experiment is obvious.

Comment author: ClaireZabel 28 October 2017 04:17:22AM 1 point

Flaws aren't the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.

Comment author: ClaireZabel 28 October 2017 02:19:14AM * 9 points

The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you've audited in the past, or someone who's made sound predictions in the past). You can totally decide not to engage with an issue because it's not worth the time.

But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.

the best scrutinizer is someone who feels motivated to disprove a paper's conclusion

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.

Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 as Right, and 31 as Centre-Right. My contention is that this is not about the conclusions being liberal. It's about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone's beliefs. Because it is terrible, and does not track the truth. And we don't need writings like that, regardless of whose conclusions they happen to support.

Comment author: ClaireZabel 28 October 2017 02:32:31AM * 2 points

[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.

(see e.g. this and this).

Comment author: xccf 28 October 2017 01:41:12AM * 6 points

I dearly hope we never become one of those parts of the internet.

Me too. However, I'm not entirely clear what incentive gradient you are referring to.

But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There's a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.

As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it's very much something to be aware of.

Full disclosure: I'm not much of a paper scrutinizer. And the way I've been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan's blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn't.

I'm not even sure it would be useful for me to do that--the best scrutinizer is someone who feels motivated to disprove a paper's conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn't.

I don't know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who's in an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]

As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is that the investment necessary to develop a well-justified opinion is high--I don't think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.

Comment author: xccf 28 October 2017 12:16:57AM 3 points

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument. I can understand how it might be frustrating for people to tell you you need to up your paper scrutinizing game while you are busy trying to respond to an entire thread full of people expressing disagreement.

Comment author: ClaireZabel 28 October 2017 12:47:00AM 17 points

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.

I dearly hope we never become one of those parts of the internet.

And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.

Comment author: Kelly_Witwicki 27 October 2017 09:01:47PM * -1 points

An explanation of what you mean by "turn out OK" would be helpful. For instance, do movements that err more towards social justice fare worse than those that err away from it (or than those that sit at the status quo)?

Whether that's the case for the atheism movement or the open source community is a heavy question that merits more explanation.

Actually, I would think that any overshooting you see in these communities is a reaction to how status-quo (or worse) both of those communities are. Note, for instance, that when women are not collaborators on a project (but not when they are), their open-source contributions are more likely to be accepted than men's when their gender is not known, but less likely to be accepted than men's when their gender is known.

Comment author: ClaireZabel 27 October 2017 09:46:00PM * 10 points

Kelly, I don't think the study you cite is good or compelling evidence of the conclusion you're stating. See Scott's comments on it for the reasons why.

(edited because the original link didn't work)

Comment author: Roxanne_Heston (EA Profile) 03 October 2017 11:34:54AM * 0 points

Right, neither do I. My 25-hour estimate was how long it would take you to make one grant of ~£500,000, not a bunch of grants adding up to that amount. I assumed that if Open Phil had been distributing these funds it would have done so by giving greater amounts to far fewer recipients.

Comment author: ClaireZabel 03 October 2017 08:18:04PM 0 points

Ah, k, thanks for explaining, I misinterpreted what you wrote. I agree 25 hours is in the right ballpark for that sum (though it varies a lot).

Comment author: Milan_Griffes 03 October 2017 01:22:58AM 2 points

Minor thing: it'd be helpful if people who downvoted commented with their reason why.

Comment author: ClaireZabel 03 October 2017 08:14:28PM 8 points

Personally, I downvoted because I guessed that the post was likely to be of interest to sufficiently few people that it felt somewhat spammy. If I imagine everyone posting with that level of selectivity I would guess the Forum would become a worse place, so it's the type of behavior I think should probably be discouraged.

I'm not very confident about that, though.

Comment author: ClaireZabel 03 October 2017 05:49:37AM 1 point

An Open Phil staff member made a rough guess that it takes them 13-75 hours per grant distributed. Their average grant size is quite a bit larger, so it seems reasonable to assume it would take them about 25 hours to distribute a pot the size of EA Grants.

My experience making grants at Open Phil suggests it would take us substantially more than 25 hours to evaluate the number of grant applications you received, decide which ones to fund, and disburse the money (counting grant investigator, logistics, and communications staff time). I haven't found that time spent scales completely linearly with grant size, though it generally scales up somewhat. So while it seems about right that most grants take 13-75 hours, I don't think it's true that grants that are only a small fraction of the size of most OP grants would take an equally small fraction of that amount of time.
