Carl_Shulman comments on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) - Effective Altruism Forum




Comment author: Carl_Shulman 12 January 2017 07:52:49AM 17 points

One bit of progress on this front is Open Phil and GiveWell starting to make public and private predictions about grant outcomes, both to improve their forecasting and to build track records around it.

There is significant room for other EA organizations to adopt this practice in their own areas (and apply it more broadly, e.g. regarding future evaluations of their strategy, etc).
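To make the track-record idea concrete, here is a minimal sketch (my illustration, not any organization's actual system) of how such predictions could be scored once outcomes resolve, using the standard Brier score. The example probabilities and outcomes are hypothetical.

```python
# Hypothetical prediction log: each entry is (stated probability, outcome),
# where outcome is 1 if the predicted event happened and 0 if it did not.

def brier_score(predictions):
    """Mean squared error between stated probabilities and realized outcomes.

    Lower is better: 0.0 is perfect, 0.25 is what always guessing 50% earns.
    """
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

grant_predictions = [
    (0.8, 1),  # e.g. "80% the grantee publishes its report within a year" -- it did
    (0.6, 0),
    (0.9, 1),
    (0.3, 0),
]

print(brier_score(grant_predictions))  # 0.125
```

Comparing scores like this across time (or against a baseline of always predicting 50%) is one simple way an organization could check whether its forecasting is actually improving.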

I believe the incentive alignment is strongest when you are moving moderate to large sums of money per donor in the present, across a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars each). Donors giving sums that large are selected for being less naive (just by virtue of having made that much money), and the scale of the donation makes it worth their while to demand high standards. I think this is related to GiveWell's relatively high epistemic standards (though the direction of causality is hard to judge).

This is part of my thinking behind promoting donor lotteries: by increasing each donor's effective size, they let donors evaluate organizations and opportunities more carefully, providing better incentives and resistance to exploitation by things that look good at first glance but don't hold up under close and extended inspection (winners can also share their findings with the broader community).
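The basic donor-lottery mechanism can be sketched as follows (an illustration under the usual description of such lotteries, not any specific organization's implementation): donors pool their contributions, and one donor is chosen to direct the whole pot with probability proportional to their contribution, so each donor's expected allocation equals what they put in. The donor names and amounts below are made up.

```python
import random

def run_lottery(contributions, rng=random):
    """contributions: dict mapping donor name -> amount contributed.

    Returns (winner, pot). Each donor wins with probability proportional
    to their contribution, so expected money directed = money contributed.
    """
    pot = sum(contributions.values())
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    winner = rng.choices(donors, weights=weights, k=1)[0]
    return winner, pot

# Hypothetical pool: carol wins with probability 30000/50000 = 60%.
winner, pot = run_lottery({"alice": 5000, "bob": 15000, "carol": 30000})
print(winner, pot)  # pot is 50000; the winner directs all of it
```

The point of the design is that it concentrates the evaluation work: only the winner needs to do the careful investigation, while everyone's expected giving is unchanged.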

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

The correlation gets better when you consider total impact and not just growth.

Comment author: Daniel_Dewey 12 January 2017 05:53:11PM 16 points

Prediction-making in my Open Phil work does feel like progress to me, because I find making predictions and writing them down difficult and scary, indicating that I wasn't doing that mental work as seriously before :) I'm quite excited to see what comes of it.

Comment author: Raemon 13 January 2017 05:30:49AM 3 points

Wanted to offer something stronger than an upvote for starting the prediction-making: that sounds like a great idea, and I want to see how it goes. :)