Comment author: vipulnaik 17 January 2017 09:42:38PM 1 point [-]

"So if I could be expected to work 4380 hours over 2016-2019, earn $660K (95%: $580K to $860K) and donate $160K, that’s an expected earnings of $150.68 per hour worked. [...] I consider my entire earnings to be the altruistic value of this project."

What about taxes?
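The quoted figures can be checked with a quick back-of-the-envelope calculation, and the tax question can be illustrated with a hypothetical rate (the 30% below is invented for illustration; the original comment states no tax rate):

```python
# Back-of-the-envelope check of the quoted figures, plus the tax question.
hours_worked = 4380          # expected hours over 2016-2019 (from the quote)
expected_earnings = 660_000  # expected earnings in dollars (from the quote)

pre_tax_hourly = expected_earnings / hours_worked
print(f"pre-tax: ${pre_tax_hourly:.2f}/hour")  # ≈ $150.68/hour, matching the quote

# Hypothetical 30% effective tax rate, purely for illustration:
hypothetical_tax_rate = 0.30
post_tax_hourly = pre_tax_hourly * (1 - hypothetical_tax_rate)
print(f"post-tax at a hypothetical 30% rate: ${post_tax_hourly:.2f}/hour")
```

Even a rough effective tax rate changes the per-hour figure substantially, which is why the question matters for the expected-value estimate.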

Comment author: vipulnaik 12 January 2017 06:24:38AM 13 points [-]

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far as I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money, as long as they produced reviews that pattern-matched a well-researched review. (I've personally found their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the money moved jump from 2015 to 2016 will be smaller, or possibly even negative.) I believe (with weaker confidence) that similar stuff is true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much). And also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

Comment author: ClaireZabel 06 January 2017 12:45:42AM 11 points [-]

I would prefer if the title of this post was something like "My 5 favorite EA posts of 2016". When I see "best" I expect a more objective and comprehensive ranking system (and think "best" is an irritatingly nonspecific and subjective word), so I think the current wording is misleading.

Comment author: vipulnaik 06 January 2017 04:29:31PM 0 points [-]

My thoughts precisely!

Comment author: vipulnaik 03 January 2017 04:15:11PM 1 point [-]

I haven't been able to successfully log in to EAF from my phone (which is a pretty old Windows Mobile phone, so might be something unique to it). That probably increases the number of pageviews I generated for EAF, because I revisit on desktop to leave a comment :).

Comment author: vipulnaik 03 January 2017 05:37:47AM 0 points [-]

Great to hear about this, Jacob! As somebody who funds a lot of loosely similar activities in the "EA periphery" I have some thoughts and experience on the challenges and rewards of funding. Let me know if you'd like to talk about it.

You can get a list of stuff I've funded at https://contractwork.vipulnaik.com

Comment author: Julia_Wise 01 January 2017 08:29:35PM 2 points [-]

Clarification: You're using EAF to mean EA Forum, while I usually see it used to mean EA Foundation.

Comment author: vipulnaik 01 January 2017 09:45:51PM 1 point [-]

Thanks, I added the expansion of the acronym at the beginning.

Comment author: Castand 31 December 2016 09:27:12PM 2 points [-]

How much traffic would you estimate the Facebook group or other community venues get?

Comment author: vipulnaik 31 December 2016 09:32:29PM 4 points [-]

You can get data on the Facebook group(s) using tools like http://sociograph.io -- however, they can take a while to load all the data. A full analysis of that data would be worth another post.

Effective Altruism Forum web traffic from Google Analytics

Note: If you are using uBlock you may not be able to see the images, because they have "/google-analytics" in the name and uBlock is a bit ... paranoid ... about blocking file names with that substring. Please disable uBlock for effective-altruism.com or temporarily disable it to see images.
Comment author: vipulnaik 31 December 2016 08:48:40PM 0 points [-]

Why does the post have "2017" in the title?

Comment author: vipulnaik 31 December 2016 06:48:05PM 0 points [-]

Some people in the effective altruist community have argued that small donors should accept that they will use marginal charitable dollars less efficiently than large actors such as Open Phil, for lack of time, skill, and scale to find and choose between charitable opportunities. Sometimes this is phrased as advice that small donors follow GiveWell's recommendations, while Open Phil pursues other causes and strategies such as scientific research and policy.

The argument that I have heard is a little different. It is that the entry of big players like Open Phil has made it harder to have the old level of marginal impact with one's donation.

Basically:

Marginal impact of one's donation now that Open Phil is plucking a lot of low-hanging fruit < Marginal impact of one's donation a few years ago ... (1)

Whereas the claim that you are critiquing is:

Marginal impact of one's donation < Marginal impact of Open Phil's donation ... (2)

Why does (1) matter? Some donors have fixed charity budgets, i.e., they wish to donate a certain amount every year to charity. For them, the challenge is just to find the best use of that money, so even if marginal impacts are down across the board, it doesn't matter much, because all that matters is relative impact.

For other donors and potential donors, charitable donations compete with other uses of money. Therefore, whether or not one donates to charity, and how much one donates to charity, would depend on how large the marginal impact is. If the entry of players like Open Phil has reduced the marginal impact achievable, then that's good reason to donate less.
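The distinction between the two donor types can be sketched numerically. All the numbers below (marginal impact levels, budget, and the elastic donor's personal-use threshold) are invented for illustration and do not come from the comment:

```python
# Illustrative sketch (all numbers hypothetical): how a drop in marginal
# impact affects a fixed-budget donor vs. a donor weighing charity against
# other uses of money.

impact_before = 10.0  # hypothetical units of good per dollar, before Open Phil's entry
impact_after = 6.0    # lower now that low-hanging fruit has been plucked

# Fixed-budget donor: gives the same amount either way; only relative
# impact across charities matters, not the absolute level.
fixed_budget = 1000
print("fixed-budget donor gives:", fixed_budget, "regardless of impact level")

# Elastic donor: donates only while a charitable dollar beats the value
# of personal spending (a hypothetical threshold).
personal_value_per_dollar = 8.0

def elastic_donation(marginal_impact, budget=1000):
    """Donate the budget only if charity beats personal use of the money."""
    return budget if marginal_impact > personal_value_per_dollar else 0

print("elastic donor, before:", elastic_donation(impact_before))  # donates 1000
print("elastic donor, after: ", elastic_donation(impact_after))   # donates 0
```

The fixed-budget donor is unaffected by claim (1); the elastic donor stops donating once marginal impact falls below the value of other uses of money, which is why (1), not (2), is the claim worth engaging.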

So I feel that the argument you are attacking isn't actually the right one to attack. Though you do address (1) a bit in the post, I think it would have made more sense to make it the main focus.
