I have a project that I want to run by the community.

 

A while ago, Holden Karnofsky declared that the Open Philanthropy Project is dedicated to “hits-based giving”, a framework that accepts “philanthropic risk”: 90% of grants may have zero impact, but the few that do hit have enough impact to make the entire portfolio worthwhile. Compare this to the more traditional GiveWell approach, where all grants go to organizations whose expected value is relatively well understood (even if those organizations may still turn out to have zero impact).

 

While I’m quite sympathetic to the classic GiveWell approach, this kind of “hits-based” investment policy sounds quite plausibly effective to me. In a world with many different projects, only a limited amount of time to get to know them, and far too many unresolvable unknowns, we have to try for some hits. This is quite analogous to what I think pretty much every major venture capital firm does with its investments.

 

However, I do have some doubts about the “hits-based” approach. With poor selection, it could amount to “random giving”, which I would expect to perform about as well as the mean intervention: even in a cause area whose top 1% is extremely valuable, we may not be able to find that top 1%, and the mean intervention may be worse than the top global poverty intervention we already know about.

 

I also don’t really know whether all major investing can be described as “hits-based”. Perhaps the stories we hear about this strategy succeeding are mere survivorship bias. I imagine many VC and non-VC investing firms research their investments extensively, perhaps more extensively than the Open Philanthropy Project does. And even if the strategy does work well for for-profit VCs, it may not transfer easily to the non-profit world, where incentives are noticeably worse.

 

But fear not, for I think these questions can be answered empirically. All we would have to do is run Open Phil for long enough and track down, as best we can, how well its grants perform compared to the Against Malaria Foundation (AMF). For example, Open Phil’s commitment to cage-free corporate campaigning could arguably qualify as a “hit” that potentially surpasses AMF (assuming the corporate pledges are kept without significantly more spending and that future investments in campaigning get comparable returns), and it accounts for 12.5% of Open Phil’s non-GiveWell grants to date[1].

 

Given that a substantial comparison over time would take a few years, if not decades, to fully resolve (and the value of existential risk mitigation may never be known), we might instead turn to people who have already been doing this for a long time and see how they have fared.

 

A decent reference class that came to mind: compare the hits and misses of some big historical foundations (which tend to take more of a hits-based giving approach) with those of comparable government programs that run similar sorts of projects but with a more evidence-based, low-variance strategy. It would take some research to assemble the right, sufficiently large sample of foundations and government agencies to compare, but the two groups do seem to differ in this way fairly often, so it seems like it could be possible. For example, the Gates Foundation seems to pursue hits-based giving while DFID does not seem to… is this characterization true? If so, which one seems to be more cost-effective on average?

 

As another example, if you took an objective criterion like “top 10 biggest foundations, 1975-2000”, added up the value of all the biggest hits over those 25 years, and divided it by all the money granted over those 25 years, would the resulting cost-effectiveness justify all that spending? If it turned out to be around the same as GiveDirectly’s, I’d be pretty convinced of the “hits-based giving” model, though we would have to adjust for the fact that many major foundations are non-utilitarian and don’t aim to bring about the greatest possible good.
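To make that division concrete, here is a minimal sketch of the back-of-envelope calculation. Every number below is a hypothetical placeholder: the real spending and hit-value figures would come out of the research itself, and GiveWell publishes its own GiveDirectly cost-effectiveness estimates.

```python
# A rough sketch of the back-of-envelope calculation above.
# All figures are hypothetical placeholders, not real data.

total_spending = 50e9  # hypothetical: total grants by the top 10 foundations, 1975-2000
value_of_hits = 60e9   # hypothetical: dollar-equivalent value created by the biggest hits

# Illustrative benchmark only; see GiveWell's published estimates for real figures.
givedirectly_benefit_per_dollar = 0.9

hits_benefit_per_dollar = value_of_hits / total_spending
print(f"Hits-based:   {hits_benefit_per_dollar:.2f} benefit per dollar")
print(f"GiveDirectly: {givedirectly_benefit_per_dollar:.2f} benefit per dollar")

if hits_benefit_per_dollar >= givedirectly_benefit_per_dollar:
    print("Hits-based giving clears the GiveDirectly bar (on these made-up numbers)")
else:
    print("Hits-based giving falls short (on these made-up numbers)")
```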

 

And, of course, this whole idea will not be perfect. Its quality will vary a lot with the time and effort put into it, but it would be a huge step forward from the pretty soft intuitions I have seen on this question so far. I could see 40 hours of research making a good deal of progress on this problem, and I’m surprised that GiveWell, despite committing to studying the history of philanthropy, has not produced something comprehensive like this in defense of its worldview.

 

Resolving this question would be pretty action-relevant for me and a few other people, as we may personally become more inclined to take big risks on big bets with our own projects, rather than relying on high-quality evidence or working to create more of it.

 

Previously I paid $100 to commission a project that I suggested on the EA Forum, and that went pretty well. I think this one is important enough that I’d be willing to put money behind it too: I’ll pay $1500 to the first person who answers the question to my satisfaction. Please contact me at peter@peterhurford.com before undertaking this so I can help guide you and so we can avoid duplicated work.

 

-

 

Update - 2 March 2017: See here for a more detailed elaboration of the project.

 

Update - 23 Aug 2017: It turned out that grant data from the top ten biggest foundations is simply too sparse to make this project feasible in its current form. Most foundations do not have public digital grant records, and those that do typically start after 2000.

 

-

 

[1]: $154,008,339 in total grants given, minus $95,885,518 to GiveWell top charities, leaves $58,122,821 in non-GiveWell grants. Cage-free campaigns account for $7,239,392 of that granting, which is 12.5% of $58,122,821.
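For anyone who wants to check the footnote, a trivial sketch of the arithmetic, using only the figures given above:

```python
# Reproducing the footnote arithmetic.
total_grants = 154_008_339
givewell_top_charities = 95_885_518
cage_free_campaigns = 7_239_392

non_givewell = total_grants - givewell_top_charities
print(non_givewell)                                 # 58122821
print(f"{cage_free_campaigns / non_givewell:.1%}")  # 12.5%
```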

Comments (7)

One starting point could be this recent report by Bridgespan. Here is the Forbes article about it, which I found interesting: all donations over $25 million in 2015 that are categorized as big bets: http://www.forbes.com/sites/kerryadolan/2016/11/30/big-bet-philanthropy-solving-social-problems/#61e76fd03999

And here is an older list with grants from 2000-2014:

https://www.bridgespan.org/bridgespan/images/articles/making-big-bets-for-social-change/Making-Big-Bets-for-Social-Change-Pdf.pdf?ext=.pdf

One could analyse how those turned out.

I strongly support this, especially with regard to the approach described by: "As another example, if you took an objective criterion like "top 10 biggest foundations 1975-2000" and looked at all the biggest hits over those 25 years and divided it by all the money over those 25 years, would the cost-effectiveness justify all that spending?"

I think the more general, detailed approach described first is the one most likely to lack sufficiently meaningful data.

March 2 Update: We have a volunteer taking on this project. As a result, Joey and I broke the project down further into the following questions:

1.) What were the top twenty foreign aid foundations (including government agencies) from 1975 to 2000 in terms of total grant dollars given to foreign aid (e.g., DFID, USAID, Gates/GAVI)? Scoring them relative to each other on a 1-5 scale, with 5 being most accurately described as "hits-based" and 1 being most accurately described as "proven evidence-backed", how would you score each? (Also, is this a useful dichotomy?) Please try to provide justification for the rankings.

2a.) Looking back at the list of top twenty orgs by size, pick the top five orgs by size that are more "hits-based" and the top five orgs by size that are more "evidence-backed".

2b.) From each of these orgs, look at their top 10 grants by grant size. Of these, pick two grants that are likely to be the highest impact and two grants that are likely to be of average impact (relative to the ten grants from that org). You can look at their website, wiki page, and stated granting strategies to get a sense of this. (There will be 40 grants considered in total.) Briefly describe each grant's outcomes and size. Present these grants shuffled and as blinded as possible (no org name) to Joey and me, so that we can independently rank them without knowing whether they came from hits-based orgs or not; a rough sketch of this blinding step appears after these questions.

2c.) Using your own research, try, as best you can, to quantify the impact of these grants.

2d.) Combining our judgments, come to an overall assessment, as best as possible, of the relative success of "hits-based" and "evidence-backed" orgs.
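As an aside, here is a minimal sketch of how the blinding step in 2b could be done. The field names and the fixed seed are assumptions for illustration; nothing here is prescribed by the project itself.

```python
# A minimal sketch of the blinding step in 2b, with hypothetical field names.
# Each grant is stripped of its org name, shuffled, and given an anonymous ID
# before being handed to the independent rankers.
import random

grants = [
    # {"org": ..., "size_usd": ..., "outcome_summary": ...}  # filled in by the researcher
]

def blind_and_shuffle(grants, seed=0):
    rng = random.Random(seed)        # fixed seed so the answer key is reproducible
    shuffled = grants[:]
    rng.shuffle(shuffled)
    blinded, key = [], {}
    for i, g in enumerate(shuffled):
        gid = f"grant-{i:02d}"
        key[gid] = g["org"]          # held back until after the rankings are made
        blinded.append({"id": gid,
                        "size_usd": g["size_usd"],
                        "outcome_summary": g["outcome_summary"]})
    return blinded, key

blinded_grants, answer_key = blind_and_shuffle(grants)
```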

We also have a bonus question that is much lower priority but might be of interest down the road:

3.) Can VC firms be described as pursuing a "hits-based" strategy? How much due diligence do they put into their investments before making them? How does this due diligence compare to Open Phil's? Is there anything we can learn from VC strategy to inform EA strategy?

-

Joey and I separately estimated how long it would take to do (1) + (2). We then averaged our estimates and multiplied by 1.5 to adjust for the planning fallacy, arriving at a total of 70 hours. Since this is more than we originally thought, we decided to raise the payment from $1500 to $2000.
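For concreteness, the adjustment works like this; the two individual estimates below are hypothetical, since only the final 70-hour figure was published:

```python
# Only the final 70-hour figure appears in the comment; the two individual
# estimates below are hypothetical, chosen to illustrate the adjustment.
peter_hours, joey_hours = 42, 51   # hypothetical individual estimates
planning_fallacy_multiplier = 1.5  # buffer for the planning fallacy

adjusted = (peter_hours + joey_hours) / 2 * planning_fallacy_multiplier
print(round(adjusted))  # 70
```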

I am sorry. It appears that a GuideStar Premium account is needed. (Or the questions will need to be changed--specifically the time period of the first question.) Or maybe there is a research tool/engine that I'm not aware of.

Anyway, here is a little bit of headway:

https://docs.google.com/document/d/1eAjrPIDINvE-g7bGP8b0hIDVFid36UrEE5nYZUkZ4q4/edit?usp=sharing

Anyone, please feel free to continue; the document is fully editable by anyone. (You will also be able to see others' work in real time, and edits can always be reverted.)

Thanks! I can take it from here. :)

Congratulations! This is very exciting and I'm looking forward to hearing about future updates.

I like this idea. One danger (in both directions) with comparing to VC is that my impression is that venture capital is far more focused on prestige and connections than charity funding is. In particular, if you can successfully become a prestigious, well-connected VC firm, then all of the Stanford/MIT students (for instance) will want you to fund their start-ups, and picking with only minimal due diligence from among that group is likely to already be fairly profitable. [Disclaimer: I'm only tangentially connected to the VC world, so this could be completely wrong; feel free to correct me.]

If this is true, what should we expect to see? We should expect that (1) VCs put in less research than Open Phil (or similar organizations) when making investments, and (2) a hits-based approach is very successful for VC firms, conditional on their having a strong established reputation. I would guess that both of these are true, though I'm unsure of the implications.