Donating To High-Risk High-Reward Charities

Cross-posted on my blog.


If you wanted to make a lot of money, you’d accept the need to make high-risk high-reward business decisions, like founding a company or investing in stocks, right?

Ok, what about for charitable donations? If you wanted to do a lot of good, would you donate to charities that might not have any impact, but could have a large impact?

Many people are willing to take on large risks in business, yet almost no one donates to charity in this manner. Basic scientific research gets very little in donations despite an impressive history of results. And even when people do donate, the possibly crazy yet potentially groundbreaking research – like cold fusion or curing aging – is usually left out.

The most likely people to make risky donations are effective altruists – people who pride themselves on being both rational and philanthropic in an effort to do the most good for the most people. Yet even these “warm and calculated” effective altruists tend to favor safer charities like the Against Malaria Foundation – which can pretty reliably save one life for around $3,000 – over riskier bets like the Machine Intelligence Research Institute (MIRI) – which is working to ensure human-level artificial intelligence doesn’t lead to human extinction.


Well, of course people shy away from riskier bets. Isn’t this risk aversion just a simple irrationality that pervades all areas of life?

Actually, there are good reasons to be risk-averse in many areas of life, but charitable donations really aren’t one of them. If anything, people should be a lot riskier with their donations than with their investments.


Wait, how is risk aversion in business a good thing?

Due to the law of diminishing marginal utility. This law states that every good decreases in value (to you) the more of it you have. While walking home from the lab last Friday after a long week of research, I passed a pastry shop that I visit occasionally. Feeling like I had earned a treat, I decided to get three donuts. The first one was amazing! The next one was still pretty good. I only got halfway through the third before deciding to stop eating it. From that experience, I can tell you that I’d prefer one donut with 100% certainty to three donuts with 33% certainty, or even three with 50% certainty.

The funny thing is, the law of diminishing marginal utility even applies to money – the first million dollars in your bank account matters more than the next, and so on.
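To make the donut example concrete, here’s a minimal sketch using a square-root utility function. (The specific function is my illustrative assumption, not a claim about anyone’s actual preferences – any concave function tells the same story.)

```python
import math

def utility(donuts):
    # Concave (diminishing marginal) utility: each extra donut adds less value.
    return math.sqrt(donuts)

# Certain option: 1 donut for sure.
certain = utility(1)  # 1.0

# Risky option: 3 donuts with 50% probability, nothing otherwise.
risky = 0.5 * utility(3) + 0.5 * utility(0)  # ~0.87

# The risky option has a higher expected NUMBER of donuts (1.5 vs 1),
# but lower expected UTILITY – so risk aversion here is rational.
print(certain > risky)  # True
```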


Ok, but if risk aversion is rational, why is it bad if people are risk-averse with their charitable donations?

Because your charitable donations aren’t primarily about you. Even though donating can feel good, the main point is furthering some cause. If you’re donating to a cause that helps many different people, each of those people has their own diminishing marginal utility (captured in what’s known as their “utility function”). If you save ten lives, you quite literally do ten times as much good as if you save one life. Consider saving the life of a student named Jane. Jane will be forever grateful to you, and the fact that you’ve already saved nine people before her won’t decrease the value of saving her life.

Past a certain point, charitable donations do face diminishing marginal utility. This is because whatever cause they address, or method they use to address it, starts to actually get solved. With the low-hanging fruit gone, it’s harder to make further gains. But the amount that would have to be donated before this effect becomes significant is huge – typically much larger than anyone who isn’t wealthy could give.
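Since the value of lives saved adds up roughly linearly, a simple expected-value comparison illustrates the point. (The probabilities and payoffs below are made-up numbers for illustration, not estimates for any real charity.)

```python
# Safe charity: $3,000 reliably saves 1 life.
safe_lives = 1.0

# Risky charity: the same $3,000 has a 0.1% chance of contributing to a
# breakthrough that saves 10,000 lives, and a 99.9% chance of doing nothing.
risky_lives = 0.001 * 10_000  # expected value = 10.0

# With utility that is linear in lives saved, the risky donation is worth
# ten times as much in expectation.
print(risky_lives / safe_lives)  # 10.0
```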


Ok, so the risk from high-risk high-reward charities shouldn’t be as off-putting as the risk in our personal lives?

Exactly. But there’s also another reason high-risk high-reward charities make a ton of sense. This time we’re looking at the reward.

When personal business risks pay off, they typically don’t increase your personal wealth by several orders of magnitude (with the obvious exceptions of successful high-tech entrepreneurship and winning the mega-lottery).

Charities, on the other hand, vary vastly in impact. Even for the most effective life-saving charities today, each life costs a few thousand dollars to save. A dedicated person can likely save dozens of lives through donations over her lifetime.

And that’s amazing! If you saved one person from a burning building, you’d be a hero. Donating to effective charities can allow you to be a hero dozens of times over!

But consider the scale of good you could do donating to a riskier cause.

A cure for aging would save roughly 100,000 lives every single day. Since this field receives relatively little research funding, it’s conceivable that donating to institutions working on curing aging could advance the field by more than a day.

A single donation to a charity that focuses on making sure humans don’t go extinct almost definitely will not be the deciding factor in whether our species survives. But it might be. And that could be the difference between humans going extinct and colonizing most of the observable universe.


Ok, that all makes sense, but I really want to make sure my charitable donations make some positive difference, and the riskier ones might have zero benefit…

That’s an understandable impulse. And the best way to decrease risk here might be the same way people decrease risk in the financial markets – diversification. You might want to split up your donations and give some money to safer charities to ensure you do some good, and then give more to riskier charities that are expected to do a lot more good.
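Here’s a sketch of what such a diversified split might look like, again using made-up numbers (the 30/70 split and the per-dollar figures are assumptions for illustration only):

```python
budget = 1000  # dollars to donate
safe_share, risky_share = 0.3, 0.7  # fraction to each type of charity

# Illustrative (assumed) expected lives saved per dollar:
safe_ev_per_dollar = 1 / 3000                 # reliably ~1 life per $3,000
risky_ev_per_dollar = 0.001 * 10_000 / 3000   # 0.1% shot at 10,000 lives per $3,000

expected_lives = (budget * safe_share * safe_ev_per_dollar
                  + budget * risky_share * risky_ev_per_dollar)

# The safe slice guarantees some impact; the risky slice dominates
# the expected value.
print(round(expected_lives, 2))  # 2.43
```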

Of the riskier charities, I’ve donated to MIRI and the Future of Humanity Institute. Both are working to make sure smarter-than-human artificial intelligence doesn’t lead to human extinction, both have made impressive advances in the past, and both have relatively low budgets as it is.

Comments (12)

Comment author: RyanCarey 14 February 2017 05:33:15AM *  5 points [-]

For discussion of risk-aversion in altruism, also see Carl's Salary or startup? How do-gooders can gain more from risky careers.

Comment author: Daniel_Eth 14 February 2017 06:10:00AM 0 points [-]

Yeah, I agree it doesn't just apply to where to donate, but also how to get money to donate, founding non-profits, etc. Which, taken to its logical conclusion, means maybe I should angle to run for president?

Comment author: RyanCarey 14 February 2017 07:29:17AM *  7 points [-]

Carl already explored this question too, noting that it is relatively easy to go for PM of the UK in another 2012 article.

Far more people should read Carl's old blog posts.

Comment author: Daniel_Eth 14 February 2017 07:58:23AM 0 points [-]

Thanks for the link - hopefully 80000hours is able to convince some EAs to go into politics.

Comment author: Peter_Hurford  (EA Profile) 14 February 2017 04:28:35AM *  3 points [-]

While it likely is true of some EAs, it's a simplistic straw man to assume that those of us who favor donating to AMF (though in practice I prefer donating to research and meta-charity more) do so due to risk aversion. Saying that would require knowing, with confidence, the expected value of a donation to MIRI.

I certainly would prefer to donate to a 0.01% chance of saving 11K lives than a 100% chance of saving a life. But I don't actually know that MIRI actually represents a superior expected value bet.

(See some discussion about MIRI's chance of success here and here).

Comment author: Daniel_Eth 14 February 2017 06:03:55AM 2 points [-]

Obviously different people have different motivations for their donations. I disagree that it's a straw man, though, because I wasn't trying to misrepresent any views and I think risk aversion actually is one of the main reasons that people tend to support causes such as AMF that help people "one at a time" over causes that are larger scale but less likely to succeed. MIRI's chance of success wasn't central to my argument - if you think it has basically zero net positive then substitute in whatever cause you think actually is positive (in-vitro meat research, CRISPR research, politics, etc). Perhaps you've already done that and think that AMF still has higher expected value, in which case I would say you're not risk averse (per se), but then I'd also think that you're in the minority.

Comment author: AGB 14 February 2017 08:28:10PM 3 points [-]

For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they've done themselves, nor because of risk aversion, but rather because they've largely-or-entirely outsourced their donation decision to Givewell. Givewell has also written about this in some depth, back in 2011 and probably more recently as well.


Key quote:

"This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably)."

Comment author: Linch 17 February 2017 10:53:27AM 1 point [-]

An added reason to not take expected value estimates literally (which applies to some/many casual donors, but probably not to AGB or GiveWell) is if you believe that you are not capable of making reasonable expected value estimates under high uncertainty yourself, and you're leery of long causal chains because you've developed a defense mechanism against your values being Eulered or Dutch-Booked.

Apologies for the weird terminology, see: http://slatestarcodex.com/2014/08/10/getting-eulered/ and: https://en.wikipedia.org/wiki/Dutch_book

Comment author: Daniel_Eth 15 February 2017 02:47:59AM 1 point [-]

I think it's true that many outsource their thinking to GW, but I think there could still be risk aversion in the thought process. Many of these people have also been exposed to arguments for higher risk higher reward charities such as X-risks or funding in-vitro meat research, and I think a common thought process is "I'd prefer to go with the safer and more established causes that GW recommends." Even if they haven't explicitly done the EV calculation themselves, qualitatively similar thought processes may still occur.

Comment author: RomeoStevens 17 February 2017 08:56:20AM 2 points [-]

Donating to FHI is still extremely safe on the weirdness spectrum. They're part of Oxford. Actual risky stuff would be paying promising researchers directly in non-tax deductible ways. But this is weird enough to trip people's alarms. You get no accolades for doing this, in fact quite the opposite, you will lose status when the 'obviously crazy' thing fails. We see the same thing in VC funding, where this supposed bastion of frontier challenging risk takers mostly engages in band-wagoning.

Comment author: Daniel_Eth 17 February 2017 10:03:05AM 0 points [-]

Is there any tax-deductible way to give promising researchers money directly (or through some third party that doesn't take a cut)? Seems like someone could set up a 501c3 that allowed for that pretty easily.

Comment author: BenMillwood  (EA Profile) 25 February 2017 01:46:43PM *  0 points [-]

One thing which makes me think object-level risk[1] is important in for-profit investing, but less central in charitable work, is that I'm more confident that for-profit risk is priced correctly, or at least not way out of line with what it should be. It seems more plausible to me that there are low-risk, high-return charitable opportunities, because people are generally worse at identifying and saturating those opportunities. (Although per GiveWell's post on Broad market efficiency, I now believe this effect is much less striking than I first guessed.)

[1] I'm not sure this is a correct application of "object-level", but I mean actual risk that a given investment will succeed or fail, rather than the "meta" risk that we'll fail to analyse its value correctly. I'm not super confident the distinction is meaningful.