
Where I'm giving and why: Will MacAskill

Summary: I donated this year for two different reasons. I donated to AMF, SCI, Deworm the World, Project Healthy Children, and the Copenhagen Consensus Center because I want to demonstrate my support for extremely cost-effective first-order charities. But I think that the highest-expected-value donation opportunities are donations to 'meta' organisations. I think that the Centre for Effective Altruism (CEA), the Centre for the Study of Existential Risk (CSER), and setting up an Effective Altruism Fund are the three big contenders for best expected bang per buck. Even after trying to correct for obvious personal bias, I think that CEA wins out for my comparatively small donation; if I had info that the relevant position at CSER wouldn't be funded anyway, and if I had more to give (e.g. ~$70k) then I think that CSER would be better.

0. Introduction

I divided my donations into two classes.

First, because my donations partly represent the Centre for Effective Altruism as a whole, I use about half of my donations as a way of demonstrating my support for highly cost-effective organisations that lie outside the core effective altruism community. This year I donated to AMF, SCI, Deworm the World, Project Healthy Children, and the Copenhagen Consensus Center. (Because these have the purpose of showing my support, the usual arguments in favour of giving to just one charity don’t apply). These were all recommended by GWWC except the Copenhagen Consensus Center – I chose to donate to them because I think that promoting global prioritisation is one of the most important activities to be doing, and, though harder to be sure about, their influence is potentially measured in the hundreds of millions or billions of dollars. (My main worry with them is that Lomborg’s (inaccurate) reputation as a climate skeptic might taint the idea of global prioritisation.) I didn’t donate to GiveDirectly because I don’t find GiveWell’s arguments for their ‘upside’ potential to be compelling as a reason to donate on the margin. (Some discussion here, though this post is now a year old).

The second half of my donations I used in whatever way I believe will maximise expected impact, given my particular values and knowledge. I think there are three main contenders.

1. Donating unrestricted to the Centre for Effective Altruism

I think that building the effective altruism movement is in general the highest-value near term activity, for two reasons:
  1. Leverage. $1 donated to movement-building generates significantly more than $1’s worth of money and time for the highest-value causes. My view is that this rate of return is far beyond what one could get through financial investment. I’ve written about this before here.
  2. Keeping options open. Effective altruists’ donations and use of time are sensitive to the best evidence. I expect our evidence about the best uses of time and money to get much better over the next few years.  This would motivate saving and giving later  (see here for more arguments on either side) – but donating to movement-building is a way of doing that, except with a higher rate of return.
The best precise donation opportunity in this area, in my view, is to CEA, unrestricted such that it can be spent on projects other than Giving What We Can and 80,000 Hours. There are a number of new projects that are plausibly even higher-impact, on the margin, than GWWC/80k, but which don’t have a track record, and so are comparatively difficult to fundraise for. For example, we’ve recently been involved in a number of meetings with people in the UK government, and we wish to develop ideas from CEA and FHI into policy-relevant documents. We are hiring an Oxford academic to help with this (among other things).

CEA unrestricted is where I ultimately chose to give. There is obviously a great risk of bias: it would be an awful coincidence if the organisation I cofounded really were the most cost-effective use of money on the margin. But, even after trying to correct for these biases, CEA remained my top choice. (A partial response to the ‘coincidence’ worry is that very few organisations have ever been started with the aim of maximising the good they do, where the founders’ view of ‘maximising the good’ is the same as or similar to mine (i.e. understood in effective altruist terms).)

2. Donating to the Centre for the Study of Existential Risk

From what I know, this seems like an opportunity for great leverage. Marginal donations would fund program management and grant-writing, which could turn ~$70k into a significant chance of ~$1–10mn and launch what I think might become one of the most important research institutions in the world. They have high-profile people on the board, a Cambridge affiliation, and a previously written grant proposal (for ~$10mn) that very narrowly missed being funded. It seems to me that they just need a bit of start-up money in order to hire someone to apply for major grants.

I’ve only been thinking of them as a donation opportunity recently, and would need to know more about CSER’s funding situation before being confident in the above views. For example, if Jaan Tallinn is already paying for a program manager or grant writer, then this leverage opportunity might already have been taken. This expenditure is also pretty lumpy, and I don't expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn't as good as 1/50th of the value of a program manager. For those with a larger amount to give, the situation is different.

3. Setting up an Effective Altruism Fund

The idea is that this would be strictly better than saving and donating later: people who are saving in order to donate later tell some central body – the EA fund – how much they’re saving. The EA fund can then advertise the fact that it has a significant amount of money waiting for a sufficiently high-impact opportunity. This could have the following benefits:
  1. Incentivises new effective altruism start-ups, which could get seed funding from it (e.g. Effective Fundraising needed just a few thousand dollars – a use of money that I think was clearly worth it).
  2. Can be used as loans e.g. for people who want to pursue earning to give but need to retrain.
  3. Can be used as ‘emergency reserves’ for several organisations. (Insofar as the risks of running out of funding are only partially correlated across EA organisations, it makes sense to have a shared pool of funding rather than separate reserves for each organisation; the total amount of reserves needed is then lower.)
  4. Can be used as insurance for individuals. Insofar as individuals will be more risk-averse with respect to money when thinking about their own self-interest than when thinking altruistically, we should expect people to be biased in the direction of playing it safe in their career choices. This could be a way of encouraging people to take more risks – by giving them a safety net if things turn out badly.   (Note: though this makes sense in theory, I haven’t seen examples of it in practice, perhaps because there are few paths that result in total failure if they don’t work out. It might apply for someone, for example, thinking about entrepreneurship versus consultancy and worried by the failure rate of new start-ups.)
My view is that this is better than saving and donating later (if you have donations greater than, say, $20,000). But I don’t think it’s as good, right now, as CEA or CSER donations.

GiveWell would have been on the above list of contenders if their funding needs were greater. I think they’re right to want to diversify their funding. But being the very best place to donate is a high bar, and donating to extend GW’s reserves, when they already have GoodVentures as a fall-back, seems unlikely to me to be the best use of funds. Since their first blog post asking for donations, they’ve already raised $250,000; I expect them to soon raise enough to cover next year’s expenses. (Or to figure out a way of being funded by GoodVentures without the problems). As a comparison: whereas GiveWell is looking to raise enough reserves to cover 24 months (in line with their excess assets policy; this includes future pledged donations), because of rapid growth 80,000 Hours and Giving What We Can have recently been operating on only a few months of reserves.

Comments (16)

Comment author: CarlShulman 31 December 2013 02:19:00AM 1 point

"setting up an Effective Altruism Fund"

The cheap and easy first step along these lines would be for CEA to make a page on its website where people saving to donate later, or putting money in Donor Advised Funds, could register the amounts saved/invested and their intentions for the funds (you could even just use a Google Form). This would allow evidence of interest to accumulate.

You don't need $20,000 to do it, just a bit of staff time, and there are definitely people saving for later or using DAFs (Peter Hurford just posted about his plans along those lines, earlier in this blog series).

"Even after trying to correct for obvious personal bias, I think that CEA wins out for my comparatively small donation; if I had info that the relevant position at CSER wouldn’t be funded anyway, and if I had more to give (e.g. ~$70k) then I think that CSER would be better...This expenditure is also pretty lumpy, and I don’t expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn’t as good as 1/50th of the value of a program manager. For those with a larger amount to give, the situation is different."

Why not make a long odds bet with a wealthy counterparty or use high-risk derivatives to get a chance at making the large donation? In principle, economies of scale like this should always be subject to circumvention at modest cost.

Also see: http://www.indiegogo.com/

"that Lomborg’s (inaccurate) reputation as a climate skeptic might taint the idea of global prioritisation.)"

He does accept the scientific consensus and relies on IPCC figures, but he does seem to spend a really disproportionate portion of his writing and speaking on the idea of trading off climate mitigation costs against more effective interventions. It is far less common to see him pitting highly effective global public health interventions against farm subsidies, military spending, social security, rich country health care, tax cuts, or other non-climate competing expenditures.

Comment author: Matt_Wage 05 January 2014 03:50:00AM 0 points

I like Carl's idea for the EA fund. I have some money in a DAF that I would register on an "EA fund website".

Comment author: Pablo_Stafforini2 17 March 2014 03:47:00AM 0 points

"Why not make a long odds bet with a wealthy counterparty or use high-risk derivatives to get a chance at making the large donation? In principle, economies of scale like this should always be subject to circumvention at modest cost."

Yes, as Paul Christiano writes,

In principle some opportunities might only be accessible for big donors. $1B might go more than a thousand times farther than $1M. But if we are really only interested in expectations, then we can always just take a 1000:1 bet and turn our $1M into a 1/1000 chance of $1B. This guarantees that there are no increasing returns to money.
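A minimal sketch of the expected-value arithmetic behind this argument, using an invented value multiplier (the 1500x figure is purely illustrative):

```python
# If $1B does more than 1000x the good of $1M, then a fair 1000:1 bet
# (turning $1M into a 1/1000 chance of $1B) has higher expected impact
# than donating the $1M directly.
small = 1_000_000
big = 1_000_000_000
multiplier = 1500.0             # hypothetical: $1B does 1500x the good of $1M

value_direct = 1.0              # normalise the value of donating $1M to 1
p_win = small / big             # fair odds: 1/1000
value_bet = p_win * multiplier  # expected value of the bet = 1.5

assert value_bet > value_direct
```

Under these assumptions the bet is worth 1.5x the direct donation; whenever the multiplier exceeds the odds, the bet wins in expectation.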

Comment author: Jess_Riedel 31 December 2013 06:51:00AM 0 points

Will, are you saying that this fund would basically just be a registry? (As opposed to an actual central collection of money with some sort of manager.)

Do you really think people would just send money to 1st-world strangers (benefit 2 above) on the promise that the recipient was training to earn to give? I have similar misgivings about benefit 4.

Comment author: Niel_Bowerman 02 January 2014 08:11:00PM 0 points

In addition to Carl's comments on why the registry would be easier, it has the added benefit that people retain control of their own funds and will thus be more willing to contribute to the 'fund'.

"Do you really think people would just send money to 1st-world strangers (ii) on the promise that the recipient was training to earn to give?" They needn't be strangers. This has already happened in the UK EA community amongst EAs who met through 80,000 Hours and supported each other financially in the early training and internship stages of their earning to give careers.

Comment author: Jess_Riedel 02 January 2014 09:06:00PM 0 points

> They needn't be strangers. This has already happened in the UK EA community amongst EAs who met through 80,000 Hours and supported each other financially in the early training and internship stages of their earning to give careers.

Agreed, but if the funds are effectively restricted to people you know and can sort of trust, then the public registry loses most of its use. Just let it be known among your trusted circle that you have money that you'd be willing to share for EA activities. This has the added benefit of not putting you in the awkward position of having to turn down less-trusted folks who request money.

Comment author: Niel_Bowerman 02 January 2014 09:15:00PM 0 points

Yes, unless you were able to meet with people and create time to develop the necessary trust. Also, like any grant-making foundation, I wouldn't expect people in the registry to fund all or even most of the opportunities that came along, though the registry would lose some of its value if it appeared unlikely to give out donations to good projects.

Comment author: Owen_Cotton-Barratt2 05 January 2014 06:37:00PM 0 points

I don't know about the appropriate legal hurdles, but if you wanted to scale this, you would set it up as a loan with a reasonable interest rate rather than a gift. That way the individual needs to trust the central body which is making the loan (that it will use the money raised for good ends), rather than the central body trusting the individual. This is a much lower bar to cross.

Comment author: XXY52891 01 January 2014 03:37:00PM 0 points

Why CSER and not FHI?

Comment author: CarlShulman 01 January 2014 11:06:00PM 0 points

CSER is at startup stage, with a lot of valuable resources going underutilized, so it looks more leveraged.

Comment author: Jess_Riedel 02 January 2014 09:21:00PM 0 points

My impression is more that FHI is at the startup stage and CSER is simply an idea people have been kicking around. Whether or not you support CSER would depend on whether or not you think it's actually going to be instantiated. Am I confused?

Comment author: Niel_Bowerman 02 January 2014 09:26:00PM 0 points

I'm not sure of the exact numbers, but my impression is that FHI has perhaps half a dozen full-time staff members, while CSER has one part-time person, based in FHI, who has been working on grant applications. I am unclear about the long-term financial viability of having this person working on applying for grants.

Comment author: Howtogive 02 January 2014 08:20:00PM 0 points

You say the usual arguments to give to only one charity don't apply to you because you want to show your support, but isn't that one of the bad arguments listed on the page you link to? If so, I presume you have a good reason why you still donate to multiple charities; but presumably most people do not have that reason (otherwise it would not be a bad argument). So I wondered why you have this reason and whether I and others might have it too, and, if we do, whether you ought to amend the article on giving to one charity so that people do not give at less than maximum efficiency.

Comment author: Niel_Bowerman 02 January 2014 09:00:00PM 0 points

I would imagine Will donates to multiple charities because the impact of his donations comes primarily through their ability to inspire others to donate. Because of Will's profile as a columnist and public intellectual, he often meets with potential donors who favour one of his recommendations over the others, and Will is able to say that he also donates to that charity, which may increase the likelihood of donations via the "actions speak louder than words" heuristic.

This would apply to others if they believe that {the impact of the donations they can inspire by giving to multiple charities} − {the impact of the donations they can inspire by giving only to their top recommended charity} is greater than {the impact of donating everything to their top recommended charity} − {the impact of instead splitting across multiple charities}. Presumably Will believes that this inequality holds in his case. The exact quantities of donations that you need to be able to inspire for it to hold depend on your assessment of the relative efficiencies of the different charities that you are considering donating to. Of course, in reality these quantities are virtually impossible to calculate, so there is always going to be significant uncertainty associated with this decision.
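To make the inequality concrete, here is a toy calculation with entirely invented figures (all measured in donation-equivalent impact):

```python
# All numbers are invented, purely to illustrate the inequality.
inspired_if_split = 10_000   # impact of donations inspired by giving to several charities
inspired_if_single = 7_000   # impact of donations inspired by giving only to the top charity
direct_if_single = 5_000     # direct impact of concentrating one's own donation
direct_if_split = 4_000      # direct impact of splitting it

extra_inspired = inspired_if_split - inspired_if_single  # 3,000
direct_loss = direct_if_single - direct_if_split         # 1,000

# Splitting wins iff the extra inspired donations outweigh the direct loss.
assert extra_inspired > direct_loss
```

Under these invented numbers the extra inspired giving (3,000) outweighs the direct efficiency loss (1,000), so splitting would be justified; with other numbers the inequality could easily flip.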

It is also possible that Will is using some variant of the argument used by Julia Wise: "I wouldn’t want the whole effective altruist community to donate to only one place. So I’m okay with dividing things up a bit." /ea/5l/where_im_giving_and_why_julia_wise/

It is also interesting to note that many of the GiveWell staff have chosen to donate to only one of their recommendations, presumably because they agree that they can have more impact that way. http://blog.givewell.org/2013/12/12/staff-members-personal-donations/

Comment author: Jess_Riedel 02 January 2014 09:18:00PM 0 points

I think the claim, which I do not necessarily support, would be this: many people give to multiple orgs as a way of selfishly benefiting themselves (by looking good and affiliating with many good causes), whereas a "good" EAer might spread their donation across multiple orgs as a way to (a) persuade the rest of the world to accomplish more good or (b) coordinate better with other EAs, à la the argument you link with Julia. (Whether or not there's a morally important distinction between the layman and the EAer as things actually take place in the real world is a bit dubious. EA arguments might just be a way to show off how well you can abstractly justify your actions.)

Comment author: Pablo_Stafforini2 13 March 2014 11:09:00PM 0 points

"This expenditure is also pretty lumpy, and I don’t expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn’t as good as 1/50th of the value of a program manager."

Carl Shulman explores the implications of this claim in this post.