Comment author: Peter_Hurford  (EA Profile) 14 January 2018 01:37:32AM *  3 points

Great to see such thorough analysis!

Do you think the time costs will go down next year? That strikes me as the most significant drawback to this project. "500 hours or more of people’s time" for ~$48k matched is ~$96/hr, which is good, but appears to be on the lower end of the fundraising ROI figures I've tracked.
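
To make the arithmetic explicit, here is the calculation as a minimal sketch (the $48k and 500-hour inputs are the rough figures quoted above, not exact accounting):

```python
# Implied fundraising ROI in dollars matched per hour of people's time.
funds_matched = 48_000  # ~$48k matched (rough figure quoted above)
hours_spent = 500       # "500 hours or more of people's time"

roi_per_hour = funds_matched / hours_spent
print(f"~${roi_per_hour:.0f}/hr")  # ~$96/hr; an upper bound, since the hours may be understated
```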

Still, this project seems to at least be competitive with Project for Awesome. I also suspect the value of bringing EAs together to work on a clear, common goal is an underrated form of impact through building community bonds and cohesion.

Comment author: Peter_Hurford  (EA Profile) 13 January 2018 07:15:00PM 1 point

What is SoGive? It looks like an online platform where you look up the impact per dollar of different charities?

Comment author: Peter_Hurford  (EA Profile) 13 January 2018 12:47:49AM 3 points

Far be it from me to rain on the parade of someone who wants to do good with their windfall -- I do admire that -- but I downvoted this post because it does not have any direct EA relevance.

Comment author: MichaelPlant 12 January 2018 12:11:13AM *  6 points

I worry you've missed the most important part of the analysis. If we think about what it would mean for a "new cause to be accepted by the effective altruism movement", that would probably be either:

  1. It becomes a cause area touted by EA organisations like GiveWell, CEA, or GWWC. In practice, this involves convincing the leadership of those organisations. If you want to get a new cause in via this route, that's the end goal you need to achieve; writing good arguments is a means to that end.

  2. You convince individual EAs to change what they do. To a large extent, this also depends on convincing EA-org leadership, because that's who people look to for confirmation that a new cause has been vetted. It isn't necessarily stupid for individual EAs to defer to expert judgement: they might think "Oh, well, if so-and-so aren't convinced about X, there's probably a reason for it".

This seems as good a time as any to re-plug the stuff I've done. I think these mostly meet your criteria, but fall short in some key ways.

I first posted about mental health and happiness 18 months ago, explaining why poverty is less effective than most think and mental health more effective. At the time, though, I was lacking a particular charity recommendation (I now think Basic Needs and Strong Minds look like reasonable picks); I agree it's important that new cause suggestions have a 'shovel-ready' project.

I argued that you, whoever you are, probably don't want to donate to the Against Malaria Foundation, explaining that it's probably a mistake for EAs to focus too much on 'saving lives' at the expense of either 'improving lives' or 'saving humanity'.

Back in August, I explained why drug policy reform should be taken seriously as a new cause. I agree that it lacks a shovel-ready project too, but, if anything, I think there was too much depth and rigour there. I'm still waiting for anyone to tell me where my EV calcs have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.

Comment author: Peter_Hurford  (EA Profile) 12 January 2018 04:05:09PM 4 points

I'm still waiting for anyone to tell me where my EV calcs have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.

One thing I'd note here is that the rigor of GiveWell's analyses and the rigor of your EV calcs are very different. There are other EV calcs out there with similar rigor that promise significantly more good stuff per dollar, such as most work in the far-future cause-space.

I argued that you, whoever you are, probably don't want to donate to the Against Malaria Foundation, explaining that it's probably a mistake for EAs to focus too much on 'saving lives' at the expense of either 'improving lives' or 'saving humanity'.

I'd also note that GiveWell replied to your argument here: https://blog.givewell.org/2016/12/12/amf-population-ethics/

Comment author: Peter_Hurford  (EA Profile) 27 December 2017 04:36:28PM 7 points

He ended up donating $5M to GiveDirectly. People are giving him grief in the announcement though... maybe we should give him some love?

Comment author: Gregory_Lewis 19 December 2017 12:38:18AM 2 points

[Note: I work on existential risk reduction]

Although I laud posts like the OP, I'm not sure I understand this approach to uncertainty.

I think a lot turns on what you mean by the AI cause area being "Plausibly better" than global poverty or animal welfare on EV. The Gretchenfrage (the crucial question) seems to be this conditional forecast: "If I spent (let's say) 6 months looking at the AI cause area, would I expect to identify better uses of marginal funding in this cause area than those I find in animal welfare and global poverty?"

If the answer is "plausibly so, but probably not" (either due to a lower 'prima facie' central estimate, or after pricing in regression to the mean etc.), then I understand the work uncertainty is doing here (modulo the usual points about VoI): one can't carefully look at everything, and one has to make some judgments on what cause areas look most promising to investigate on current margins.

Yet if the answer is "Probably, yes", then offering these recommendations simpliciter (i.e. "EA should fully fund this") seems premature to me. The evaluation is valuable, but it should be presented with caveats like, "Conditional on thinking global poverty is the best cause area, fund X; conditional on thinking animal welfare is the best cause area, fund Y (but, FWIW, I believe AI is the best cause area; I just don't know what to fund within it)." It would also lean against making one's own donations to X, Y, etc., rather than spending time thinking about it or following the recommendations of someone one trusts to make good picks in the AI cause area.

Comment author: Peter_Hurford  (EA Profile) 19 December 2017 02:10:49AM 3 points

If the answer is "plausibly so, but probably not" (either due to a lower 'prima facie' central estimate, or after pricing in regression to the mean etc.)

This is what captures my views best right now.

Comment author: Buck 18 December 2017 09:31:14PM 1 point

I don't understand how that logic leads to thinking it's a good idea to donate to the causes you're thinking of donating to. Donating to a cause area because you can identify good projects within it seems like the streetlight effect.

If you think that AI stuff is plausibly better, shouldn't you either want to learn more about it or enter a donor lottery so that it's more cost-effective for you to learn about it?

Comment author: Peter_Hurford  (EA Profile) 18 December 2017 10:01:34PM 3 points

My excuses in order of importance:

1.) While I do think AI as a cause area could plausibly be better than global poverty or animal welfare, I don't think it's so much better that its expected value, given my uncertainty, dwarfs that of my current recommendations.

2a.) I think I'm basically okay with the streetlight effect. I think there's a lot of benefit in donating now to support groups that might not be able to expand at all without my donation, which is what the criteria I outlined here accomplish. Given that the entire EA community is collaborating as a whole, I think there's less need for me to spend tons of time making sure my donations are as cost-effective as possible, and more just a need to clear the bar of being "better than average". I think my recommendations here accomplish that.

2b.) Insofar as my reasoning in (2a) reflects some "streetlight effect" bias, I think you could accuse nearly anyone of this, since very few people have thoroughly explored every cause area and no one can fully rule out being wrong about one.

3.) There is still more I could donate later. This money is being saved mainly as a hedge against large financial uncertainty in my immediate future, but it could also be used as savings to donate later when I learn more.

Comment author: Carl_Shulman 14 December 2017 03:05:09AM *  16 points

organizations working outside these areas, such as those working on existential risk and far future. My impression, however, is that OpenPhil has done a good job filling up the funding gaps in this area and that there are very few organizations that would meet the criteria I’m using for these recommendations.

[Disclaimer: speaking only for myself, although I do some work for Open Phil.]

I think that many EAs are overestimating the degree to which this funding changes the marginal returns of individual donations, for a few reasons:

  • In a number of these cases, the Open Philanthropy Project grants discuss intentions to take up a percentage of the grantee's budget, and a preference not to exceed half of it; a desire to avoid single-donor funding issues creates opportunities for small donors, as I discussed in this post
  • If a large donor limits itself to half of the grantee's budget, then not only is there 'room for more funding' left for other donors, but the limit also implicitly acts as a delayed counterfactual 1:1 matching grant, since each small-donor dollar allows for another large-donor dollar (less the opportunity cost of Open Philanthropy's 'last dollar'; but insofar as one isn't just topping up Open Philanthropy's reserves, one presumably aims to do better than that). This could largely offset diminishing returns for the marginal donor; see the sketch after this list
  • Where 'room for more funding' suggests a steep cliff of diminishing returns, in reality diminishing returns are normally much smoother, as additional funds enable reserves, marginal expenditures, and openness to and pursuit of additional expansion; see the linked articles by Max Dalton and Owen Cotton-Barratt
  • Concretely, I think small donors could 'top up' many of the AI grants in the Open Philanthropy grant database and get marginal cost effectiveness within a factor of 2-4 of the average cost-effectiveness of the dollars in the relevant grant
  • In cases where the topping up would work better with larger amounts (e.g. $100,000 or $500,000) because of transaction costs (e.g. working with academic labs, or asking for advice on how to do it), small donors can make use of a donor lottery to convert their donation into a 1/n chance of a donation n times as great for which the transaction costs are manageable
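
As a toy illustration of the half-of-budget mechanism and the donor lottery described above (a minimal sketch; the 50% cap and the dollar figures are assumptions for the example, not Open Philanthropy's actual grantmaking policy):

```python
# Toy model of the two mechanisms above; the 50% cap and the dollar
# figures are assumptions for illustration, not Open Philanthropy policy.

def max_large_donor_funding(small_donor_total: float, cap: float = 0.5) -> float:
    """Largest L satisfying L <= cap * (L + S): the most a large donor
    capped at `cap` of the total budget can give when small donors give S."""
    return small_donor_total * cap / (1 - cap)

# With a 50% cap, L <= S: each small-donor dollar "unlocks" up to one
# additional large-donor dollar (an implicit 1:1 match).
print(max_large_donor_funding(10_000))  # 10000.0

# Donor lottery: a 1/n chance of donating n times as much leaves the
# expected donation unchanged while amortizing transaction costs over
# one larger grant decision.
n, donation = 100, 1_000
expected_donation = (1 / n) * (n * donation)
assert expected_donation == donation
```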

In my view, the larger shift induced by Open Philanthropy is that the returns to using one's labor, knowledge, and other resources to create opportunities it will find competitive have gone up (since such opportunities are more likely to be able to grow later if successful). That is a boost for several of the organizations you mention, but it can also apply to larger organizations whose activity tends to produce those opportunities through channels other than being a new organization (e.g., by building pipelines for new scientists or activists, research that better prioritizes options, or demonstrations that technical projects can make progress).

So I don't think that the arguments in the post are sufficient to establish this:

while I think some organizations may be more impactful per dollar overall, the marginal donation is not as useful, as they are highly likely to have been able to fundraise it already with much less effort and there is less at risk (e.g., whether a program happens at all versus whether it is scaled up further).

I agree that CSH looks attractive for a donor who would otherwise give to AMF, that WASR and SI make sense for a donor who might otherwise give to The Humane League (as demonstrated by, e.g. Lewis' EA Funds grants), and that providing access to donation methods for Canadian donors could pay for itself for those donors (with some caveats about distributional details, and due diligence).

However, I don't think that increased Open Philanthropy funding provides adequate reason to dismiss the cause area of existential risk reduction for marginal funds (and in fact my own view is that the most attractive marginal opportunities lie in that area, directly or indirectly).

Comment author: Peter_Hurford  (EA Profile) 18 December 2017 05:21:22PM 1 point

Thanks Carl, it's good to know that there are RFMF (room for more funding) opportunities in topping up AI grants.

My reasoning for not donating to AI projects right now is based much less on an RFMF argument and more on not knowing enough about the space. I think I know enough about opportunities in global poverty, animal welfare, and EA community building to recommend projects there with confidence, but not for AI. I expect it would take me a good deal of time to develop the relevant expertise in AI to consider it properly. I have thought about working to develop that expertise, but so far I have not prioritized doing so.

Comment author: Katja_Grace 13 December 2017 08:51:41PM 5 points

Do you have quantitative views on the effectiveness of donating to these organizations that could be compared to other actions? (Or could you point me to any links that go to something like that?) Sorry if I missed them.

Comment author: Peter_Hurford  (EA Profile) 18 December 2017 05:17:59PM 3 points

I focused more on identifying organizations that met the three criteria I outlined and then vetting them individually. Because I was just looking for organizations I felt confident were "good enough to be considered above average", I have not yet taken the time to develop quantitative views for them. I'm also not sure whether such views would be useful.

For Charity Science Health, I'd rely on "What is the expected value of creating a GiveWell top charity?". While it was published in Dec 2016, I revisited the underlying numbers in May 2017 and Dec 2017 and found them to still be roughly the same. Notably, this estimate is for the value of time spent on the project rather than the value of marginal funding, but I think the two would be roughly equivalent.

For the Sentience Institute or the Wild-Animal Suffering Research Institute, I have a rough guess as to the value of cause prioritization efforts generally speaking, and I think these organizations would fall under that. Again, this estimate looks at the value of time spent rather than the value of marginal funding, but that shouldn't really matter.

For Rethink Charity, I don't have any quantitative estimates at this time. I tried making one for the Local Effective Altruism Network (LEAN) last year, but was held back by not having any quantitative information about local groups. LEAN has put a lot of time into improving this situation this year, publishing one report and aiming to publish more. This should make constructing a quantitative estimate possible.

Comment author: Peter_Hurford  (EA Profile) 16 December 2017 07:40:07PM 0 points

Is there a recap of what happened in last year's lottery?
