Carl_Shulman comments on My Cause Selection: Michael Dickens - Effective Altruism Forum

Comment author: Carl_Shulman 17 September 2015 02:30:41AM 9 points

It's good that you are sharing the research effort you put into this so that others can critique it, use/reference it, and build on it.

I have assorted comments below with quotes they are responding to.

But after a few conversations with Carl Shulman, Pablo Stafforini, Buck Shlegeris and others, I started more seriously considering the fact that almost all value lives in the far future, and the best interventions are probably those that focus on it.

Some context for my view: while I think there is a strong case for this in terms of additive total utilitarianism, I think the dominance is far weaker when one takes value pluralism and normative uncertainty into account. GiveWell staff have sometimes talked about whether decisions would be recommended if one valued the entire future of civilization at a 'mere' 5 or 10 times the absolute value of a century of the world as it is today. Value pluralism is one reason to apply such a heuristic.

Nick Bostrom has argued that existential risks are substantially worse than other GCRs. Nick Beckstead disagrees, and I find Beckstead’s case persuasive.

The argument there is that, for most risks, GCR versions are much more likely than direct existential-risk versions, and GCRs have some chance of knock-on existential harms. Note that AI risk was excepted there, having been noted as unusual in having a closer link between GCR and existential risk than other risks.

organizations that create new EAs may largely be valuable insofar as they create new donations to GCR charities

New staff members and entrepreneurs are very important in many cases. E.g. the EA movement has supplied a lot of GiveWell/OpenPhil staff, and founders for things like Charity Science and ACE which you mention.

Nate has discussed the fact that AI researchers appear more concerned about safety than they used to be, although it is unclear whether MIRI has had any causal role in bringing this about.

Some of this is definitely the recent surge of progress in AI, e.g. former AAAI president Eric Horvitz mentions that this was important for him and others.

For MIRI's causal influence some key elements I would highlight are:

  • The Singularity Summit playing a key causal role in getting Max Tegmark interested and the FLI created.
  • Bringing the issue to Stuart Russell's attention, resulting in Stuart's activity on the issue, including discussion in the most popular AI textbook, his involvement with the FLI grant program, etc.
  • Contributing substantially to Nick Bostrom's publication of Superintelligence, which played a key role in getting Elon Musk involved (and thus funding the FLI grant program), and eliciting favorable reviews from various others (e.g. Stephen Hawking, Bill Gates, etc.).
  • The technical agenda helping to demonstrate some approaches that could work.
  • Drawing the issue to the attention of a number of the academic researchers who have taken FLI grants, and of some of OpenPhil's advisors.
  • Causing OpenPhil to be quite familiar with the issues, and ultimately to enter the area after seeing the results of the FLI conference, getting a sense of expert opinion, etc, as discussed on their website.

It sounds like Open Phil gave FLI exactly as much money as it believed it needed to fund the most promising research proposals.

ETA: Open Philanthropy has now put up a detailed summary of the reasoning behind the FLI grant, which may be helpful. They also discuss why they have raised their priority for work on AI in this post.

This is an issue that will recur in any area where OpenPhil/GiveWell is active (which will shortly include factory farming, with the new hire and grant program). Here are two of my posts discussing the issues (the first has important comments from Holden Karnofsky about their efforts to manage 'fungibility problems').

One quote from a GiveWell piece:

If you have access to other giving opportunities that you understand well, have a great deal of context on and have high confidence in — whether these consist of supporting an established organization or helping a newer one get off the ground — it may make more sense to take advantage of your unusual position and "fund what others won't," since GiveWell's research is available to (and influences) large numbers of people.

Also, you likely won't have zero effect, but would likely shift the budget constraint, so you could think of your donation as expanding all of Good Ventures' grants roughly in proportion to their size, which will be diversified and heavy on GiveDirectly. Or at least you could do that if they all had similar diminishing returns curves. If some have flatter curves (perhaps GiveDirectly) in Good Ventures' calculus then marginal funds would go disproportionately to those.

But if Open Phil does produce recommendations for small donors, it’s likely that one or some of these recommendations will represent better giving opportunities than any existing GCR charities.

That's a surprising claim. Probably it would recommend an existing charity. Maybe what you mean is that your expected value for any given GCR charity given what you know now is less than your expectation would be for the charities OpenPhil will recommend, given knowledge of those recommendations?

Or maybe you mean that OpenPhil's recommendations are likely to be charities that exist but that you currently don't know of?

My comment was too long to fit in the 1000 word limit, so the remainder is below.

Comment author: Carl_Shulman 17 September 2015 02:30:59AM 9 points

My comment was too long, so here's the rest:

it’s plausible that the factory farming grant will do perhaps two or three orders of magnitude more good per dollar than the GiveDirectly grant and should be receiving a lot more money

Is this something like unweighted QALYs per dollar? If you are analyzing in terms of long-run effects on the animal population, as elsewhere in the piece, those QALYs are a red herring. E.g. a tiny increase in economic activity very faintly expediting economic growth will overwhelm the direct QALYs involved, via future populations. From the long-run point of view, things like changes in economic output, human populations, carbon emissions, human attitudes about other animals, and such would be the relevant metrics, and they don't scale with QALYs (this is made blatantly clear if one considers things like flies and ants). From the tiny-animal focus (with no accounting for differences in nervous system scale), the large farm animals will be negligible compared to various effects on tiny wild animals. If one considers neural processes within and across animals, then the numbers will be far less extreme.

Now, as I said at the start of this comment, normative pluralism and such would suggest not allowing complete dominance of long-run QALYs over current ones, but comparisons in terms of QALYs here don't track the purported long-run impacts, and if one focused only on unweighted animal QALYs without worrying about long-run consequences it would lead one away from farm animals towards wild animals.

Good Ventures currently pays for GiveWell’s operating expenses,

Not true. Previously GiveWell had capped Good Ventures contributions at 20% of GiveWell's budget. Recently they changed it to 20% for GiveWell's top charities work, and 50% for the Open Philanthropy Project (reasoning that Good Ventures is the main customer of the latter at this time, so it is reasonable for it to bear a larger share).

I don’t know how much influence Good Ventures has over GiveWell’s activities. I’m considerably less confident in Good Ventures’ cause prioritization skills than GiveWell’s, so I prefer that GiveWell has primary control over Open Phil’s cause selection, although I don’t know how much this matters. I wish GiveWell were more transparent about this.

Well, nothing is going to force Good Ventures to hand over billions of dollars if it disagrees with the OpenPhil recommendations (and last year there was some disagreement between GW and GV about allocations to the different global poverty charities). But this does seem like a serious consideration in support of outside donation to OpenPhil, and I think you may be underrating this donation option.

If ACE discovers that popular interventions are much less effective than previous evidence showed, to the point where GW top charities look more effective and more donors start giving to GW charities instead, the maximum impact here would be the impact had by marginal donations to GW top charities.

You only consider the case where it finds that all the current popular animal interventions are very poor. If many or most but not all are, then it could support productive reallocation from the ones that don't work as well to the ones that work better, potentially multiplying effectiveness severalfold. That's in fact the usual justification given by people in the animal charity community for doing this kind of research, but doesn't appear at all here. So I think the whole discussion of #3 has gone awry. Also the 'several orders of magnitude' claim appears again here, and the issues with QALYs vs metrics that better track long-run changes (e.g. attitude changes, population changes, legal changes) recur.

Charity Science has successfully raised money for GiveWell top charities (it claims to have raised $9 for every $1 spent)

Although note that that is valuing staff time at below minimum wage. If you valued it closer to opportunity cost (or salaries at other orgs), the ratio would be far lower. I still think Charity Science is promising and deserving of support because of the knowledge it has produced, and I suspect its fundraising ratios will improve, but at the moment the ratio of EA resources put in to fundraising success is still on the lower end. See this discussion on the EA Facebook group.

I decided to keep all the weights relatively close together because I do not have strong confidence about how much good each of these categories do. I might be able to make an inside-view argument that, say, MIRI is 1000x more effective than anything else on this list, but from the outside view, I shouldn’t let such an argument carry too much weight.

Shouldn't the same caveat apply to your suggestions earlier in the post about the future being 1000+ times more important than present beings?

Comment author: MichaelDickens (EA Profile) 17 September 2015 03:47:12AM 0 points

I will make a few edits to the document based on your suggestions, thanks.

I have a few more points worth discussing that I didn't want to put into the doc, so I'll comment here:

Some context for my view: while I think there is a strong case for this in terms of additive total utilitarianism, I think the dominance is far weaker when one takes value pluralism and normative uncertainty into account.

Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

On the flow-through effects of global poverty over animal charities: flow-through effects probably do outweigh short-term effects, which means economic growth, etc. may be more impactful than preventing factory farming. But flow-through effects are hard to predict. I meant that effective factory farming interventions probably have much better demonstrable short-term effects than human-focused interventions. Actions that affect wild animals probably have much bigger effects, but again we run into the problem of not knowing whether our actions are net positive or negative. I'd certainly love to see some robust evidence that doing X will have a huge positive effect on wild animals so I can seriously consider supporting X.

Shouldn't the same apply to your claims earlier in the post about the future being 1000+ times more important than present beings, and ACE recommendations being 100-1000+ times better than GiveWell Classic recommendations?

I didn't say ACE recommendations are two to three orders of magnitude better than GiveWell Classic, I said Open Phil's factory farming grant is plausibly two to three orders of magnitude better than GV's grant to GiveDirectly. There are three distinctions here. First, I'm fairly uncertain about this. Second, I expect Open Phil's grant to be based on more robust evidence than ACE-recommended charities, so I can feel more confident about its impact. Third, GD has similar strength of evidence to AMF and is probably about an order of magnitude less impactful. So the difference between a factory farming grant and AMF may be more like one or two orders of magnitude.

I weighted ACE recs as 2x as impactful as GiveWell recs; Open Phil hasn't produced anything on factory farming but my best guess is its results will have an expected value of maybe 2-5x that of current ACE recs (although I expect that the EV of ACE recs will get better if ACE gets substantially more funding), largely because a lot of ACE top charities' activities are probably useless--all their activities look reasonable, but the evidence supporting them is pretty weak so it's reasonable to expect that some will turn out not to be effective.

(EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

After further consideration, I'm thinking I rated GiveWell recs too highly; their weighting should be more like 0.05 instead of 0.1. Most of the REG-raised money for GiveWell top charities went to AMF, although this might shift more toward GiveDirectly in the future, in which case I should give GW recs a lower weight.

When I change the GW rec weighting from 0.1 to 0.05, the weighted donations drop by about $0.1 per $1 to REG. That's enough to make REG look a little weaker, although not enough to make me want to give to an object-level charity instead.
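The arithmetic behind that drop can be sketched as follows. The $2-per-$1 fundraising multiplier and the weight on "other" causes below are hypothetical placeholders for illustration, not figures from this thread; only the 0.1 vs. 0.05 GW-rec weights come from the comment above.

```python
# Sketch of the weighted-donation arithmetic, with hypothetical numbers.

def weighted_value_per_dollar(allocations, weights):
    """allocations: dollars moved to each category per $1 donated to REG;
    weights: relative impact weight assigned to each category."""
    return sum(allocations[cat] * weights[cat] for cat in allocations)

# Hypothetical: $2 moved to GiveWell top charities and $1 elsewhere
# per $1 donated to REG, with an illustrative 0.5 weight on "other".
allocations = {"givewell_recs": 2.0, "other": 1.0}
weights_old = {"givewell_recs": 0.10, "other": 0.5}
weights_new = {"givewell_recs": 0.05, "other": 0.5}

drop = (weighted_value_per_dollar(allocations, weights_old)
        - weighted_value_per_dollar(allocations, weights_new))
# Halving the GW weight from 0.1 to 0.05 reduces weighted donations
# by 2.0 * 0.05 = $0.10 per $1 to REG under these assumptions.
print(drop)
```

Under these assumed numbers the drop comes out to $0.10 per $1 to REG, matching the figure above; with a different multiplier the drop scales proportionally.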

EDIT 2: Actually I'm not sure I should downweight GW recs from 0.1 to 0.05 because I don't know that I have strong enough outside-the-argument confidence that MIRI is 20x better than AMF in expectation. This sort of thing is really hard to put explicit numbers on since my brain can't really tell the difference between MIRI being 10x better and 100x better in expectation. My subjective perception of the probabilities of MIRI being 10x better versus 100x better feel about the same.

Comment author: Carl_Shulman 17 September 2015 04:09:38AM 3 points

Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

Others think that we have special obligations to those with whom we have relationships or reciprocity, or whom we have harmed or been benefited by, or they adopt person-affecting views, although those are hard to make coherent. Others adopt value holism of various kinds, caring about other features of populations like the average and the distribution, although for many parameterizations and empirical beliefs those still favor a strong focus on the long run.

(EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

Right, sounds good.

Comment author: MichaelDickens (EA Profile) 17 September 2015 04:33:30AM 2 points

I find all those views really implausible so I don't do anything to account for them. On the other hand, you seem to have a better grasp of utilitarianism than I do but you're less confident about its truth, which makes me think I should be less confident.

Comment author: MichaelDickens (EA Profile) 17 September 2015 04:36:17AM 1 point

On his old blog Scott talks about how there are some people who can argue circles around him on certain subjects. I feel like you can do this to me on cause prioritization. Like no matter what position I take, you can poke tons of holes in it and convince me that I'm wrong.

Comment author: RyanCarey 17 September 2015 01:41:00PM 4 points

The fact that Carl points out flaws with arguments on all sides makes him more trustworthy!