Carl_Shulman comments on My Cause Selection: Michael Dickens - Effective Altruism Forum


Comment author: Carl_Shulman 17 September 2015 02:30:59AM 8 points

My comment was too long, so here's the rest:

it’s plausible that the factory farming grant will do perhaps two or three orders of magnitude more good per dollar than the GiveDirectly grant and should be receiving a lot more money

Is this something like unweighted QALYs per dollar? If you are analyzing in terms of long-run effects on the animal population, as elsewhere in the piece, those QALYs are a red herring. E.g. a tiny increase in economic activity very faintly expediting economic growth will overwhelm the direct QALYs involved via future populations. From the long-run point of view, things like changes in economic output, human populations, carbon emissions, and human attitudes about other animals would be the relevant metrics, and those don't scale with QALYs (this is made blatantly clear if one considers things like flies and ants). From the tiny-animal focus (with no accounting for differences in nervous-system scale), the large farm animals will be negligible compared to various effects on tiny wild animals. If one considers neural processes within and across animals, the numbers will be far less extreme.

Now, as I said at the start of this comment, normative pluralism and such would suggest not allowing complete dominance of long-run QALYs over current ones, but comparisons in terms of QALYs here don't track the purported long-run impacts, and if one focused only on unweighted animal QALYs without worrying about long-run consequences it would lead one away from farm animals towards wild animals.

Good Ventures currently pays for GiveWell’s operating expenses,

Not true. Previously, GiveWell capped Good Ventures' contributions at 20% of GiveWell's budget. Recently they changed this to 20% for GiveWell's top-charities work and 50% for the Open Philanthropy Project (reasoning that Good Ventures is the main customer of the latter at this time, so it is reasonable for it to bear a larger share).

I don’t know how much influence Good Ventures has over GiveWell’s activities. I’m considerably less confident in Good Ventures’ cause prioritization skills than GiveWell’s, so I prefer that GiveWell has primary control over Open Phil’s cause selection, although I don’t know how much this matters. I wish GiveWell were more transparent about this.

Well, nothing is going to force Good Ventures to hand over billions of dollars if it disagrees with Open Phil's recommendations (and last year there was some disagreement between GiveWell and Good Ventures about allocations to the different global poverty charities). But this does seem like a serious consideration in favor of outside donation to Open Phil, and I think you may be underrating this donation option.

If ACE discovers that popular interventions are much less effective than previous evidence showed, to the point where GW top charities look more effective and more donors start giving to GW charities instead, the maximum impact here would be the impact had by marginal donations to GW top charities.

You only consider the case where it finds that all the current popular animal interventions are very poor. If many or most, but not all, are, then it could support productive reallocation from the ones that don't work as well to the ones that work better, potentially multiplying effectiveness severalfold. That's in fact the usual justification people in the animal charity community give for doing this kind of research, but it doesn't appear at all here. So I think the whole discussion of #3 has gone awry. Also, the 'several orders of magnitude' claim appears again here, and the issues with QALYs vs. metrics that better track long-run changes (e.g. attitude changes, population changes, legal changes) recur.

Charity Science has successfully raised money for GiveWell top charities (it claims to have raised $9 for every $1 spent)

Although note that this is valuing staff time at below minimum wage. If you valued it closer to opportunity cost (or salaries at other orgs), the ratio would be far lower. I still think Charity Science is promising and deserving of support because of the knowledge it has produced, and I suspect its fundraising ratios will improve, but at the moment the ratio of EA resources put in to fundraising success is still on the lower end. See this discussion on the EA Facebook group.

I decided to keep all the weights relatively close together because I do not have strong confidence about how much good each of these categories do. I might be able to make an inside-view argument that, say, MIRI is 1000x more effective than anything else on this list, but from the outside view, I shouldn’t let such an argument carry too much weight.

Shouldn't the same caveat apply to your suggestions earlier in the post about the future being 1000+ times more important than present beings?

Comment author: MichaelDickens 17 September 2015 03:47:12AM 0 points

I will make a few edits to the document based on your suggestions, thanks.

I have a few more points worth discussing that I didn't want to put into the doc, so I'll comment here:

Some context for my view: while I think there is a strong case for this in terms of additive total utilitarianism, I think the dominance is far weaker when one takes value pluralism and normative uncertainty into account.

Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

On the flow-through effects of global poverty versus animal charities: flow-through effects probably do outweigh short-term effects, which means economic growth, etc. may be more impactful than preventing factory farming. But flow-through effects are hard to predict. I meant that effective factory farming interventions probably have much better demonstrable short-term effects than human-focused interventions. Actions that affect wild animals probably have much bigger effects, but there again we run into the problem of not knowing whether our actions are net positive or negative. I'd certainly love to see robust evidence that doing X will have a huge positive effect on wild animals so I can seriously consider supporting X.

Shouldn't the same apply to your claims earlier in the post about the future being 1000+ times more important than present beings, and ACE recommendations being 100-1000+ times better than GiveWell Classic recommendations?

I didn't say ACE recommendations are two to three orders of magnitude better than GiveWell Classic, I said Open Phil's factory farming grant is plausibly two to three orders of magnitude better than GV's grant to GiveDirectly. There are three distinctions here. First, I'm fairly uncertain about this. Second, I expect Open Phil's grant to be based on more robust evidence than ACE-recommended charities, so I can feel more confident about its impact. Third, GD has similar strength of evidence to AMF and is probably about an order of magnitude less impactful. So the difference between a factory farming grant and AMF may be more like one or two orders of magnitude.

I weighted ACE recs as 2x as impactful as GiveWell recs. Open Phil hasn't produced anything on factory farming yet, but my best guess is that its grants will have an expected value of maybe 2-5x that of current ACE recs (although I expect the EV of ACE recs to improve if ACE gets substantially more funding), largely because a lot of ACE top charities' activities are probably useless: all their activities look reasonable, but the evidence supporting them is weak enough that it's reasonable to expect some will turn out not to be effective.

(EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

After further consideration, I think I rated GiveWell recs too highly; their weighting should be more like 0.05 instead of 0.1. Most of the REG-raised money for GiveWell top charities went to AMF, although this might shift more toward GiveDirectly in the future, in which case I should give GW recs a lower weight. I would probably rate Open Phil factory farming grants at maybe 0.3-0.5, which is an order of magnitude higher than GiveWell top charities.

When I change the GW rec weighting from 0.1 to 0.05, the weighted donations drop by about $0.1 per $1 to REG. That's enough to make REG look a little weaker, although not enough to make me want to give to an object-level charity instead.
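As a rough illustration of this weighted-donation arithmetic: the dollars-raised-per-cause figures below are hypothetical placeholders, not numbers from the post; only the 0.1 → 0.05 weight change is taken from the discussion above.

```python
# Hypothetical dollars raised for each cause per $1 given to REG.
# These amounts are illustrative placeholders, not actual REG figures.
raised_per_dollar = {
    "givewell_recs": 2.0,  # assumed: $2 to GiveWell top charities per $1 to REG
    "ace_recs": 0.5,       # assumed
    "miri": 1.0,           # assumed
}

def weighted_value(weights):
    """Total impact-weighted dollars moved per $1 given to REG."""
    return sum(raised_per_dollar[cause] * w for cause, w in weights.items())

before = weighted_value({"givewell_recs": 0.10, "ace_recs": 0.2, "miri": 1.0})
after  = weighted_value({"givewell_recs": 0.05, "ace_recs": 0.2, "miri": 1.0})

# Halving the GiveWell weight changes the total by 0.05 x $2 = $0.10 per $1,
# matching the "about $0.1 per $1 to REG" figure only under the assumed
# $2-per-$1 fundraising multiplier above.
print(round(before - after, 2))
```

Under these assumptions the change is linear in both the weight delta and the dollars raised, so a different fundraising multiplier would scale the drop proportionally.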

EDIT 2: Actually, I'm not sure I should downweight GW recs from 0.1 to 0.05, because I don't know that I have strong enough outside-the-argument confidence that MIRI is 20x better than AMF in expectation. This sort of thing is really hard to put explicit numbers on, since my brain can't really tell the difference between MIRI being 10x better and 100x better in expectation; my subjective perceptions of those two possibilities feel about the same.

Comment author: Carl_Shulman 17 September 2015 04:09:38AM 3 points

Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

Some think that we have special obligations to those with whom we have relationships or reciprocity, or whom we have harmed or been benefited by; others adopt person-affecting views, although those are hard to make coherent. Still others adopt value holism of various kinds, caring about other features of populations such as the average and the distribution, although for many parameterizations and empirical beliefs those still favor a strong focus on the long run.

(EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

Right, sounds good.

Comment author: MichaelDickens 17 September 2015 04:33:30AM 2 points

I find all those views really implausible, so I don't do anything to account for them. On the other hand, you seem to have a better grasp of utilitarianism than I do and yet you're less confident of its truth, which makes me think I should be less confident too.

Comment author: MichaelDickens 17 September 2015 04:36:17AM 1 point

On his old blog, Scott talks about how there are some people who can argue circles around him on certain subjects. I feel like you can do this to me on cause prioritization: no matter what position I take, you can poke tons of holes in it and convince me that I'm wrong.

Comment author: RyanCarey 17 September 2015 01:41:00PM 4 points

The fact that Carl points out flaws with arguments on all sides makes him more trustworthy!