
MichaelDickens comments on My Cause Selection: Michael Dickens - Effective Altruism Forum

Comment author: MichaelDickens, 17 September 2015 03:47:12AM, 0 points

I will make a few edits to the document based on your suggestions, thanks.

I have a few more points worth discussing that I didn't want to put into the doc, so I'll comment here:

> Some context for my view: while I think there is a strong case for this in terms of additive total utilitarianism, I think the dominance is far weaker when one takes value pluralism and normative uncertainty into account.

Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

On the flow-through effects of global poverty versus animal charities: flow-through effects probably do outweigh short-term effects, which means economic growth, etc. may be more impactful than preventing factory farming. But flow-through effects are hard to predict. What I meant is that effective factory farming interventions probably have much better demonstrable short-term effects than human-focused interventions. Actions that affect wild animals probably have much bigger effects still, but there we run into the problem of not knowing whether our actions are net positive or negative. I'd certainly love to see robust evidence that doing X will have a huge positive effect on wild animals so I can seriously consider supporting X.

> Shouldn't the same apply to your claims earlier in the post about the future being 1000+ times more important than present beings, and ACE recommendations being 100-1000+ times better than GiveWell Classic recommendations?

I didn't say ACE recommendations are two to three orders of magnitude better than GiveWell Classic; I said Open Phil's factory farming grant is plausibly two to three orders of magnitude better than GV's grant to GiveDirectly. There are three distinctions here. First, I'm fairly uncertain about this. Second, I expect Open Phil's grant to be based on more robust evidence than ACE-recommended charities are, so I can feel more confident about its impact. Third, GD has similar strength of evidence to AMF but is probably about an order of magnitude less impactful, so the difference between a factory farming grant and AMF may be more like one to two orders of magnitude.
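To make the arithmetic explicit, here's a minimal sketch of the order-of-magnitude chaining; the numbers are just the ranges stated above, normalized so GiveDirectly = 1, not figures from any analysis:

```python
# Hypothetical impact multipliers with GiveDirectly (GD) normalized to 1.
ff_low, ff_high = 100, 1000  # Open Phil factory farming grant vs. GD
amf = 10                     # AMF vs. GD (similar evidence, ~10x impact)

# Dividing out GD compares the factory farming grant with AMF directly.
print(ff_low / amf, ff_high / amf)  # 10.0 100.0 -- one to two orders of magnitude
```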

I weighted ACE recs as 2x as impactful as GiveWell recs. Open Phil hasn't produced anything on factory farming yet, but my best guess is that its results will have an expected value of maybe 2-5x that of current ACE recs (although I expect the EV of ACE recs to improve if ACE gets substantially more funding). This is largely because a lot of ACE top charities' activities are probably useless: all their activities look reasonable, but the evidence supporting them is weak enough that it's reasonable to expect some will turn out not to be effective.

(EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

After further consideration, I'm thinking I rated GiveWell recs too highly; their weight should be more like 0.05 instead of 0.1. Most of the money REG raised for GiveWell top charities went to AMF, although this might shift more toward GiveDirectly in the future, in which case I should give GW recs a lower weight still. I would probably rate Open Phil factory farming grants at maybe 0.3-0.5, which is roughly an order of magnitude higher than GiveWell top charities.

When I change the GW rec weighting from 0.1 to 0.05, the weighted donations drop by about $0.10 per $1 given to REG. That's enough to make REG look a little weaker, although not enough to make me want to give to an object-level charity instead.
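For concreteness, here's a minimal sketch of the weighted-donation arithmetic. The weights (0.1, 0.05, 0.3-0.5) are the ones discussed above; the $2.00 of GiveWell-rec money raised per $1 to REG is a hypothetical figure chosen to reproduce the ~$0.10 drop, not a number from the post:

```python
gw_money_per_dollar = 2.00  # hypothetical: GW-rec dollars raised per $1 to REG

old_weight, new_weight = 0.1, 0.05
drop = gw_money_per_dollar * (old_weight - new_weight)
print(f"Drop in weighted donations per $1 to REG: ${drop:.2f}")  # $0.10

# Sanity check on the Open Phil weighting: 0.3-0.5 vs. 0.05 is
# roughly 6-10x, i.e. about an order of magnitude higher.
print(0.3 / 0.05, 0.5 / 0.05)  # 6.0 10.0
```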

EDIT 2: Actually, I'm not sure I should downweight GW recs from 0.1 to 0.05, because I don't know that I have strong enough outside-the-argument confidence that MIRI is 20x better than AMF in expectation. This sort of thing is really hard to put explicit numbers on, since my brain can't really tell the difference between MIRI being 10x better and 100x better in expectation; my subjective credences in those two scenarios feel about the same.
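As an illustration of why this log-scale fuzziness matters, here's a sketch with made-up credences: if "10x better" and "100x better" feel equally likely, the expected multiplier is pulled most of the way toward the larger figure, so the choice of explicit number dominates the result even when the scenarios are subjectively indistinguishable.

```python
# Hypothetical 50/50 credences standing in for "my brain can't tell
# these apart"; not a claim about the actual probabilities.
scenarios = {10: 0.5, 100: 0.5}  # multiplier (MIRI vs. AMF) -> credence

expected_multiplier = sum(m * p for m, p in scenarios.items())
print(expected_multiplier)  # 55.0, dominated by the 100x branch
```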

Comment author: Carl_Shulman, 17 September 2015 04:09:38AM, 3 points

> Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

Others think that we have special obligations to those with whom we have relationships or reciprocity, or whom we have harmed or been benefited by; some adopt person-affecting views, although those are hard to make coherent. Still others adopt value holism of various kinds, caring about other features of populations, like the average and the distribution, although for many parameterizations and empirical beliefs those still favor a strong focus on the long run.

> (EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

Right, sounds good.

Comment author: MichaelDickens, 17 September 2015 04:33:30AM, 2 points

I find all those views really implausible, so I don't do anything to account for them. On the other hand, you seem to have a better grasp of utilitarianism than I do, yet you're less confident of its truth, which makes me think I should be less confident too.

Comment author: MichaelDickens, 17 September 2015 04:36:17AM, 1 point

On his old blog, Scott talks about how there are some people who can argue circles around him on certain subjects. I feel like you can do this to me on cause prioritization: no matter what position I take, you can poke tons of holes in it and convince me that I'm wrong.

Comment author: RyanCarey, 17 September 2015 01:41:00PM, 4 points

The fact that Carl points out flaws with arguments on all sides makes him more trustworthy!