
MichaelDickens comments on Discussion: Adding New Funds to EA Funds - Effective Altruism Forum

Comment author: MichaelDickens 03 June 2017 06:46:47AM 4 points

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Although having all possible combinations along just these axes would require 16 funds, so in practice this won't work exactly as I've described.

Comment author: Kerry_Vaughan 07 June 2017 04:02:29PM 2 points

> I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense).

Great idea. This makes sense to me.

Comment author: Benito 04 June 2017 09:26:01PM 1 point

Yup! I've always seen 'animals v poverty v xrisk' not as three random areas, but as three optimal areas given different philosophies:

poverty = only short term

animals = all conscious suffering matters + only short term

xrisk = long term matters

I'd be happy to see other philosophical positions considered.

Comment author: MichaelPlant 04 June 2017 10:31:32PM 3 points

Mostly agree, but you need a couple more assumptions to make that work.

poverty = person-affecting view of population ethics or pure time discounting + belief that poverty relief is the best way to increase well-being (I'm not sure it is; see my old forum post).

Also, you could split poverty (things like Give Directly) from global health (AMF, SCI, etc.). You probably need a person-affecting view or pure time discounting if you support health over x-risk, unless you're just really sceptical about x-risks.

animals = I think animals are only a priority if you believe in an impersonal population ethic like totalism (maximise happiness over the history of the universe, hence creating happy life is good), and you either do pure time discounting or you're suffering-focused (i.e. unhappiness counts for more than happiness)

If you're a straightforward presentist (holding a person-affecting population ethic on which only presently existing beings count), which is what you might mean by 'short term', you probably shouldn't focus on animals. Why? Animal welfare reforms don't benefit the presently existing animals, but the next generation of animals, who don't count on presentism because they don't presently exist.

Comment author: MichaelPlant 03 June 2017 05:17:54PM 0 points

Good point on the axes. I think we would, in practice, get fewer than 16 funds for a couple of reasons.

  1. It's hard to see how some funds would, in practice, differ. For instance, is AI safety a moonshot or a safe bet if we're thinking about the future?

  2. The life-saving vs. life-improving point only seems relevant if you've already signed up to a person-affecting view. Talking about 'saving lives' of people in the far future is a bit strange (although you could distinguish between a far-future fund that tried to reduce x-risk and one that invested in ways to make future people happier, such as genetic engineering).