Comment author: MichaelPlant 14 June 2017 10:45:21AM 0 points [-]

I agree on the writing being scattered. Task 1) is: get the writing on a given topic into a single place. That still leaves task 2) get all those collated writings into a single place.

On 2) it strikes me it would be good if CEA compiled a list of EA-relevant resources. An alternative would be someone creating an edited collection of the best recent EA work on a range of topics. Or, if we have an academic EA Global, treating that like a normal academic conference and publishing the presented papers.

Comment author: MichaelPlant 14 June 2017 10:16:36AM 2 points [-]

This is a purposefully vague warning for reasons that should not need to be said. Unfortunately, this forces this post to discuss these issues at a higher level of generality than might be ideal, and so there is definitely merit to the claim that this post only deals in generalisations. For this reason, this post should be understood more as an outline of an argument than as an actual crystalized argument

I found this post unhelpful and this part of it particularly so. Your overall point - "don't concede too much on important topics" - seems reasonable, but as I don't know what topics you're referring to, or what would count as 'too much' on those, I can't learn anything.

More generally, I find EAs who post things of the flavour "we shouldn't do X, but I can't tell you what I mean by X for secret reasons" annoying, alienating and culty and wish people wouldn't do it.

Comment author: MichaelPlant 09 June 2017 11:52:17AM 4 points [-]

This is all very exciting and I'm glad to see this is happening.

A couple of comments.

  1. The deadline for this is only three weeks, which seems quite tight.

  2. Could you give examples of the types of things you wouldn't fund or are very unlikely to fund? That would avoid you getting lots of applications you don't want, as well as people spending time submitting applications that will get rejected. For instance, would/could CEA provide seed funding for altruistic for-profit organisations, like start-ups? Asking for a friend...

Comment author: Benito 04 June 2017 09:26:01PM *  1 point [-]

Yup! I've always seen 'animals v poverty v xrisk' not as three random areas, but three optimal areas given different philosophies:

poverty = only short term

animals = all conscious suffering matters + only short term

xrisk = long term matters

I'd be happy to see other philosophical positions considered.

Comment author: MichaelPlant 04 June 2017 10:31:32PM 3 points [-]

Mostly agree, but you need a couple more assumptions to make that work.

poverty = person-affecting view of population ethics or pure time discounting + belief that poverty relief is the best way to increase well-being (I'm not sure it is; see my old forum post).

Also, you could split poverty (things like Give Directly) from global health (AMF, SCI, etc.). You probably need a person-affecting view or pure time discounting if you support health over x-risk, unless you're just really sceptical about x-risks.

animals = I think animals are only a priority if you believe in an impersonal population ethic like totalism (maximise happiness over the history of the universe, hence creating happy life is good), and you either do pure time discounting or you're suffering-focused (i.e. unhappiness counts more than happiness).

If you're a straightforward presentist (holding a person-affecting population ethic on which only presently existing things count), which is what you might mean by 'short term', you probably shouldn't focus on animals. Why? Animal welfare reforms don't benefit the presently existing animals but the next generation of animals, who don't count on presentism as they don't presently exist.

Comment author: Kerry_Vaughan 02 June 2017 05:02:58PM 1 point [-]

Hey Michael, great ideas. I'd like to see all of these as well. My concern would just be whether there are charities available to fund in the areas. Do you have some potential grant recipients for these funds in mind?

Comment author: MichaelPlant 04 June 2017 04:13:24PM *  1 point [-]

Hello Kerry. Building on what Michael Dickens said, I now think the funds need to be more tightly specified before we can pick the most promising recipients within each. For instance, imagine we have a 'systemic change' fund: presumably a totalist systemic change fund would be different from a person-affecting, life-improving one. It's possible they might consider the same things top targets, but more work would be required to show that.

Narrowing down then:

Suppose we had a life-improving fund using safe bets. I think charities like Strong Minds and Basic Needs (mental health orgs) are good contenders, although I can't comment on their organisational efficiency.

Suppose we had a life-improving fund doing systemic change. I assume this would be trying to bring about political change via government policies, either at the domestic or international level. I can think of a few areas that look good, such as mental health policy, increasing access to pain relief in developing countries, and international drug policy reform. However, I can't name and exalt particular orgs, as I haven't yet narrowed down what I think the most promising sub-causes are.

Suppose we had a life-saving moonshots fund. If this is going to be different from the one above, I imagine this would be looking for start-ups, maybe a bit like EA Ventures did. I can't think of anything relevant to suggest here apart from the start-up I work on (the quality of which I can't hope to be objective about). Perhaps this fund could be looking at starting new charities too, rather than only funding existing ones.

I don't think not knowing who you'd give money to in advance is a reason not to pursue this further. For instance, I would consider donating to some type of moonshots fund precisely because I have no idea where the money would go and I'd like to see someone (else) try to figure it out. Once they'd made their choices, we could build on their analysis and learn stuff.

Comment author: MichaelDickens  (EA Profile) 03 June 2017 06:46:47AM 4 points [-]

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Having all possible combinations just along these axes would require 2^4 = 16 funds, though, so in practice this won't work exactly as I've described.

Comment author: MichaelPlant 03 June 2017 05:17:54PM 0 points [-]

Good point on the axes. I think we would, in practice, get less than 16 funds for a couple of reasons.

  1. It's hard to see how some funds would, in practice, differ. For instance, is AI safety a moonshot or a safe bet if we're thinking about the future?

  2. The life-saving vs life-improving point only seems relevant if you've already signed up to a person-affecting view. Talking about 'saving lives' of people in the far future is a bit strange (although you could distinguish between a far future fund that tried to reduce X-risk vs one that invested in ways to make future people happier, such as genetic engineering).

Comment author: Halstead 01 June 2017 05:46:54PM 3 points [-]

Brief note: one important norm of considerateness which it is easy to neglect is not talking about people behind their backs. I think there are strong consequentialist reasons not to do this: it makes you feel bad, it's hard to remain authentic when you next see that person, and it makes others think a lot less of you.

Comment author: MichaelPlant 02 June 2017 12:22:43PM 3 points [-]

I'm not sure I agree. There's an argument that gossip is potentially useful. Here's a quote from this paper:

Gossip also has implications for the overall functioning of the group in which individuals are embedded. For example, despite its harmful consequences for individuals, negative gossip might have beneficial consequences for group outcomes. Empirical studies have shown that negative gossip is used to socially control and sanction uncooperative behavior within groups (De Pinninck et al., 2008; Elias and Scotson, 1965; Merry, 1984). Individuals often cooperate and comply with group norms simply because they fear reputation-damaging gossip and subsequent ostracism.

Comment author: MichaelPlant 02 June 2017 12:13:34PM 6 points [-]

Thanks for this Kerry, very much appreciate the update.

Three funds I'd like to see:

  1. The 'life-improving' or 'quality of life'-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I'd be very enthusiastic to help whoever the fund manager was.

  2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don't take systemic change seriously); another part is that I'd really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

  3. A 'moonshots' fund that supported high-risk, potentially high-reward projects. For reasons similar to 2 I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA's openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely that new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume the people who would donate to that wouldn't have been donating to, say, the health fund already; they'd probably have been supporting green charities instead.

Comment author: Owen_Cotton-Barratt 01 June 2017 09:18:22AM 4 points [-]

I think you're right that there's a failure mode of not asking people for things. I don't think that not-asking is in general the more considerate action, though -- often people would prefer to be given the opportunity to help (particularly if it feels like an opportunity rather than a demand).

I suppose the general point is: avoid the trap of overly-narrow interpretations of considerateness (just like it was good to avoid the trap of overly-narrow interpretations of consequences of actions).

Comment author: MichaelPlant 01 June 2017 10:23:06AM 1 point [-]

I agree. In which case it's possibly worth pointing out one part of considerateness is giving people the opportunity to help you, which they may well want to do anyway.

Comment author: MichaelPlant 01 June 2017 12:01:06AM 9 points [-]

Thanks for this. I think I strongly agree with what you've said. I've often noticed, or got the impression, that lots of EAs seem to be quite interested in pursuing their own projects and don't help each other very much. I worry this results in an altruistic tragedy of the commons: it would be better if people helped each other, but instead we each choose to do our own good in our own way, resulting in less good done overall. Now I think of it, I've probably done this myself.

The real challenge, as you noted, is the following:

Being considerate often makes others happier to interact with you. That is normally good, but in some circumstances may not be desirable. If people find you extremely helpful when they ask you about frivolous matters, they will be incentivized to keep asking you about such matters. If you would prefer them not to, you should not be quite so helpful.

This seems to be quite a common problem, at least in academia. VIPs (very important people) will often deliberately make themselves unavailable so they have time for their own projects. Presumably, this has some reciprocal costs to the VIP too: if they had helped you, you would be more inclined to help them in future.

Relatedly, suppose people accept more considerate norms and so are reluctant to bother some VIP in case it's annoying to the VIP. We can imagine this backfiring. Take an extreme scenario where considerate people don't ask VIPs (or indeed anyone else) for help. This means people don't get help from the VIPs, and VIPs only get requests from inconsiderate people. Presuming these VIPs do grant some requests for help, and the requests from considerate people would have done more good, this is now a worse situation overall. Extreme considerateness, call it 'meekness', seems bad.

It strikes me that it would be important to develop some community norms for navigating this difficulty. Perhaps people asking for help should be encouraged to do so, but to ask only once or twice and to leave the other person plenty of room to turn the request down. Perhaps recipients of requests should make a habit of replying to them, while being polite and honest about their current capacity to help.
