Comment author: TruePath 26 June 2017 07:33:29AM 2 points [-]

I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitarian.

I don't think anyone is actually willing to endorse choices that relieve the headache of an existing person at the cost of bringing into the world an infant who will be tortured extensively for all time (provided no one currently existing will see it and be made sad).

More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitarianism to be true there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Theseus's-ship-style worries is to shrug and say there isn't any fact of the matter, but the presentist can't take that line because, for them, there are huge moral implications to where we draw the line.

Moreover, both these views face serious puzzles about what to say about when an individual exists. Is it when they actually generate qualia (if not, you risk saying that the fact they will exist in the future means they exist now)? How do we even know when that happens?

Comment author: MichaelPlant 26 June 2017 10:12:16AM 1 point [-]

I'm probably a necessitarian, and many (most?) people implicitly hold person-affecting views. However, that's beside the point. I'm neither defending nor evaluating person-affecting views, or indeed any position in population axiology. As I mentioned, and as is widely accepted by philosophers, all the views in population ethics have weird outcomes.

FWIW, and this is unrelated to anything said above, nothing about person-affecting views needs to rely on personal identity. The entity of concern can just be something that is able to feel happiness or unhappiness. This is typically the same line total utilitarians take. What person-affectors and totalists disagree about is whether (for one reason or another) creating new entities is good.

In fact, all the problems you've raised for person-affecting views also arise for totalists. To see this, let's imagine a scenario where a mad scientist is creating a brain inside a body, where the body is being shocked with electricity. Suppose he grows it to a certain size, takes bits out, shrinks it, grows it again, etc. Now the totalist needs to take a stance on how much harm the scientist is doing and draw a line somewhere. The totalist and the person-affector can draw the line in the same place, wherever that is.

Whatever puzzles qualia pose for person-affecting views also apply to totalism (at least, to the part of morality concerned with subjective experience).

Comment author: MichaelPlant 14 June 2017 10:45:21AM 0 points [-]

I agree the writing is scattered. Task 1) is to get the writing on a given topic into a single place. That still leaves task 2): getting all those collated writings into a single place.

On 2), it strikes me it would be good if CEA compiled a list of EA-relevant resources. An alternative would be someone creating an edited collection of the best recent EA work on a range of topics. Or, if we have an academic EA Global, treating that like a normal academic conference and publishing the presented papers.

Comment author: MichaelPlant 14 June 2017 10:16:36AM 2 points [-]

This is a purposefully vague warning for reasons that should not need to be said. Unfortunately, this forces this post to discuss these issues at a higher level of generality than might be ideal, and so there is definitely merit to the claim that this post only deals in generalisations. For this reason, this post should be understood more as an outline of an argument than as an actual crystalized argument

I found this post unhelpful and this part of it particularly so. Your overall point - "don't concede too much on important topics" - seems reasonable, but as I don't know what topics you're referring to, or what would count as 'too much' on those, I can't learn anything.

More generally, I find EAs who post things of the flavour "we shouldn't do X, but I can't tell you what I mean by X for secret reasons" annoying, alienating and culty, and I wish people wouldn't do it.

Comment author: MichaelPlant 09 June 2017 11:52:17AM 4 points [-]

This is all very exciting and I'm glad to see this is happening.

A couple of comments.

  1. The deadline for this is only three weeks, which seems quite tight.

  2. Could you give examples of the types of things you wouldn't fund or are very unlikely to fund? That would avoid you getting lots of applications you don't want, as well as people spending time submitting applications that will get rejected. For instance, would/could CEA provide seed funding for altruistic for-profit organisations, like start-ups? Asking for a friend...

Comment author: Benito 04 June 2017 09:26:01PM *  1 point [-]

Yup! I've always seen 'animals v poverty v xrisk' not as three random areas, but three optimal areas given different philosophies:

poverty = only short term

animals = all conscious suffering matters + only short term

xrisk = long term matters

I'd be happy to see other philosophical positions considered.

Comment author: MichaelPlant 04 June 2017 10:31:32PM 3 points [-]

Mostly agree, but you need a couple more assumptions to make that work.

poverty = person-affecting view of population ethics or pure time discounting + belief that poverty relief is the best way to increase well-being (I'm not sure it is; see my old forum post).

Also, you could split poverty (things like Give Directly) from global health (AMF, SCI, etc.). You probably need a person-affecting view or pure time discounting if you support health over x-risk, unless you're just really sceptical about x-risks.

animals = I think animals are only a priority if you believe in an impersonal population ethic like totalism (maximise happiness over the history of the universe, hence creating happy life is good), and you either do pure time discounting or you're suffering-focused (i.e. unhappiness counts for more than happiness)

If you're a straightforward presentist (holding a person-affecting population ethic on which only presently existing things count), which is what you might mean by 'short term', you probably shouldn't focus on animals. Why? Animal welfare reforms don't benefit the presently existing animals but the next generation of animals, who don't count on presentism as they don't presently exist.

Comment author: Kerry_Vaughan 02 June 2017 05:02:58PM 1 point [-]

Hey Michael, great ideas. I'd like to see all of these as well. My concern would just be whether there are charities available to fund in the areas. Do you have some potential grant recipients for these funds in mind?

Comment author: MichaelPlant 04 June 2017 04:13:24PM *  1 point [-]

Hello Kerry. Building on what Michael Dickens said, I now think the funds need to be more tightly specified before we can pick the most promising recipients within each. For instance, imagine we have a 'systemic change' fund: presumably a totalist systemic change fund would be different from a person-affecting, life-improving one. It's possible they might consider the same things top targets, but more work would be required to show that.

Narrowing down then:

Suppose we had a life-improving fund using safe bets. I think charities like Strong Minds and Basic Needs (mental health orgs) are good contenders, although I can't comment on their organisational efficiency.

Suppose we have a life-improving fund doing systemic change. I assume this would be trying to bring about political change via government policies, either at the domestic or international level. I can think of a few areas that look good, such as mental health policy, increasing access to pain relief in developing countries, and international drug policy reform. However, I can't name and exalt particular orgs as I haven't narrowed down to what I think the most promising sub-causes are yet.

Suppose we had a life-improving moonshots fund. If this is going to be different from the one above, I imagine it would be looking for start-ups, maybe a bit like EA Ventures did. I can't think of anything relevant to suggest here apart from the start-up I work on (the quality of which I can't hope to be objective about). Perhaps this fund could be looking at starting new charities too, rather than looking to fund existing ones.

I don't think not knowing who you'd give money to in advance is a reason not to pursue this further. For instance, I would consider donating to some type of moonshots fund precisely because I had no idea where the money would go and I'd like to see someone (else) try to figure it out. Once they'd made their choices, we could build on their analysis and learn stuff.

Comment author: MichaelDickens 03 June 2017 06:46:47AM 4 points [-]

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Although having all possible combinations along just these axes would require 2^4 = 16 funds, so in practice this won't work exactly as I've described.

Comment author: MichaelPlant 03 June 2017 05:17:54PM 0 points [-]

Good point on the axes. I think we would, in practice, get less than 16 funds for a couple of reasons.

  1. It's hard to see how some funds would, in practice, differ. For instance, is AI safety a moonshot or a safe bet if we're thinking about the future?

  2. The life-saving vs life-improving point only seems relevant if you've already signed up to a person-affecting view. Talking about 'saving lives' of people in the far future is a bit strange (although you could distinguish between a far future fund that tried to reduce X-risk vs one that invested in ways to make future people happier, such as genetic engineering).

Comment author: Halstead 01 June 2017 05:46:54PM 3 points [-]

Brief note: one important norm of considerateness that is easy to neglect is not talking about people behind their back. I think there are strong consequentialist reasons not to do this: it makes you feel bad, it's hard to remain authentic when you next see that person, and it makes others think a lot less of you.

Comment author: MichaelPlant 02 June 2017 12:22:43PM 3 points [-]

I'm not sure I agree. There's an argument that gossip is potentially useful. Here's a quote from this paper:

Gossip also has implications for the overall functioning of the group in which individuals are embedded. For example, despite its harmful consequences for individuals, negative gossip might have beneficial consequences for group outcomes. Empirical studies have shown that negative gossip is used to socially control and sanction uncooperative behavior within groups (De Pinninck et al., 2008; Elias and Scotson, 1965; Merry, 1984). Individuals often cooperate and comply with group norms simply because they fear reputation-damaging gossip and subsequent ostracism.

Comment author: MichaelPlant 02 June 2017 12:13:34PM 7 points [-]

Thanks for this Kerry, very much appreciate the update.

Three funds I'd like to see:

  1. The 'life-improving' or 'quality of life'-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I'd be very enthusiastic to help whoever the fund manager was.

  2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don't take systemic change seriously); another part is that I'd really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

  3. A 'moonshots' fund that supported high-risk, potentially high-reward projects. For reasons similar to 2 I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA's openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely that new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume people who would donate to that wouldn't have been donating to, say, the health fund already. They'd probably be supporting green charities instead.

Comment author: Owen_Cotton-Barratt 01 June 2017 09:18:22AM 4 points [-]

I think you're right that there's a failure mode of not asking people for things. I don't think that not-asking is in general the more considerate action, though -- often people would prefer to be given the opportunity to help (particularly if it feels like an opportunity rather than a demand).

I suppose the general point is: avoid the trap of overly-narrow interpretations of considerateness (just like it was good to avoid the trap of overly-narrow interpretations of consequences of actions).

Comment author: MichaelPlant 01 June 2017 10:23:06AM 1 point [-]

I agree. In which case it's possibly worth pointing out that one part of considerateness is giving people the opportunity to help you, which they may well want to do anyway.
