
MichaelPlant comments on A definition of effective altruism - Effective Altruism Forum


Comment author: MichaelPlant 20 March 2018 09:50:23PM *  1 point [-]

The thing I find confusing about what Will says is

effective altruism is the project of using evidence and reason to figure out how to benefit others

I draw attention to 'benefit others'. Two of EA's main causes are farm animal welfare and reducing risks of human extinction. The former is about causing happy animals to exist rather than miserable ones, and the latter is about ensuring future humans exist (and trying to improve their welfare). But it doesn't really make sense to say that you can benefit someone by causing them to exist. It's certainly bizarre to say it's better for someone to exist than not to exist, because if the person doesn't exist there's no object to attach any predicates to. There's been a recent move by some philosophers, such as McMahan and Parfit, to say it can be good (without being better) for someone to exist, but that just seems like philosophical sleight of hand.

A great many EA philosophers, including I think Singer, MacAskill, Greaves and Ord, either are totalists or very sympathetic to totalism. Totalism is the view that the best outcome is the one with the largest sum of lifetime well-being of all people - past, present and future - and it's known as an impersonal view in population ethics. Outcomes are not deemed good, on impersonal views, because they are good for anyone, or because they benefit anyone; they are good because there is more of the thing which is valuable, namely welfare.

So there's something fishy about saying EA is trying to benefit others when many EA activities, as mentioned, don't benefit anyone, and many EAs think we shouldn't, strictly, be trying to benefit people so much as realise more impersonal value. It would make more sense to replace 'benefit others as much as possible' with 'do as much good as possible'.

Comment author: Halstead 21 March 2018 10:27:39AM 4 points [-]

Does it harm someone to bring them into existence with a life of intense suffering?

Comment author: MichaelPlant 21 March 2018 11:38:44PM 0 points [-]

No. It might be impersonally bad though.

Comment author: Halstead 22 March 2018 10:09:38AM 0 points [-]

On your view, is it good for someone to prevent them from dying? Doesn't the same argument apply - if the person doesn't exist (is dead) there's no object to attach any predicates to.

Comment author: MichaelPlant 22 March 2018 11:34:23AM 0 points [-]

No, I also don't think it makes sense to say death is good or bad for people. Hence it's not true to say you benefit someone by keeping them alive. Given most people do want to say there's something good about keeping people alive, it makes sense to adopt an impersonal locution.

I'm not making an argument about what the correct account of ethics is here, I'm just making a point about the correct use of language. Will's definition can't be capturing what he means and is thus misleading, so 'do the most good' is better than 'benefit others'.

Comment author: Halstead 22 March 2018 03:33:52PM *  0 points [-]

In line with the above, one could stick with the EA definition and, when asked to gloss it, say that different people understand benefitting others in different ways: some in such a way that creating new people etc. counts as a benefit, others not. One downside of that is that it excludes the logically possible option of [your account of benefitting others; morality isn't all about benefitting others, sometimes it's about impersonal good].

Comment author: Halstead 22 March 2018 02:20:04PM 0 points [-]

On your account, as you say, bringing people into a life of suffering doesn't harm them and preventing someone from dying doesn't benefit them. So, you could also have said "lots of EA activities are devoted to preventing people from dying and preventing lives of suffering, but neither activity benefits anyone, so the definition is wrong". This is a harder sell, and it seems like you're just criticising the definition of EA on the basis of a weird account of the meaning of 'benefitting others'.

I would guess that the vast majority of people think that preventing a future life of suffering and saving lives both benefit somebody. If so, the vast majority of people would be committed to something which denies your criticism of the definition of EA.

Comment author: MichaelPlant 22 March 2018 05:12:13PM 0 points [-]

weird account of the meaning of 'benefitting others'.

The account might be uncommon in ordinary language, but most philosophers accept that creating lives doesn't benefit the created person. I'm at least being consistent, and I don't think that consistency is objectionable. Calling the view weird is unhelpful.

But suppose people typically think it's odd to claim you're benefiting someone by creating them. Then the stated definition of what EA is about will be at least somewhat misleading to them when you explain EA in greater detail. Consistent with other things I've written on this forum, I think EA should take avoiding being misleading very seriously.

I'm not claiming this is a massive point, it just stuck out to me.

Comment author: Halstead 22 March 2018 05:58:27PM 0 points [-]

Agreed, weirdness accusation retracted.

I suppose there are two ways of securing neutrality - letting people pick their own meaning of 'doing good', and letting people pick their own meaning of 'benefiting others'

Comment author: Jamie_Harris 20 March 2018 11:14:32PM 3 points [-]

All points make sense. I find that when introducing the idea, however, people seem slightly confused by the idea of "doing as much good as possible" (I tend to use nearly identical phrasing). I think the idea seems too abstract to them, and I feel compelled to give some kind of more concrete example to help explain. Although I haven't really tried it out as an alternative, the idea of EA aiming to "benefit others" seems that it might be slightly clearer / more imaginable?

If you agree, this then raises the question of whether we should distinguish a definition of EA for "academic" and "outreach" / explanatory purposes. I'd argue that we should probably avoid separating a definition out for different contexts, so might need to keep thinking about how to word a definition which is clear, but also allows for nuance?

Comment author: arikagan 06 June 2018 01:06:38AM 1 point [-]

I'd agree with being hesitant to distinguish definitions of EA for "academic" and "outreach" purposes. It seems like that's asking for someone to use the wrong definition in the wrong context.

Comment author: Sanjay 21 March 2018 12:49:11PM 0 points [-]

Really? "doing as much good as possible" is confusing people? I tend to use that language, and I haven't noticed people getting confused (maybe I haven't been observant enough!)

Comment author: adamaero  (EA Profile) 22 March 2018 12:36:24AM 1 point [-]

Aren't you going further from the definition though?

Any short definition of EA I find abstract on its own. Most people I encounter assume it's about doing as many small good things as possible, or worse, that it's a political philosophy (red/blue thinking). It's only when I give examples of myself, or ask what their cause interests could be, that they slowly break away from the abstract dictionary definitions.

Comment author: Jamie_Harris 02 April 2018 05:45:20PM 0 points [-]

Maybe "confusing" was the wrong word. But I tend to get the sense that people just have no idea what the concept means in practice when I say that, because it's so vague / abstract. I'm guessing that people are thinking along the lines of "what does he mean by 'doing good'? Surely he means something else / something more specific?" But I might just be misreading people slightly too.

Comment author: kbog  (EA Profile) 24 March 2018 09:40:40PM 0 points [-]

It's not confusing, but it's vague.

Comment author: MichaelPlant 21 March 2018 11:39:26PM 0 points [-]

maybe I haven't been observant enough

I've often observed your lack of observance :)

Comment author: kbog  (EA Profile) 24 March 2018 09:34:53PM *  1 point [-]

Literally everything that doesn't benefit existing beings fails to "benefit others", under your view. E.g. banning Agent Orange is not something that "benefits others". But banning Agent Orange, and lots of other things that benefit future generations, are regarded as benefiting others. This doesn't depend on the totalist view, it's largely uncontroversial in philosophy, and it's commonly assumed in the colloquial sense of benefiting others.

Philosophical sleight of hand would be to deny that we are benefiting others, something that colloquial and common sense views would affirm, just because of a technical philosophical point.

Comment author: stijnbruers 15 April 2018 07:01:19PM *  0 points [-]

I suggest leaving it up to the other person to decide whether they are benefitted. For example: I have a happy, positive life, so I claim that my parents benefitted me when they caused my existence. So there does exist someone (me, now, in this situation) who claims to be benefitted by the choice of someone else (my parents 38 years ago), even if in the counterfactual I do not exist. So my parents made a choice for a situation where there is a bit more benefit added to the total benefit. If you disagree, in the sense that you don't think you were benefitted by your parents when they chose your existence (even if you are as happy as I am), then that means your parents did not create an extra bit of benefit and you were not benefitted. More on this here: https://stijnbruers.wordpress.com/2018/02/24/variable-critical-level-utilitarianism-as-the-solution-to-population-ethics/

Comment author: Tuukka_Sarvi 21 March 2018 01:06:35PM 0 points [-]

Good point. The choice of moral stance (i.e. totalist, person-affecting, "moral uncertaintist", etc.) is the biggest factor behind any preference ordering over allocations of resources and courses of action. Thus it is possible that further rigorous study of ethics, if lesser uncertainty between the competing views or greater agreement among scholars is achieved, could bring very high returns in terms of impact.

Comment author: Jan_Kulveit 20 March 2018 11:29:09PM 0 points [-]

I agree it may seem to point toward some "person-affecting views" which many EAs consider to be wrong.

Possibly the aim was to describe the motivation as altruistic?

The disadvantage of 'do as much good as possible' may be that it would associate EA with utilitarianism even more than it already is.

I think of EA as a movement trying to answer the question "how to change the world for the better most effectively with limited resources" in a rational way, and to act on the answer. That seems to me a tiny bit more open than 'do as much good as possible', as it requires just some sort of comparison of world-states, while 'as much good as possible' seems to depend on more complex structure.