Comment author: Halstead 22 March 2018 02:20:04PM 0 points [-]

On your account, as you say, bringing people into a life of suffering doesn't harm them and preventing someone from dying doesn't benefit them. So, you could also have said "lots of EA activities are devoted to preventing people from dying and preventing lives of suffering, but neither activity benefits anyone, so the definition is wrong". This is a harder sell, and it seems like you're just criticising the definition of EA on the basis of a weird account of the meaning of 'benefitting others'.

I would guess that the vast majority of people think that preventing a future life of suffering and saving lives both benefit somebody. If so, the vast majority of people would be committed to something which denies your criticism of the definition of EA.

Comment author: MichaelPlant 22 March 2018 05:12:13PM 0 points [-]

weird account of the meaning of 'benefitting others'.

The account might be uncommon in ordinary language, but most philosophers accept that creating a life doesn't benefit the person created. I'm at least being consistent, and I don't think that consistency is objectionable. Calling the view weird is unhelpful.

But suppose people typically think it's odd to claim you're benefiting someone by creating them. Then the stated definition of what EA is about will be at least somewhat misleading to them when you explain EA in greater detail. Consistent with other things I've written on this forum, I think EA should take avoiding being misleading very seriously.

I'm not claiming this is a massive point, it just stuck out to me.

Comment author: Halstead 22 March 2018 10:09:38AM 0 points [-]

On your view, is it good for someone to prevent them from dying? Doesn't the same argument apply? If the person doesn't exist (is dead), there's no object to attach any predicates to.

Comment author: MichaelPlant 22 March 2018 11:34:23AM 0 points [-]

No, I also don't think it makes sense to say death is good or bad for people. Hence it's not true to say you benefit someone by keeping them alive. Given most people do want to say there's something good about keeping people alive, it makes sense to adopt an impersonal locution.

I'm not making an argument about what the correct account of ethics is here; I'm just making a point about the correct use of language. Will's definition can't be capturing what he means and is thus misleading, so 'do the most good' is better than 'benefit others'.

Comment author: Sanjay 21 March 2018 12:49:11PM 0 points [-]

Really? "doing as much good as possible" is confusing people? I tend to use that language, and I haven't noticed people getting confused (maybe I haven't been observant enough!)

Comment author: MichaelPlant 21 March 2018 11:39:26PM 0 points [-]

maybe I haven't been observant enough

I've often observed your lack of observance :)

Comment author: Halstead 21 March 2018 10:27:39AM 3 points [-]

Does it harm someone to bring them into existence with a life of intense suffering?

Comment author: MichaelPlant 21 March 2018 11:38:44PM 0 points [-]

No. It might be impersonally bad though.

Comment author: MichaelPlant 20 March 2018 09:50:23PM *  0 points [-]

The thing I find confusing about what Will says is

effective altruism is the project of using evidence and reason to figure out how to benefit others

I draw attention to 'benefit others'. Two of EA's main causes are farm animal welfare and reducing risks of human extinction. The former is about causing happy animals to exist rather than miserable ones, and the latter is about ensuring future humans exist (and trying to improve their welfare). But it doesn't really make sense to say that you can benefit someone by causing them to exist. It's certainly bizarre to say it's better for someone to exist than not to exist, because if the person doesn't exist there's no object to attach any predicates to. There's been a recent move by some philosophers, such as McMahan and Parfit, to say it can be good (without being better) for someone to exist, but that just seems like philosophical sleight of hand.

A great many EA philosophers - including, I think, Singer, MacAskill, Greaves, and Ord - either are totalists or are very sympathetic to totalism. Totalism is the view that the best outcome is the one with the largest sum of lifetime well-being of all people - past, present, and future - and it is known as an impersonal view in population ethics. On impersonal views, outcomes are not deemed good because they are good for anyone, or because they benefit anyone; they are good because they contain more of the thing which is valuable, namely welfare.

So there's something fishy about saying EA is trying to benefit others when many EA activities, as mentioned, don't benefit anyone, and many EAs think we shouldn't, strictly, be trying to benefit people so much as realising more impersonal value. It would make more sense to replace 'benefit others as much as possible' with 'do as much good as possible'.

Comment author: MichaelPlant 09 March 2018 12:02:31PM 0 points [-]

This sounds promising. Question: what sort of engagement do you want from the EA world? It's not clear to me what you're after. Are you after headline policy suggestions, detailed proposals, people to discuss ideas with, something else?

Comment author: MichaelPlant 21 February 2018 03:31:26PM 6 points [-]

I thought this was very interesting, thanks for writing it up. Two comments:

  1. It was useful to have a list of reasons why you think the EV of the future could be around zero, but I still found it quite vague/hard to imagine (why exactly would more powerful minds be mistreating less powerful minds? etc.), so I would have liked to see that sketched in slightly more depth.

  2. It's not obvious to me that it's correct/charitable to draw MCE so narrowly when assessing its neglectedness. Can't we conceive of a huge amount of moral philosophy, as well as social activism, both new and old, as MCE? Isn't all EA outreach an indirect form of MCE?

Comment author: Daniel_Eth 17 February 2018 02:55:32AM 0 points [-]

As long as we're talking about medical research from an EA perspective, I think we should consider funding therapies for reversing aging itself. In terms of scale, aging is undoubtedly by far the largest problem (100,000 people die from age-related diseases every single day, not to mention the psychological toll that aging causes). Aging is also quite neglected - very few researchers focus on trying to reverse it. Tractability is of course a concern here, but I think this point is a bit nuanced. Achieving a full and total cure for aging would clearly be quite hard. But what about a partial cure? What about a therapy that made 70-year-olds feel and act like they were 50, and with an additional 20 years of life expectancy? Such a treatment may be much more tractable. At least a large part of aging seems to be due to several common mechanisms (such as DNA damage and the accumulation of senescent cells), and reversing some of these mechanisms (such as by restoring DNA or clearing the body of senescent cells) might allow for such a treatment. Even the journal Nature (one of the two most prestigious science journals in the world) had a recent piece saying as much:

If anyone is interested in funding research toward curing aging, the SENS Foundation is arguably your best bet.

Comment author: MichaelPlant 20 February 2018 09:41:35AM 1 point [-]

I'm unsure why this was downvoted. I assume it's because many EAs think X-risk is a better bet than aging research. That would be a reason to disagree with the comment, but not to downvote it, which is snarky. I upvoted for balance.

Comment author: HaukeHillebrandt 19 February 2018 10:54:33AM *  1 point [-]

Great question!

In theory, mission hedging can always beat maximizing expected returns in terms of maximizing expected utility.

In practice, I think the main considerations here are a) whether you can find a suitable hedge in practice and b) whether you are sufficiently certain that a cause is important, because you give up the flexibility of being cause neutral and tie yourself financially to a particular cause. You can remain cause neutral by trying to maximize expected financial returns.

To me, the two most promising applications seem to be AI safety and career choice. People are often quite certain that AI safety is one of the most pressing causes (as per maxipok or preventing s-risk), and investing in AI companies seems plausible to me (but note Kit Harris's objections in the comment section here). Using mission hedging for one's career - by joining the military, the secret service, or an AI company - might also be good, for the reasons outlined above, i.e. historically people in the military have sometimes had outsized impact.
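To make the expected-utility claim above concrete, here is a toy sketch (my own illustration, not from the original comment; all probabilities, payoffs, and utility weights are hypothetical). It assumes two equally likely worlds and that a dollar donated is worth more utility in the world where the cause turns out to be most pressing, so a hedge asset that pays off in exactly that world can beat a portfolio with a higher expected return:

```python
# Toy illustration (hypothetical numbers): mission hedging vs. maximising expected returns.
# Two equally likely worlds: in the "bad" world the cause (e.g. AI risk) turns out to be
# much more pressing, so each dollar donated is worth more utility there.

p_bad = 0.5                                        # probability the cause is highly pressing
utility_per_dollar = {"bad": 10.0, "good": 1.0}    # dollars matter more in the bad world

# Payoffs per $1 invested in each strategy (hypothetical):
# the hedge asset does well precisely when the cause is pressing.
payoffs = {
    "max_expected_returns": {"bad": 1.5, "good": 1.7},   # expected return 1.60
    "mission_hedge":        {"bad": 2.0, "good": 1.1},   # expected return 1.55
}

for name, payoff in payoffs.items():
    expected_return = p_bad * payoff["bad"] + (1 - p_bad) * payoff["good"]
    expected_utility = (p_bad * payoff["bad"] * utility_per_dollar["bad"]
                        + (1 - p_bad) * payoff["good"] * utility_per_dollar["good"])
    print(f"{name}: E[return] = {expected_return:.2f}, E[utility] = {expected_utility:.2f}")

# The return-maximising portfolio has the higher expected return (1.60 vs 1.55), but the
# mission hedge has the higher expected utility (10.55 vs 8.35), because its payoff is
# concentrated in the world where money does the most good.
```

The numbers are made up; the point is only that correlation between an asset's payoff and the cause's marginal importance can outweigh a modest sacrifice in expected return.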

Comment author: MichaelPlant 19 February 2018 03:07:23PM 1 point [-]

Okay, but can you explain why it would beat maximising expected returns?

Here's the thought: maximising expected returns gives me more money than mission hedging. That extra money is a pro tanto reason to think the former is better.

However, mission hedging seems to have advantages, such as in shareholder activism: if evil company X makes money, I will have more cash to undermine it, and other shareholders will know this, thus suppressing X's value. This is a pro tanto reason to favour mission hedging.

How should I think about weighing these pro tanto reasons against one another to establish the best strategy? Apologies if I've missed something here, thinking this way is new to me.

Comment author: MichaelPlant 18 February 2018 10:31:01PM 3 points [-]

I thought this was super interesting, thanks Hauke. The question that sprang to mind: in what circumstances would it do more good to engage in mission hedging vs trying to maximise expected returns?
