JanBrauner comments on Is Effective Altruism fundamentally flawed? - Effective Altruism Forum


Comment author: JanBrauner 13 March 2018 09:02:02AM 4 points

You think aggregating welfare across individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for one person or for each of a million people.

You conclude that these values recommend giving to charities that directly address the sources of the most intense individual suffering, and that one should choose between them not by cost-effectiveness but randomly. One should not give to, say, GiveDirectly, which does not directly tackle the most intense suffering.

This conclusion seems correct only for clear-cut textbook examples. In the real world, I think, your values fail to recommend anything. You can never know for certain how many people you are going to help. Everything is probabilities and expected value:

Say, for the sake of argument, you think that severe depression is the cause of the most intense individual suffering. You could give your $10,000 to a mental health charity, and in expectation they will prevent 100 people (made-up number) from getting severe depression.

However, if you give $10,000 to GiveDirectly, that will certainly affect the recipients strongly, and maybe in expectation prevent 0.1 cases of severe depression.

Actually, if you take your $10,000 and buy that sweet, sweet Rolex with it, there is a tiny chance that this will prevent the jewelry store owner from going bankrupt, being dumped by their partner and, well, developing severe depression. $10,000 to the jeweller prevents an expected 0.0001 cases of severe depression.

So, given your values, you should be indifferent between those options.

Even worse, all three actions also harbour tiny chances of causing severe depression. Even the mental health charity, for every 100 patients it prevents from developing depression, will maybe cause depression in 1 patient (because interventions sometimes have adverse effects, ...). So if you decide between burning the money and giving it to the mental health charity, you decide between an option that affects nobody and one that in expectation prevents 100 episodes of depression but causes 1. A decision that, given your stated values, you are indifferent between.
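The expected-value comparison above can be sketched with a few lines of Python. All figures are the made-up numbers from this comment; the "caused" figures for GiveDirectly and the Rolex purchase are my own illustrative assumptions, not claims from the comment.

```python
# Net expected cases of severe depression prevented per $10,000,
# using the made-up numbers from the comment. "caused" values for
# GiveDirectly and the Rolex purchase are illustrative assumptions.
options = {
    "mental health charity": {"prevented": 100.0, "caused": 1.0},
    "GiveDirectly": {"prevented": 0.1, "caused": 0.001},
    "Rolex purchase": {"prevented": 0.0001, "caused": 0.0001},
    "burning the money": {"prevented": 0.0, "caused": 0.0},
}

for name, ev in options.items():
    net = ev["prevented"] - ev["caused"]
    print(f"{name}: net expected episodes prevented = {net:g}")
```

Expected-value reasoning ranks the charity far above the alternatives, while the non-aggregative view under discussion, which treats any nonzero chance of preventing the worst suffering alike, cannot distinguish between them.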

Further arguments why approaches that try to avoid interpersonal welfare aggregation fail in the real world can be found here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781092

Comment author: Jeffhe (EA Profile) 13 March 2018 11:41:04PM * 0 points

Hi Jan,

Thanks a lot for your response.

I wonder if it is too big a concession to say that "This conclusion seems correct only for clear-cut textbook examples." My argument against effective altruism was an attempt to show that it is theoretically/fundamentally flawed, even if (per your objection) I can't criticize the actual pattern of donation it is responsible for (e.g. pushing a lot of funding to GiveDirectly). That said, I will offer a response to your objection.

I remember listening to a podcast featuring Professor MacAskill (one of the presumed founders of EA) where he was recounting a debate he had with someone (I can't remember who). That person raised (if I remember correctly) the following objection: if there were a burning house and you could either save the boy trapped inside or a painting hanging on the wall, which you could sell and use the money to save 100 kids in a third-world country from a pain similar to what the boy would face, you should obviously save the boy. But EA says to save the painting; therefore EA is false. Professor MacAskill's response (if I remember correctly) was to bite the bullet and say that, while it might be hard to stomach, saving the painting really is what we should do.

If Professor MacAskill's view represents EA's position, then I assume there is an issue if you concede that we should flip a coin in such a case.

Regarding whether my argument recommends anything in the real world, I think it does.

First, just to be clear, since we cannot give each person a chance of being helped that is proportionate to what they have to suffer, I said that I personally would choose to use my money to help anyone among the class of people who stand to suffer the most (see Section F). I wouldn't try to give each of the people in this class an equal chance, because that is equally impossible; I would simply choose to help those whom I come across or know about, I guess. Note that I didn't originally explain why I would choose to help this class of people, but the reason is simply that, were it possible to give each person a chance of being helped proportional to their suffering, those who stand to suffer the most would have the highest chance of winning. (I have since updated the post to include this explanation, thanks.)

I think, now that I have clarified my position, it should be clear that my way of seeing things can recommend actions. There are many opportunities where donating almost certainly prevents or alleviates some extreme suffering for someone. Maybe depression is not one of those cases, but I would imagine that severe malnutrition is very painful. So is torture (which can often be prevented if a ransom is paid). Since the pattern of donation that EA promotes is likely very different from the pattern that arises from my way of seeing things, my view provides a real alternative practically speaking (though maybe only up to a limit, beyond which the patterns of donation would converge).

Btw, I would not be absolutely against giving to GiveDirectly if there were a statistically good chance that they will prevent or alleviate one of the worst kinds of suffering for at least one person AND there were no cheaper practical way to help that very person (which is likely the case, because we don't even know who that person is). However, I would personally donate to charities where there is near certainty of prevention or alleviation, simply because then, at the end of the day, my donation actually helped someone, whereas a statistically good chance may not pan out, in which case I haven't helped the worst off.

Yes, by doing so, I perhaps end up allowing someone to suffer in one of the worst ways who otherwise wouldn't have suffered had I (and everyone else) given to GiveDirectly. But, as I made clearer in Section F, there is no way to give each person an appropriate chance of being helped, not even if we considered only those people who stand to suffer the worst. And so, at the end of the day, I am forced to make a choice to help a particular person anyway.