Jeffhe comments on Is Effective Altruism fundamentally flawed? (Updated on Apr 10) - Effective Altruism Forum


Comment author: Alex_Barry 13 April 2018 09:03:31AM 0 points [-]

are you using "bad" to mean "morally bad?"

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

I bring this up since you are approaching this from a different angle than usual, which makes people's standard lines of reasoning seem more complex.

A couple of brief points in favour of the classical approach: it in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

I'll discuss this in a separate comment since I think it is one of the strongest argument against your position.

I don't know much about the veil of ignorance, so I am happy to give you that it does not support total utilitarianism.

I believe it is not always right to prevent the morally worse case.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Comment author: Jeffhe  (EA Profile) 13 April 2018 07:58:31PM *  0 points [-]

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:

  1. Assign a moral value to each person's experiences based on its overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such minor headaches is about as experientially bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.

  2. We then add up the moral value assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.
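As a concrete illustration of this two-step procedure, here is a minimal Python sketch (not part of the original discussion; the unit values, and the assumption that 5 minor headaches are experientially about as bad as 1 major one, are purely hypothetical):

```python
# Step 1: assign a single moral value to each person's overall experience.
# Hypothetical conversion: a minor headache counts as 1 unit and a major
# headache as 5 units, so 5 minor headaches feel about as bad as 1 major one.
def personal_value(minor_headaches=0, major_headaches=0):
    return minor_headaches * 1 + major_headaches * 5

# Step 2: sum the personal values into one global figure per state of affairs,
# then compare the global figures.
def global_value(people):
    return sum(personal_value(**person) for person in people)

state_a = [{"minor_headaches": 5}]   # one person facing 5 minor headaches
state_b = [{"major_headaches": 1}]   # another person facing 1 major headache
print(global_value(state_a), global_value(state_b))  # 5 5 -> judged equally bad
```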

This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's lives or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy and Susie because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's deaths being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance.

In any case, if we assign a moral value to each person's experience in the same way that we might assign a moral value to each person's life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1., I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I've attributed to kbog. (I added the "[...]" to make the sentence structure clearer.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don't think what we ought to do is to OUTRIGHT prevent the morally worse case; rather, we ought to give a higher chance to preventing the morally worse case proportional to how much morally worse it is than the other case. I will say more about this below.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Please don't be alarmed (haha). I assume you're aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognize other side constraints, such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint applies to some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.

You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here's an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of "more pain" (i.e. my way of quantifying and comparing pains), the case of the single person suffering the major headache is morally worse than that of the 100 people each suffering a minor headache, because his major headache is experientially worse than any of the other people's minor headaches. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is, experientially speaking, than that of any one of the others (i.e. how much morally worse his suffering is relative to the 100's suffering).

Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person's suffering.
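For concreteness, the proportional-chance rule I have in mind could be sketched as a weighted lottery like the one below. The numbers are hypothetical; in particular, treating the major headache as 5 units is just an illustrative assumption, not a claim about the actual experiential ratio.

```python
import random

# Weighted lottery: each person's chance of being helped is proportional to
# how bad their suffering is (illustrative unit values only).
def choose_whom_to_help(suffering_by_person):
    people = list(suffering_by_person)
    weights = [suffering_by_person[p] for p in people]
    return random.choices(people, weights=weights, k=1)[0]

# 100 people with minor headaches (1 unit each) and 1 person with a major
# headache (assumed to be 5 units): the major-headache sufferer gets a 5/105
# chance of being helped, and each minor-headache sufferer a 1/105 chance.
suffering = {f"minor_{i}": 1 for i in range(100)}
suffering["major"] = 5
print(choose_whom_to_help(suffering))
```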

I hope that helps.

Comment author: Alex_Barry 13 April 2018 09:46:19PM *  0 points [-]

On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most).

(Again, my preferred usage of 'morally worse/better' is basically defined so that one should always pick the 'morally best' action. You could do that in this case by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point...)

How much would you be willing to trade off helping people versus the help being distributed fairly? E.g. would you prefer a 95% chance of helping people in proportion to their suffering, with a 5% chance of helping no one, or a 100% chance of only helping the person suffering the most?

In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possible that you are not willing to trade off any amount of 'actually helping people' in favour of it; but then it seems strange that you argue for it so forcefully.

As a separate point, this form of reasoning seems rather incompatible with your claims about 'total pain' being morally important and determined solely by whoever is experiencing the most pain. If you follow your approach and give some chance of helping people who are not experiencing the most pain, then in the cases where you do help them the 'total pain' does not change at all!

For example:

  • Suppose Alice is experiencing 10 units of suffering (by some common metric)
  • 10n people (call them group B) are experiencing 1 unit of suffering each
  • We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the cases where we help someone from group B, the level of 'total pain' remains at 10, as Alice is not helped.

This means that a proportion n/(n+1) of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
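A quick sketch of that arithmetic (assuming the proportional-chance rule above, and taking 'total pain' to be the maximum pain anyone is experiencing):

```python
# Probability that the 'total pain' (the maximum pain anyone is experiencing)
# is left unchanged under the proportional-chance rule in the example above.
def chance_total_pain_unchanged(n):
    alice = 10                                 # Alice's suffering
    group_b_total = 1 * (10 * n)               # 10n people suffering 1 unit each
    p_alice = alice / (alice + group_b_total)  # 10/(10+10n) = 1/(n+1)
    return 1 - p_alice                         # helping anyone in B leaves the max at 10

for n in (1, 10, 1000):
    print(n, chance_total_pain_unchanged(n))   # 0.5, ~0.909, ~0.999 -> approaches 1
```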

Finally I find the claim that this is actually the fairer or more empathetic approach unconvincing. I would argue that whatever fairness you gain by letting there be some chance you help the person experiencing the second-most suffering is outweighed by your unfairness to the person suffering the most.

Indeed, for another example:

  • Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
  • However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.

You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However, in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".

(This was perhaps a needlessly emotive example, but I wanted to hammer home how completely terrible it could be to help the person not suffering the most. If you have a choice between not rolling a die, and rolling a die with a chance of terrible consequences, why take the chance?)

Comment author: Alex_Barry 13 April 2018 08:43:32PM *  0 points [-]

So you're suggesting that most people aggregate different people's experiences as follows:

Well most EAs, probably not most people :P

But yes, I think most EAs apply this 'merchandise' approach, weighted by conscious experience.

Regarding your discussion of moral theories and side constraints: I know there is a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of which action is right, then you should make this clear from the start (and ideally bake it into what you mean by 'morally worse').

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor", then which actions you say are right will depend solely on how they affect X.

Hence if you say 'what is morally relevant is the maximal pain being experienced by someone', then I expect that all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone.

Obviously language is flexible but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles).

I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I'll make a separate comment to discuss it.

Comment author: Jeffhe  (EA Profile) 13 April 2018 10:01:35PM *  0 points [-]

So you're suggesting that most people aggregate different people's experiences as follows:

FYI, I have since reworded this as "So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:"

I think it is a more precise formulation. In any case, we're on the same page.

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor", then which actions you say are right will depend solely on how they affect X.

The way I phrased Objection 1 was as follows: "One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie."

Notice that this objection in argument form is as follows:

P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.

P2) We ought to prevent the morally worst case.

C) Therefore, we should help Amy and Susie over Bob.

My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

Given this premise, I've been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of "involves more pain than". So I recently started arguing that my sense is the sense that really matters.

Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But other moral theorists would not, because not all of the things that they take to matter (i.e. to be morally relevant, to have moral value, etc.) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience matters, and who suffers it matters. I think the latter morally relevant thing is best captured as a side constraint.

However, you are right that I should make this aspect of my work more clear.

Comment author: Alex_Barry 13 April 2018 10:14:03PM *  0 points [-]

Some of your quotes are broken in your comment; you need a > for each paragraph (and two >s for double quotes, etc.).

I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!

I actually think most (maybe all?) moral theories can be baked into goodness/badness of states of affairs. If you want to incorporate a side constraint, you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible.

In any case, as I have given you plenty of other comment threads to think about, I am happy to leave this one here - my point was just a call for clarity.

Comment author: Jeffhe  (EA Profile) 13 April 2018 11:51:40PM *  1 point [-]

I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.

By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched?

And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?

Comment author: Alex_Barry 14 April 2018 07:54:28AM *  0 points [-]

Yes, "switched" was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.

To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things).