
Is Effective Altruism fundamentally flawed?

Update on Mar 21: I have completely reworked my response to Objection 1 to make it more convincing and hopefully clearer. I would also like to thank everyone who has responded thus far, in particular brianwang712, Michael_S, kbog and Telofy for sustained and helpful discussions.

Update on Apr 10: I have added a new objection (Objection 1.1) that captures an objection kbog and Michael_S have raised to my response to Objection 1. I'd also like to thank Alex_Barry for a sustained and helpful discussion.

Update on Apr 24: I have temporarily removed Objection 1.1; it is being revised for clarity.

 

Hey everyone,

This post is perhaps unlike most on this forum in that it questions the validity of effective altruism rather than assumes it.

A. Some background:

I first heard about effective altruism when professor Singer gave a talk on it at my university a few years ago while I was an undergrad. I was intrigued by the idea. At the time, I had already decided that I would donate the vast majority of my future income to charity because I thought that preventing and/or alleviating the intense suffering of others is a much better use of my money than spending it on personal luxuries. However, the idea of donating my money to effective charities was a new one to me. So, I considered effective altruism for some time, but soon I came to see a problem with it that to this day I cannot resolve. And so I am not an effective altruist (yet).

Right now, my stance is that the problem I've identified is a very real problem. However, given that so many intelligent people endorse effective altruism, there is a good chance I have gone wrong somewhere. I just can’t see where. I'm currently working on a donation plan and completing the plan requires assessing the merits of effective altruism. Thus, I would greatly appreciate your feedback. 

Below, I state the problem I see with effective altruism, some likely objections and my responses to those objections.

Thanks in advance for reading! 

 

B. The problem I see with effective altruism:

Suppose we find ourselves in the following choice situation: With our last $10, we can either help Bob avoid an extremely painful disease by donating our $10 to a charity working in his area, or we can help Amy and Susie each avoid an equally painful disease by donating our $10 to a more effective charity working in their area, but we cannot help all three. Who should we help?

Effective altruism would say that we should help the group consisting of Amy and Susie since that is the more effective use of our $10. Insofar as effective altruism says this, it effectively denies Bob (and anyone else in his place) any chance of being helped. But that seems counter to what reason and empathy would lead me to do.

Yes, Susie and Amy are two people, and two is more than one, but were they to suffer (as would happen if we chose to help Bob), it is not like any one of them would suffer more than what Bob would otherwise suffer. Indeed, were Bob to suffer, he would suffer no less than either Amy or Susie. Susie’s suffering would be felt by Susie alone. Amy’s suffering would be felt by Amy alone. And neither of their suffering would be greater than Bob’s suffering. So why simply help them over Bob rather than give all of them an equal chance of being helped by, say, tossing a coin? (footnote 1)

Footnote 1: A philosopher named John Taurek first discussed this problem and proposed this solution in his paper "Should the Numbers Count?" (1977) 

 

C. Some likely objections and my responses:

Objection 1:

One might reply that two instances of suffering are morally worse than one instance of the same kind of suffering, and that we should prevent the morally worse case (i.e., the two instances of suffering), so we should help Amy and Susie.

My response:

I don’t think two instances of suffering, spread across two people (e.g. Amy and Susie), is a morally worse case than one instance of the same kind of suffering had by one other person (e.g. Bob). I think these two cases are just as bad, morally speaking. Why’s that? Well, first of all, what makes one case morally worse than another? Answer: Morally relevant factors (i.e. things of moral significance, things that matter). Ok, and what morally relevant factors are present here? Well, experience is certainly one - in particular the severe pain that either Bob would feel or Susie and Amy would each feel, if not helped (footnote 2). Ok. So we can say that a case in which Amy and Susie would each suffer said pain is morally worse than a case in which only Bob would suffer said pain just in case there would be more pain or greater pain in the former case than in the latter case (i.e. iff Amy’s pain and Susie’s pain would together be experientially worse than Bob’s pain.)

Footnote 2: In my response to Objection 2, it will become clear that I think something else matters too: the identity of the sufferer. In other words, I don't just think suffering matters; I also think who suffers it matters. However, unlike the morally relevant factor of suffering, I don't think it's helpful to understand this second morally relevant factor as affecting the moral worseness of a case, although one could understand it this way. Rather, I think it's better to accommodate its force by denying that we should always prevent the morally worst case (i.e. the case involving the most suffering). If you find this result deeply unintuitive, then maybe it's better for you to understand this second morally relevant factor as affecting the moral worseness of a case, which allows you to say that what we should always do is prevent the morally worse case. In any case, ignore the morally relevant factor of identity for now, as I haven't yet argued for why it is morally relevant.

Here, it's helpful to keep in mind that more/greater instances of pain do not necessarily mean more/greater pain. For example, 2 very minor headaches are more instances of pain than 1 major headache, but they need not involve more pain than a major headache (i.e., they need not be experientially worse than a major headache). Thus, while there would clearly be more instances of pain in the former case than in the latter case (i.e. 2 vs 1; Amy's and Susie's vs Bob's), that does not necessarily mean that there would be more pain.

So the key question for us then is this: Are 2 instances of a given pain, spread across two people (e.g. Amy and Susie), experientially worse (i.e. do they involve more/greater pain) than one instance of the same pain had by one person (e.g. Bob)? If they are (call this thesis “Y”), then a case in which Amy and Susie would each suffer a given pain is morally worse than a case in which only Bob would suffer the given pain. If they aren’t (call this thesis “N”), then the two cases are morally just as bad, in which case Objection 1 would fail, even if we agreed that we should prevent the morally worse case.

Here’s my argument against Y:

Suppose that 5 instances of a certain minor headache, all experienced by one person, are experientially worse than a certain major headache experienced by one person. That is, suppose that any person in the world who has an accurate idea/appreciation of what 5 instances of this certain minor headache feels like and of what this certain major headache feels like would prefer to endure the major headache over the 5 minor headaches if put to the choice. Under this supposition, someone who holds Y must also hold that 5 minor headaches, spread across 5 people, are experientially worse than a major headache had by one person. Why? Because, at bottom, someone who holds Y must also hold that 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by one person.

So let's assess whether 5 minor headaches, spread across 5 people, really are experientially worse than a major headache had by one person. Given the supposition above, consider first what makes a single person who suffers 5 minor headaches experientially worse off than a person who suffers just 1 major headache, other things being equal.

Well, imagine that we were this person who suffers 5 minor headaches. We suffer one minor headache one day, suffer another minor headache sometime after that, then another after that, etc. By the end of our 5th minor headache, we will have experienced what it’s like to go through 5 minor headaches. After all, we went through 5 minor headaches! Note that the what-it’s-like-of-going-through-5-headaches consists simply in the what-it’s-like-of-going-through-the-first-minor-headache then the what-it’s-like-of-going-through-the-second-minor-headache  then the what-it’s-like-of-going-through-the-third-minor-headache, etc. Importantly, the what-it’s-like-of-going-through-5-headaches is not whatever we experience right after having our 5th headache (e.g. exhaustion that might set in after going through many headaches or some super painful headache that is the "synthesis" of the intensity of the past 5 minor headaches). It is not a singular/continuous feeling like the feeling we have when we're experiencing a normal pain episode. It is simply this: the what-it’s-like of going through one minor headache, then another (some time later), then another, then another, then another. Nothing more. Nothing less.

Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches even though we in fact have experienced what it’s like to go through 5 minor headaches. As a result, if someone asked us whether we’ve been through more pain due to our minor headaches or more pain through a major headache that, say, we recently experienced, we would likely incorrectly answer the latter.

But, if we did have an accurate appreciation of what it’s like to go through 5 minor headaches, say, because we experienced all 5 of them rather recently, then there would be a clear sense to us that going through them was experientially worse than the major headache. The 5 minor headaches would each be “fresh in our mind”, and thus the what-it’s-like-of-going-through-5-minor-headaches would be “fresh in our mind”. And with that what-it’s-like fresh in mind, it would seem clear to us that it caused us more pain than the major headache did.

Now, a headache being “fresh in our mind” does not mean that the headache needs to be so fresh that it is qualitatively the same as experiencing a real headache. Being fresh in our mind just means we have an accurate appreciation/idea of what it feels like, just as we have some accurate idea of what our favorite dish tastes like.

Because we have appreciations of our past pains (to varying degrees of accuracy), we sometimes compare them and have a clear sense that one set of pains is worse than another. But it is not the comparison and the clear sense we have of one set of pains being worse than another that ultimately makes one set of pains worse than another. Rather, it is the other way around: it is the what-it’s-like-of-having-5-minor-headaches that is worse than the what-it’s-like-of-having-a-major-headache. And if we have an accurate appreciation of both what-it’s-likes, then we will conclude the same. But, when we don’t, then our own conclusions could be wrong, like in the example provided earlier of a forgotten minor headache.

So, at the end of the day, what makes a person who has 5 minor headaches worse off than a person who has 1 major headache is the fact that he experienced the what-it’s-like-of-going-through-5-minor-headaches. 

But, in the case where the 5 minor headaches are spread across 5 people, there is no longer the what-it’s-like-of-going-through-5-minor-headaches because each of the 5 headaches is experienced by a different person. As a result, the only what-it’s-like that is present is the what-it’s-like-of-experiencing-one-minor-headache. Five different people each experience this what-it’s-like, but no one experiences what-it’s-like-of-going-through-5-minor-headaches. Moreover, the what-it’s-like of each of the 5 people cannot be linked to form the what-it’s-like-of-experiencing-5-minor-headaches because the 5 people are experientially independent beings.

Now, it's clearly the case that the what-it’s-like-of-going-through-1-minor-headache is not experientially worse than the what-it’s-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, in the case where the 5 minor headaches are spread across 5 people, there is nothing present that could be experientially worse than the what-it’s-like-of-going-through-a-major-headache. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus are not) worse, experientially speaking, than one major headache.

Indeed, five independent what-it's-likes-of-going-through-1-minor-headache is very different from a single what-it's-like-of-going-through-5-minor-headaches. And given a moment's reflection, one thing should be clear: only the latter what-it's-like can plausibly be experientially worse than a major headache. 

Thus, one should not treat 5 minor headaches spread across 5 people as being experientially just as bad as 5 minor headaches all had by 1 person. The latter is experientially worse than the former. The latter involves more/greater pain. 

We can thus make the following argument against Y:

P1) If Y is true, then 5 minor headaches spread across 5 people is experientially just as bad as 5 minor headaches all had by 1 person.

P2) But that is not the case (since 5 minor headaches all had by 1 person is experientially worse than 5 minor headaches spread across 5 people).

C) Therefore Y is false. And therefore Objection 1 fails, even if it's granted that we should prevent the morally worse case.

Objection 1.1: (temporarily removed for revision)

Objection 1.2:

One might reply that experience is a morally relevant factor, but when the amount of pain in each case is the same (i.e. when the cases are experientially just as bad), the number of people in each case also becomes a morally relevant factor. Since the case in which Amy and Susie would each suffer involves more people, therefore, it is still the morally worse case. 

My response:

I will respond to this objection in my response to Objection 2.

Objection 1.3:

One might reply that the number of people involved in each case is a morally relevant factor in and of itself (i.e. completely independent of the amount of pain in each case). That is, one might say that the inherent moral relevance of the number of people involved in each case must be reconciled with the inherent moral relevance of the amount of pain in each case, and that therefore, in principle, a case in which many people would each suffer a relatively lesser pain can be morally worse than a case in which one other person would suffer a relatively greater pain, so long as there are enough people on the side of the many. For example, between helping a million people avoid depression or one other person avoid very severe depression, one might have the intuition that we should help the million, i.e. that a case in which a million people would suffer depression is morally worse.

My response:

I don’t deny that many people have this intuition, but I think this intuition is based on a failure to recognize and/or appreciate some important facts. In particular, I think that if you really kept in the forefront of your mind the fact that not one of the million would suffer worse than the one, and the fact that the million of them together would not suffer worse than the one (assuming my response to Objection 1 succeeds), then your intuition would not be as it is (footnote 3).

Nevertheless, you might still feel that the million people should still have a chance of being helped. I agree, but this is not because of the sheer number of them involved. Rather, it is because which individual suffers matters. (Please see my response to Objection 2.)

Footnote 3: For those familiar with Derk Pereboom’s position in the free will debate, he makes an analogous point. He doesn’t think we have free will, but admits that many have the intuition that we do. But he points out that this is because we are generally not aware of the deterministic psychological/neurological/physical causes of our actions. But once we become aware of them – once we have them in the forefront of our minds – our intuition would not be that we are free. See pg 95 of “Free Will, Agency, and Meaning in Life” (Pereboom, 2014)

 

Objection 2:

One might reply that we should help Amy and Susie because either of their suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.

My response:

I don’t think one person’s suffering can neutralize/cancel out another person’s suffering because who suffers matters. Which individual it is that suffers matters because it is the sufferer who bears the complete burden of the suffering. It is the particular person who ends up suffering that feels all the suffering. This is an obvious fact, but it is also a very significant fact when properly appreciated, and I don’t think it is properly appreciated.

Think about it. The particular person(s) who suffers has to bear everything. If we save Amy and Susie, it is Bob - that particular vantage point on the world - who has to feel all of the suffering (which, it bears remembering, is suffering that would be no less painful than the suffering Amy and Susie would each otherwise endure). The same, of course, is true of each of Amy and Susie were we to save Bob.

I fear that saying any more might make the significance of the fact I’m pointing to less clear. For those who appreciate the significance of what I’m getting at, it should be clear that neither Amy’s nor Susie’s suffering can be used to neutralize/cancel out Bob’s suffering, and vice versa. Yes, it’s the same kind of suffering, but it’s importantly different whether Amy and Susie each experience it or Bob experiences it, because again, whoever experiences it is the one who has to bear all of it.

Notice that this response to objection 2 is importantly compatible with empathizing with every individual involved (e.g., Amy, Susie and Bob). Indeed, to empathize with only select individuals is biased. Yet, it seems to me that many people are in fact likely to forget to empathize with the group containing the fewer number. Note that as I understand it, to empathize with someone is to imagine oneself in their shoes and to care about that imagined perspective.

Also, notice that this response to objection 2 also deals with Objection 1.2 since this response argues against (what seems to me) the only plausible way in which the number of people involved might be thought to be relevant when the amount of pain involved in each case is the same: when the amount of pain involved in each case is the same, it might be thought that one person's pain can neutralize or cancel out another person's pain, e.g. that the suffering Amy would feel can neutralize or cancel out the suffering Bob would feel, leaving only the suffering that Susie would feel left in play, and that therefore the case in which Amy and Susie would suffer is morally worse than the case in which Bob would suffer. But if my response to Objection 2 is right, then this thought is wrong.

Just to be clear, this is not to say that I think one person’s suffering cannot balance (or, in the case of greater suffering, outweigh) another person’s equal (or lesser) suffering such that the reasonable and empathetic thing to do is to give the person who would face the greater suffering a higher chance of being helped. In fact, I think it can. But balancing is not the same as neutralizing/canceling out. Bob’s suffering balances out Amy’s suffering, and it also independently balances out Susie’s suffering, precisely because Bob’s suffering does not get neutralized/cancelled out by either of their suffering.

My own view is that we should give the person who would face the greater suffering a higher chance of being saved in proportion to how much greater his suffering would be relative to the suffering that the other person(s) would each otherwise face. We shouldn't automatically help him just because he would face a greater suffering if not helped. After all, who suffers matters, and this includes those who would be faced with the lesser suffering if not helped (footnote 4).

Footnote 4: My own view is slightly more complicated than this, but those details aren't important given the simple sorts of choice situations discussed in this essay.

Going back to Objection 1.3, this then explains why I agree that we should still give those who would each suffer a less serious depression a chance of being helped, even though the one other person would suffer more if not saved. Importantly, the number of people who would each suffer the less serious depression is irrelevant. I would give them a chance of being saved whether they are 2 persons or a million or a billion. How high a chance would I give them? In proportion to how their depression compares in suffering to the single person’s severe depression. So, if it involves slightly less suffering, I would give them around a 48% chance of being helped. If it involves a lot less suffering, then I would give them a much lower chance (footnote 5).

Footnote 5: Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than endure the torture episode. This would explain why, in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.
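For concreteness, the proportional-chance procedure described above can be sketched as a weighted lottery. This is purely an illustration, not a part of the view itself; the function name, the 0-100 severity scale, and the particular numbers are assumptions.

```python
import random

def choose_group(groups):
    """Pick one group to help, with probability proportional to the
    severity of the suffering its members would each face.

    `groups` maps a group label to the per-person severity of the
    suffering, on an assumed 0-100 scale (illustrative numbers only).
    Group size is deliberately ignored, per the view defended above.
    """
    names = list(groups)
    weights = [groups[n] for n in names]  # per-person severity, not headcount
    return random.choices(names, weights=weights, k=1)[0]

# Bob alone vs. Amy and Susie: equal per-person severity, so this is
# equivalent to a fair coin flip, regardless of group sizes.
choice = choose_group({"Bob": 50, "Amy and Susie": 50})

# One person facing severe depression vs. a million people each facing
# a slightly lesser depression: the million get roughly a 48% chance.
choice2 = choose_group({"one (severe)": 52, "million (lesser)": 48})
```

Note that the number of people in a group never enters the weights; only the per-person severity does. That is exactly what distinguishes this procedure from the "save the greater number" rule.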

 

Objection 3:

One might reply that from “the perspective of the universe” or “moral perspective” or “objective perspective”, either Amy’s or Susie’s suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.

My response:

As I understand it, the perspective of the universe is the impartial or unbiased perspective where personal biases are excluded from consideration. As a result, such a perspective entails that we should give equal weight to equal suffering. For example, whereas I would give more weight to my own suffering than to the equal suffering of others (due to the personal bias involved in my everyday personal perspective), if I took on the perspective of the universe, I would have to at least intellectually admit that their equal suffering matters the same amount as mine. Of course, it doesn’t matter the same amount as mine from my perspective. It matters the same amount as mine from the perspective of the universe that I have taken on. We might say it matters the same amount as mine period. However, none of this entails that, from the perspective of the universe, which individual suffers doesn’t matter – that whether it is I who suffers X or someone else who suffers X doesn’t matter. Clearly it does matter for the reason I gave earlier. Giving equal weight to equal suffering does not entail that who suffers said suffering doesn’t matter. It is precisely because it matters that in a choice situation in which we can either save person A from suffering X or person B from suffering X we think we should flip a coin to give each an equal chance of being saved, rather than, say, choosing one of them to save on a whim. This is our way of acknowledging that A suffering is importantly different from B suffering -  that who suffers matters.

Even if I'm technically wrong about what the perspective of the universe - as understood by utilitarians - amounts to, all that shows is that the perspective of the universe, so understood, is not the moral perspective. For who suffers matters (assuming my response to Objection 2 is correct), and so the moral perspective must be one from which this fact is acknowledged. Any perspective from which it isn't therefore cannot be the moral perspective. 

  

D. Conclusion:

I therefore think that according to reason and empathy, Bob should be accorded an equal chance of being helped (say, via flipping a coin) as Amy and Susie. This conclusion holds regardless of the number of people added to Amy and Susie’s group, as long as the kind of suffering remains the same. So, for example, if with a $X donation we can either help Bob avoid an extremely painful disease or help a million other people avoid the same painful disease, but not both, reason and empathy would say to flip a coin – a conclusion that is surely against effective altruism.

 

E. One final objection:

One might say that this conclusion is too counter-intuitive to be correct, and that therefore something must have gone wrong in my reasoning, even though it may not be clear what that something is.

My response:

But is it really all that counter-intuitive when we bear in mind all that I have said? Importantly, let us bear in mind three facts:

1) Were we to save the million people instead of Bob, Bob would suffer in a way that is no less painful than any one of the million others otherwise would. Indeed, he would suffer in a way that is just as painful as any one among the million. Conversely, were we to save Bob, no one among the million would suffer in a way that is more painful than Bob would otherwise suffer. Indeed, the most any one of them would suffer is the same as what Bob would otherwise suffer.

2) The suffering of the million would involve no more pain than the pain Bob would feel (assuming my response to Objection 1 is correct). That is, a million instances of the given painful disease, spread across a million people, would not be experientially worse - would not involve more pain or greater pain - than one instance of the same painful disease had by Bob. (Again, keep in mind that more/greater instances of a pain does not necessarily mean more/greater pain.)

3) Were we to save the million and let Bob suffer, it is he – not you, not me, and certainly not the million of others – who has to bear that pain. It is that particular person, that unique sentient perspective on the world who has to bear it all.

In such a choice situation, reason and empathy tell me to give him an equal chance of being saved. To just save the million seems to me to completely neglect what Bob has to suffer, whereas my approach seems to neglect no one.

Comments (124)

Comment author: brianwang712 13 March 2018 02:27:53PM 10 points

One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate either two of them suffering or one of them suffering, but they didn't know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie's suffering vs. alleviating Amy's suffering, or Bob + Amy's suffering vs. Susie's suffering, etc.), then they would all sign an agreement directing you to send a donation such that you would alleviate two people's suffering rather than one, since this would give each of them the best chance of having their suffering alleviated. This is related to Rawls' veil of ignorance argument.

And if Bob, Susie, Amy, and a million others were to sign an agreement directing your choice to donate $X to alleviate one person's suffering or a million peoples' suffering, again all of them behind a veil of ignorance, none of them would hesitate for a second to sign an agreement that said, "Please donate such that you would alleviate a million people's suffering, and please oh please don't just flip a coin."

More broadly speaking, given that we live in a world where people have competing interests, we have to find a way to effectively cooperate such that we don't constantly end up in the defect-defect corner of the Prisoner's Dilemma. In the real world, such cooperation is hard; but in an ideal world, such cooperation would essentially look like people coming together to sign agreements behind a veil of ignorance (not necessarily literally, but at least people acting as if they had done so). And the upshot of such signed agreements is generally to make the interpersonal-welfare-aggregative judgments of the type "alleviating two people's suffering is better than one", even if everyone agrees with the theoretical arguments that the suffering of two people on opposite sides don't literally cancel out, and that who's suffering matters.

Bob, Susie, Amy, and the rest of us all want to live in a world where we cooperate, and therefore we'd all want to live in a world where we make these kinds of interpersonal welfare aggregations, at the very least during the kinds of donation decisions in your thought experiments.

(For a much longer explanation of this line of reasoning, see this Scott Alexander post: http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/)

Comment author: Jeffhe (EA Profile) 13 March 2018 10:03:42PM 1 point

Hi Brian,

Thanks for your comment and for reading my post!

Here's my response:

Bob, Susie and Amy would sign the agreement to save the greater number if they assumed that they each had an equal chance of being in any of their positions. But, is this assumption true? For example, is it actually the case that Bob had an equal chance to be in Amy's or Susie's position? If it is the case, then saving the greater number would in effect give each of them a 2/3 chance of being saved (the best chance as you rightly noted). But if it isn't, then why should an agreement based on a false assumption have any force? Suppose Bob, in actuality, had no chance of being in Amy's or Susie's position, then is it really in accordance with reason and empathy to save Amy and Susie and give Bob zero chance?
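To make the "2/3 chance" figure concrete, here is a quick simulation of the save-the-greater-number policy, under the very assumption in question: that who ends up alone versus in the pair is decided uniformly at random. This is an illustrative sketch only; the names, trial count, and seed are assumptions.

```python
import random

def chance_saved(trials=100_000, seed=0):
    """Estimate each person's chance of being helped under the policy
    'always save the greater number', assuming each person is equally
    likely to end up in the lone position or in the pair."""
    rng = random.Random(seed)
    people = ["Bob", "Amy", "Susie"]
    saved = {p: 0 for p in people}
    for _ in range(trials):
        alone = rng.choice(people)   # one person randomly faces the pair
        for p in people:
            if p != alone:           # the policy always saves the pair
                saved[p] += 1
    return {p: saved[p] / trials for p in people}

chances = chance_saved()  # each value comes out close to 2/3
```

If the equal-chance assumption fails - if, say, Bob in fact had no chance of being in Amy's or Susie's position - then the simulated 2/3 figure no longer describes Bob's actual prospects, which is precisely the worry raised above.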

Intuitively, for Bob to have had an equal chance of being in Amy's position or Susie's position or his actual position, he must have had an equal chance of living Amy's life or Susie's life or his actual life. That's how I intuitively understand a position: as a life position. To occupy someone's position is to be in their life circumstances - to have their life. So understood, what would it take for Bob to have had an equal chance of being in Amy's position or Susie's position or his own? Presumably, it would have had to be the case that Bob was just as likely to have been born to Amy's parents or Susie's parents or his actual parents. But this seems very unlikely because the particular “subject-of-experience” or “self” that each of us are is probably biologically linked to our ACTUAL parents' cells. Thus another parent could not give birth to us, even though they might give birth to a subject-of-experience that is qualitatively very similar to us (i.e. same personality, same skin complexion, etc).

Of course, being in someone's position need not be understood in this demanding (though intuitive) way. For example, maybe to be in Amy's position just requires being in her actual location with her actual disease, but not e.g. being of the same sex as her or having her personality. But insofar as we are biologically linked to our actual parents, and parents are spread all over the world, it is highly unlikely that Bob had an equal chance of being in his actual position (i.e. a certain location with a certain disease) and in Amy's position (i.e. a different location with an equally painful disease). Think also about all the biological/personality traits that make a person more or less likely to be in a given position. I, for example, certainly had zero chance of being in an NBA position, given my height. Of course, as we change in various ways, our chances of being in certain positions change too, but even so, it is extremely unlikely that any given person, at any given point in time, had an equal chance of being in any of the positions of a trade-off situation that he is later involved in.

UPDATE (ADDED ON MAR 18): I have added the above two paragraphs to help first-time readers better understand how I understand "being in someone's position" and why I think it is most unlikely that Bob actually had an equal chance of being in Amy's or Susie's position. These two paragraphs have replaced a much briefer paragraph, which you can find at the end of this reply. UPDATE (ADDED ON MAR 21): Also, no need to read past this point since someone (kbog) made me realize that the question I ask in the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.

Also, what would the implications of this objection be for cases where the pains involved in a choice situation are unequal? Presumably, EA favors saving a billion people each from a fairly painful disease over saving a single person from the excruciating pain of being burned alive. But is it clear that someone behind the veil of ignorance would accept this?

-

Original paragraph that was replaced: "Similarly, is it actually the case that each of us had an equal chance of being in any one of our positions? I think the answer is probably no because the particular “subject-of-experience” or “self” that each of us are is probably linked to our parents' cells."

Comment author: brianwang712 14 March 2018 05:22:03AM *  3 points [-]

I do think Bob has an equal chance to be in Amy's or Susie's position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don't know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don't know anything about their own propensities to get disease X or disease Y. Given this state of knowledge, Bob, Susie, and Amy all have the same chance as each other of getting disease X vs. disease Y, and so signing the agreement is rational. Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance.

Regarding your second point, I don't think EA's are necessarily committed to saving a billion people each from a fairly painful disease vs. a single person being burned alive. That would of course depend on how painful the disease is, vs. how painful being burned alive is. To take the extreme cases, if the painful disease were like being burned alive, except just with 1% less suffering, then I think everybody would sign the contract to save the billion people suffering from the painful disease; if the disease were rather just like getting a dust speck in your eye once in your life, then probably everyone would sign the contract to save the one person being burned alive. People's intuitions would start to differ with more middle-of-the-road painful diseases, but I think EA is a big enough tent to accommodate all those intuitions. You don't have to think interpersonal welfare aggregation is exactly the same as intrapersonal welfare aggregation to be an EA, as long as you think there is some reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain.

Comment author: Jeffhe  (EA Profile) 14 March 2018 08:24:46PM *  0 points [-]

It would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in any one's position. Of course, if a person is behind the veil of ignorance and thus lacks relevant knowledge about his/her position, it might SEEM to him/her that he/she has an equal chance of being in any one's position, and he/she might thereby be led to make this mistake and consequently choose to save the greater number.

In any case, what I just said doesn't really matter because you go on to say,

"Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance."

Let us then suppose that Bob, in fact, had no chance of being in either Amy's or Susie's position. Now imagine Bob asks you why you are choosing to save Amy and Susie and giving him no chance at all, and you reply, "Look, Bob, I wish I could help you too, but I can't help everyone. And the reason I'm not giving you any chance is that if you, Amy and Susie were all behind the veil of ignorance and were led to assume that each of you had an equal chance of being in anyone else's position, then all of you (including you, Bob) would have agreed to the principle of saving the greater number in the kind of case you find yourself in now."

Don't you think Bob can reasonably reply, "But Brian, whether or not I make that assumption under the veil of ignorance is irrelevant. The fact of the matter is that I had no chance of being in Amy's or Susie's position. What you should do shouldn't be based on what I would agree to in a condition where I'm imagined as making a false assumption. What you should do should be based on my actual chance of being in Amy's or Susie's position. It should be based on the facts, and the fact is that I NEVER had a chance to be in any of their positions. Look, Brian, I'm really scared. I'm going to suffer a lot if you choose to save Amy and Susie - no less than any one of them would suffer. I can imagine that they must be very scared too, for each of them would suffer just as much as me were you to save me instead. In this case, seeing that we each have the same amount to suffer, shouldn't you give each of us an equal chance of being helped, or at least give me some chance and not 0?"

How would you reply? I personally think that Bob's reply shows the clear limits of this hypothetical contractual approach to determining what we should do in real life.

UPDATE (ADDED ON MAR 21): No need to read past this point since another person (kbog) made me realize that the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.

Regarding the second point, I think what any person would agree to behind the veil of ignorance (even assuming the truth of the assumption that each has an equal chance of being in anybody's position) is highly dependent on their risk-aversion toward the severest potential pain. Towards the extreme ends that you described, people of varying risk-aversion would perhaps be able to form a consensus. But it gets less clear as we consider "middle-of-the-road" cases. As you said, people's intuitions here start to differ (which I would peg to varying degrees of risk-aversion toward the severest potential pain). But the question then is whether this hypothetical contractual approach can serve as a “reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain,” since your intuition might not be the same as that of the person whose fate might rest in your hands. Is it really reasonable to decide his fate using your intuition and not his?

Comment author: kbog  (EA Profile) 19 March 2018 09:24:29PM *  1 point [-]

It would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in any one's position

It's a stipulation of the Original Position, whether you look at Rawls' formulation or Harsanyi's. It's not up for debate.

Comment author: Jeffhe  (EA Profile) 19 March 2018 10:24:06PM *  0 points [-]

Hey kbog,

Thanks for your comment. I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false.

Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.

Comment author: kbog  (EA Profile) 20 March 2018 12:44:34AM *  0 points [-]

I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false.

The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.

Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.

My reply to Bob would be to essentially restate brianwang's original comment, and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument, and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.

Comment author: Jeffhe  (EA Profile) 20 March 2018 01:46:42AM 0 points [-]

1) The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.

The conclusions are rational under the stipulation that each person has an equal chance of being in anybody's position. But it is not actually rational given that the stipulation is false. So you can't just say that the conclusions have a bearing on reality because they are necessarily rational. They are rational under the stipulation, but not when you take into account what is actually the case.

And I don't see how the conclusion is fair to Bob when the conclusion is based on a false stipulation. Bob is a real person. He shouldn't be treated like he had an equal chance of being in Amy's or Susie's position, when he in fact didn't.

2) "My reply to Bob would be to essentially restate brianwang's original comment..."

Sorry, can you quote the part you're referring to?

3) "...and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument."

Can you explain what this "utilitarian principle of indifference argument" is?

4) "and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments."

Please don't distort what I said. I had him say, "The fact of the matter is that I had no chance of being in Amy's or Susie's position.", which is very different from saying that he was not Amy or Susie. If he wasn't Amy or Susie, but actually had an equal chance of being either of them, then I would take the veil of ignorance approach more seriously.

I added the part about him being scared because I wanted it to sound realistic. It is uncharitable to assume that it forms part of my argument.

Comment author: kbog  (EA Profile) 20 March 2018 07:14:40AM 1 point [-]

The conclusions are rational under the stipulation that each person has an equal chance of being in anybody's position. But it is not actually rational given that the stipulation is false.

The argument of both Rawls and Harsanyi is not that it just happens to be rational for everybody to agree to their moral criteria; the argument is that the morally rational choice for society is a universal application of the rule which is egoistically rational for people behind the veil of ignorance. Of course it's not egoistically rational for people to give anything up once they are outside the veil of ignorance, but then they're obviously making unfair decisions, so it's irrelevant to the thought experiment.

And I don't see how the conclusion is fair to Bob when the conclusion is based on a false stipulation

Stipulations can't be true or false - they're stipulations. It's a thought experiment for epistemic purposes.

Bob is a real person. He shouldn't be treated like he had an equal chance of being in Amy's or Susie's position, when he in fact didn't.

The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system.

Also, to be clear, the Original Position argument doesn't say "imagine if Bob had an equal chance of being in Amy's or Susie's position, see how you would treat them, and then treat him that way." If it did, then it would simply not work, because the question of exactly how you should actually treat him would still be undetermined. Instead, the argument says "imagine if Bob had an equal chance of being in Amy's or Susie's position, see what decision rule they would agree to, and then treat them according to that decision rule."

Sorry, can you quote the part you're referring to?

The first paragraph of his first comment.

Can you explain what this "utilitarian principle of indifference argument" is?

This very idea, originally argued by Harsanyi (http://piketty.pse.ens.fr/files/Harsanyi1975.pdf).

Comment author: brianwang712 17 March 2018 07:00:58AM *  1 point [-]

Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; on the contrary, Bob's argument that you should give him a 1/3 chance of being helped even though he wouldn't have signed on to such a decision behind the veil of ignorance, simply because of the actual position he finds himself in, is a sign of defection. This is not to slight Bob here -- of course it's very understandable for him to be afraid and to want a chance of being helped given his position. Rather, it's simply a statement that if everybody argued as Bob did (not just regarding charity donations, but in general), we'd be living in a much unhappier society.

If you're unmoved by this framing, consider this slightly different framing, illustrated by a thought experiment: Let's say that Bob successfully argues his case to the donor, who gives Bob a 1/2 chance of being helped. For the purpose of this experiment, it's best to not specify who in fact gets helped, but rather to just move forward with expected utilities. Assuming that his suffering was worth -1 utility point, consider that he netted 1/2 of an expected utility point from the donor's decision to give everyone an equal chance. (Also assume that all realized painful incidents hereon are worth -1 utility point, and realized positive incidents are worth +1 utility point.)

The next day, Bob gets into a car accident, putting both him and a separate individual (say, Carl) in the hospital. Unfortunately, the hospital is short on staff that day, so the doctors + nurses have to make a decision. They can either spend their time to help Bob and Carl with their car accident injuries, or they can spend their time helping one other individual with a separate yet equally painful affliction, but they cannot do both. They also cannot split their time between the two choices. They have read your blog post on the EA forum and decide to flip a coin. Bob once again gets a 1/2 expected utility point from this decision.

Unfortunately, Bob's hospital stay cost him all his savings. He and his brother Dan (who has also fallen on hard times) go to their mother Karen to ask for a loan to get them back on their feet. Karen, however, notes that her daughter (Bob and Dan's sister) Emily has also just asked for a loan for similar reasons. She cannot give a loan to Bob and Dan and still have enough left over for Emily, and vice versa. Bob and Dan note that if they were to get the loan, they could both split that loan and convert it into +1 utility point each, whereas Emily would require the whole loan to get +1 utility point (Emily was used to a more lavish lifestyle and requires more expensive consumption to become happier). Nevertheless, Karen has read your blog post on the EA forum and decides to flip a coin. Bob nets a 1/2 expected utility point from this decision.

What is the conclusion from this thought experiment? Well, if decisions were made according to your decision rule, providing each individual an equal chance of being helped in each situation, then Bob nets 1/2 + 1/2 + 1/2 = 3/2 expected utility points. Following the more conventional decision rule of always helping more people vs. fewer people if everyone is suffering similarly (a decision rule that would've been agreed upon behind a veil of ignorance), Bob would get 0 (no help from the original donor) + 1 (definite help from the doctors + nurses) + 1 (definite help from Karen) = 2 expected utility points. Under this particular set of circumstances, Bob would've benefitted more from the veil of ignorance approach.
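The arithmetic of this comparison can be sketched in a few lines of code. This is only a toy model of the three scenarios just described; the scenario names, group sizes, and the normalization of "being helped" to +1 expected utility point are my own framing of the numbers above, not anything beyond them:

```python
# Toy model: compare Bob's total expected utility under the two
# decision rules across the three hypothetical scenarios above.
# Each scenario is (name, size of Bob's group, size of the other group);
# being helped is normalized to +1 expected utility point.

scenarios = [
    ("donor",    1, 2),  # Bob alone vs. Amy + Susie
    ("hospital", 2, 1),  # Bob + Carl vs. one other patient
    ("loan",     2, 1),  # Bob + Dan vs. Emily
]

def equal_chance(bob_side, other_side):
    # Flip a fair coin between the two groups: Bob's group is helped
    # with probability 1/2 regardless of group sizes.
    return 0.5

def save_greater_number(bob_side, other_side):
    # Help the larger group outright; flip a coin only on a tie.
    if bob_side > other_side:
        return 1.0
    if bob_side < other_side:
        return 0.0
    return 0.5

for rule in (equal_chance, save_greater_number):
    total = sum(rule(b, o) for _, b, o in scenarios)
    print(f"{rule.__name__}: {total} expected utility points for Bob")
```

Running this reproduces the totals in the paragraph above: 1.5 points under the equal-chance rule and 2.0 under the save-the-greater-number rule, because Bob happens to be in the majority in two of the three scenarios.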

You may reasonably ask whether this set of seemingly fantastical scenarios has been precisely constructed to make my point rather than yours. After all, couldn't Bob have found himself in more situations like the donor case rather than the hospital or loan cases, which would shift the math towards favoring your decision rule? Yes, this is certainly possible, but unlikely. Why? For the simple reason that any given individual is more likely to find themselves in a situation that affects more people than a situation that affects few. In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach.

Based on your post, it seems you are hesitant to aggregate utility over multiple individuals; for the sake of argument here, that's fine. But the thought scenario above doesn't require that at all; just aggregating utility over Bob's own life, you can see how the veil-of-ignorance approach is expected to benefit him more. So if we rewind the tape of Bob's life all the way back to the original donor scenario, where the donor is mulling over whether they want to donate to help Bob or to help Amy + Susie, the donor should consider that in all likelihood Bob's future will be one in which the veil-of-ignorance approach will work out in his favor more so than the everyone-gets-an-equal-chance approach. So if this donor and other donors in similar situations are to commit to one of these two decision rules, they should commit to the veil of ignorance approach; it would help Bob (and Amy, and Susie, and all other beneficiaries of donations) the most in terms of expected well-being.

Another way to put this is that, even if you don't buy that Bob should put himself behind a veil of ignorance because he knows he doesn't have an equal chance of being in Amy's and Susie's situation, and so shouldn't decide to sign a cooperative agreement with Amy and Susie, you should buy that Bob is in effect behind a veil of ignorance regarding his own future, and therefore should sign the contract with Amy and Susie because this would be cooperative with respect to his future selves. And the donor should act in accord with this hypothetical contract.

I would respond to the second point, but this post is already long enough, and I think what I just laid out is more central.

I will also be bowing out of the discussion at this point – not because of anything you said or did, but simply since it took me much more time to write up my thoughts than I would have liked. I did enjoy the discussion and found it useful to lay out my beliefs in a thorough and hopefully clear manner, as well as to read your thoughtful replies. I do hope you decide that EA is not fatally flawed and to stick around the community :)

Comment author: Jeffhe  (EA Profile) 18 March 2018 09:53:22PM *  0 points [-]

Hey Brian,

No worries! I've enjoyed our exchange as well - your latest response is both creative and funny. In particular, when I read "They have read your blog post on the EA forum and decide to flip a coin", I literally laughed out loud (haha). It's been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to.

Btw, for the benefit of first-time readers, I've updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I've also kept in the response what I originally wrote. Just wanted to let you know. Now onto my response.

You write, "In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach."

This would be true if Bob had an equal chance of being in any of the positions of a given future trade off situation. That is, Bob would have a higher chance of being in the majority in any given future trade off situation only if he had an equal chance of being in any of its positions. Importantly, just because there are more positions on the majority side of a trade off situation, that does not automatically mean that Bob has a higher chance of being among the majority. His probability or chance of being in each of the positions is crucial. I think you were implicitly assuming that Bob has an equal chance of being in any of the positions of a future trade off situation because he doesn't know his future. But, as I mentioned in my previous post, it would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in any one's position. So, just because Bob doesn't know anything about his future, it does not mean that he has an equal chance of being in any of the positions in the future trade off situations that he is involved in.

In my original first response to you, I very briefly explained why I think people in general do not have an equal chance of being in anybody's position. I have since expanded that explanation. If what I say there is right, then it is not true that "over a whole lifetime of decisions to be made, Bob [or anyone else] is much more likely to benefit from the veil-of-ignorance-type approach [than the equal-chance approach]."

All the best!

Comment author: Jeffhe  (EA Profile) 22 March 2018 01:32:28AM *  0 points [-]

Hey Brian,

I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify why we should save the greater number is that it would force you to conclude that, in a trade off situation where you can either save one person from an imminent excruciating pain (e.g. being burned alive) or save a second person from the same severe pain PLUS a third person from a very minor pain (e.g. a sore throat), we should save the second and third person and give 0 chance to the first person.

I think it was F. M. Kamm who first raised this objection to the veil-of-ignorance approach in her book Morality, Mortality, Vol. 1 (I haven't actually read the book). Interestingly, kbog - another person I've been talking with on this forum - accepts this result. But I wonder if others like yourself would. Imagine Bob, Amy and Susie were in a trade off situation of the kind I just described, and imagine that Bob never actually had a chance to be in Amy's or Susie's position. In such a situation, do you think you should just save Amy and Susie?

Comment author: brianwang712 23 March 2018 02:39:21PM 0 points [-]

Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It's interesting that you and I have such intuitions about such a case – I see that as in the category of "being so obvious to me that I wouldn't even have to hesitate to choose." But obviously you have different intuitions here.

Part of what I'm confused about is what the positive case is for giving everyone an equal chance. I know what the positive case is for the approach of automatically saving two people vs. one: maximizing aggregate utility, which I see as the most rational, impartial way of doing good. But what's the case for giving everyone an equal chance? What's gained from that? Why prioritize "chances"? I mean, giving Bob a chance when most EAs would probably automatically save Amy and Susie might make Bob feel better in that particular situation, but that seems like a trivial point, and I'm guessing is not the main driver behind your reasoning.

One way of viewing "giving everyone an equal chance" is to give equal priority to different possible worlds. I'll use the original "Bob vs. a million people" example to illustrate. In this example, there's two possible worlds that the donor could create: in one possible world Bob is saved (world A), and in the other possible world a million people are saved (world B). World B is, of course, the world that an EA would create every time. As for world A, well: can we view this possible world as anything but a tragedy? If you flipped a coin and got this outcome, would you not feel that the world is worse off for it? Would you not instantly regret your decision to flip the coin? Or even forget flipping the coin, we can take donor choice out of it; wouldn't you feel that a world where a hurricane ravaged and destroyed an urban community where a million people lived is worse than a world where that same hurricane petered out unexpectedly and only destroyed the home of one unlucky person?

If so, then why give tragic world A any priority at all, when we can just create world B instead? I mean, if you were asked to choose between getting a delicious chocolate milkshake vs. a bee sting, you wouldn't say "I'll take a 50% chance of each, please!" You would just choose the better option. Giving any chance, no matter how small, to the bee sting would be too high. Similarly, giving any priority to tragic world A, even 1 in 10 million, would be too high.

Comment author: Jeffhe  (EA Profile) 23 March 2018 04:35:44PM *  0 points [-]

Hi Brian,

I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy's burning to death plus Susie's sore throat involves more or greater pain than Bob's burning to death. Since you think minimizing aggregate pain (i.e. maximizing aggregate utility) is what we should do, your reason for just saving Amy and Susie is clear.

But importantly, I don't share your belief that Amy's burning to death and Susie's sore throat involves more or greater pain than Bob's burning to death. On this note, I have completely reworked my response to Objection 1 a few days ago to make clear why I don't share this belief, so please read that if you want to know why. On the contrary, I think Amy's burning to death and Susie's sore throat involves just as much pain as Bob's burning to death.

So part of the positive case for giving everyone an equal chance is that the suffering on either side would involve the same LEVEL/AMOUNT of pain (even though the suffering on Amy's and Susie's side would clearly involve more INSTANCES of pain: i.e. 2 vs 1.)

But even if the suffering on Amy's and Susie's side would involve slightly greater pain (as you believe), there is a positive case for giving Bob some chance of being saved, rather than 0. And that is that who suffers matters, for the reason I offered in my response to Objection 2. I think that response provides a very powerful reason for giving Bob at least some chance, and not no chance at all, even if his pain would be less great than Amy's and Susie's together. (My response to Objection 3 makes clear that giving Bob some chance is not in conflict with being impartial, so that response is relevant too if you think doing so is being partial)

At the end of the day, I think one's intuitions are based on one's implicit beliefs and what one implicitly takes into consideration. Thus, if we shared the same implicit beliefs and implicitly took the same things into consideration, then we would share the same intuitions. So one way to view my essay is that it tries to achieve its goal by doing two things:

1) Challenging a belief (e.g. that Amy's burning to death plus Susie's sore throat involves more pain than Bob's burning to death) that in part underlies the differences in intuition between me and people like yourself.

2) Reminding people of another important moral fact that should figure in their implicit thought processes (and thus be reflected in their intuitions): that who suffers matters. This moral fact is often forgotten about, which skews people's intuitions. Once this moral fact is seriously taken into account, I bet people's intuitions would not be the same. Importantly, I bet the vast majority of people (including yourself) would feel that giving Bob some chance of being saved is more appropriate than none, EVEN IF you still thought that Amy's pain and Susie's pain involve slightly more pain than Bob's.

Comment author: Michael_S 13 March 2018 03:30:33AM 7 points [-]

Choice situation 3: We can either save Al, and four others each from a minor headache or Emma from one major headache. Here, I assume you would say that we should save Emma from the major headache

I think you're making a mistaken assumption here about your readers. Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people. I expect the majority of EAs would as well.

Comment author: Jeffhe  (EA Profile) 13 March 2018 11:52:54PM *  -1 points [-]

Hi Michael,

Thanks very much for your response.

UPDATE (ADDED ON MAR 16):

I have shortened the original reply as it was a bit repetitive and made improvements in its clarity. However, it is still not optimal. Thus I have written a new reply for first-time readers to better appreciate my position. You can find the somewhat improved original reply at the end of this new reply (if interested):

To be honest, I just don't get why you would feel the same if the 5 minor headaches were spread across 5 people. Supposing that 5 minor headaches in one person is (experientially) worse than 1 major headache in one person (as you request), consider WHAT MAKES IT THE CASE that the single person who suffers 5 minor headaches is worse off than a person who suffers just 1 major headache, other things being equal.

Well, imagine that we were this person who suffers 5 minor headaches. We suffer one minor headache one day, suffer another minor headache sometime after that, then another after that, etc. By the end of our 5th minor headache, we will have experienced what it’s like to go through 5 minor headaches. After all, we went through 5 minor headaches! Note that the what-it’s-like-of-going-through-5-headaches consists simply in the what-it’s-like-of-going-through-the-first-minor-headache then the what-it’s-like-of-going-through-the-second-minor-headache then the what-it’s-like-of-going-through-the-third-minor-headache, etc. Importantly, the what-it’s-like-of-going-through-5-headaches is NOT whatever we experience right after having our 5th headache (e.g. exhaustion that might set in after going through many headaches or some super painful headache that is the "synthesis" of the intensity of the past 5 minor headaches). It is NOT a singular/continuous feeling like the feeling we have when we're experiencing a normal pain episode. It is simply this: the what-it’s-like of going through one minor headache, then another (sometime later), then another, then another, then another. Nothing more. Nothing less.

Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches even though we in fact have experienced what it’s like to go through 5 minor headaches. As a result, if someone asked us whether we’ve been through more pain due to our minor headaches or more pain through a major headache that, say, we recently experienced, we would likely incorrectly answer the latter.

But, if we did have an accurate appreciation of what it’s like to go through 5 minor headaches, say, because we experienced all 5 minor headaches rather recently, then there would be a clear sense to us that going through them was (experientially) worse than the major headache. The 5 minor headaches would each be “fresh in our mind”, and thus the what-it’s-like-of-going-through-5-minor-headaches would be “fresh in our mind”. And with that what-it’s-like fresh in mind, it would seem clear to us that it caused us more pain than the major headache did.

Now, a headache being “fresh in our mind” does not mean that the headache needs to be so fresh that it is qualitatively the same as experiencing a real headache. Being fresh in our mind just means we have an accurate appreciation/idea of what it felt like, just as we have some accurate idea of what our favorite dish tastes like.

Because we have appreciations of our past pains (to varying degrees of accuracy), we sometimes compare them and have a clear sense that one set of pains is worse than another. But it is not the comparison and the clear sense we have of one set of pains being worse than another that ultimately makes one set of pains worse than another. Rather, it is the other way around. It is the what-it’s-like-of-having-5-minor-headaches that is worse – more painful – than the what-it’s-like-of-having-a-major-headache. And if we have an accurate appreciation of both what-it’s-likes, then we will conclude the same. But, when we don’t, then our own conclusions could be wrong, as in the example provided earlier of a forgotten minor headache.

So, at the end of the day, what makes a person who has 5 minor headaches worse off than a person who has 1 major headache is the fact that he experienced what-it’s-like-of-going-through-5-minor-headaches.

But, in the case where the 5 minor headaches are spread across 5 people, there is no longer the what-it’s-like-of-going-through-5-minor-headaches because each of the 5 headaches is experienced by a different person. As a result, the only what-it’s-like present is the what-it’s-like-of-experiencing-one-minor-headache. Five different people each experience this what-it’s-like, but no one experiences what-it’s-like-of-going-through-5-minor-headaches. Moreover, the what-it’s-like of each of the 5 people cannot be linked to form the what-it’s-like-of-experiencing-5-minor headaches because the 5 people are experientially independent beings.

Now, it's clearly the case that the what-it’s-like-of-going-through-1-minor-headache is not worse than the what-it’s-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, there is nothing present that could be worse than the what-it’s-like-to-go-through-a-major-headache in the case where the 5 minor headaches are spread across 5 people. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus is not) worse (experientially speaking) than one major headache.

Therefore, "conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, ... [one should not] feel exactly the same if it were spread out over 5 people."!

Finally, since 5 headaches, spread across 5 people, are not EXPERIENTIALLY worse than another person's single major headache, the case in which Emma would suffer a major headache is MORALLY worse than the case in which 5 different people would each suffer a minor headache. (If you disagree with this, please see Objection 1.2 and my response to it.) Therefore what I said in choice situation 3 holds.

-

The somewhat improved though sub-optimal original reply:

To be honest, I just don't get why you would feel the same if the pains were spread out over 5 people. I mean, when the 5 minor headaches occur in a single person, then FOR that person, there is a very clear sense in which the 5 headaches are worse to endure than 1 major headache. But once the 5 minor headaches are spread across 5 different people, that clear sense is lost because each of the 5 people only experiences at most 1 minor headache. In each experiencing only 1 minor headache, NOT ONE of the 5 people experiences something worse than a major headache (e.g., what Emma would go through). So none of them would individually be worse off than Emma. Are you really ready to say that the 5 of them together are worse off than Emma? But in what sense? Certainly not in any experiential sense (since none of them individually experiences anything worse than a major headache and they are experientially independent of each other). But then I don't see what other sense there is that matters.

Comment author: Michael_S 14 March 2018 01:55:51AM 4 points [-]

If a small headache is worth 2 points of disutility and a large headache is worth 5, the total amount of pain is worse because 2*5>5. It's a pretty straightforward total utilitarian interpretation. I find it irrelevant whether there's one person who's worse off; the total amount of pain is larger.

I'll also note that I find the concept of personhood to be incoherent in itself, so it really shouldn't matter at all whether it's the same "person". But while I think an incoherent personhood concept is sufficient for saying there's no difference if it's spread out over 5 people, I don't think it's necessary. Simple total utilitarianism gets you there.
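The total-utilitarian arithmetic in the comment above can be made explicit in a short sketch. The disutility numbers are the commenter's hypothetical stipulations, and the person labels are my own illustrative additions:

```python
# Hypothetical disutility scores from the comment above: illustrative
# stipulations, not measurements of real pain.
MINOR = 2  # disutility of one minor headache
MAJOR = 5  # disutility of one major headache

def total_disutility(headaches):
    """Sum disutility over (person, severity) pairs. The person label is
    deliberately ignored -- that is the total-utilitarian move at issue."""
    return sum(severity for _person, severity in headaches)

one_person = [("Emma", MINOR)] * 5                  # 5 minor headaches, one person
five_people = [(f"p{i}", MINOR) for i in range(5)]  # 5 minor headaches, 5 people

# On this accounting the two cases are equally bad, and both are worse
# than a single major headache (2*5 = 10 > 5).
assert total_disutility(one_person) == total_disutility(five_people) == 10
assert total_disutility(one_person) > MAJOR
```

The whole dispute in this thread is over whether the second line of `total_disutility` — discarding the person label before summing — is a legitimate move.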

Comment author: Jeffhe  (EA Profile) 14 March 2018 03:33:17AM *  0 points [-]

I assume we agree that we determine the points of disutility of the minor and major headache by how they each feel to someone. Since the major headache hurts more, it's worth more points (5 in this case).

But, were a single person to suffer all 5 minor headaches, he would end up having felt what it is like to go through 5 headaches - a feeling that would make him say things like "Going through those 5 minor headaches is worse/more painful than a major headache" or "There was more/greater/larger pain in going through those 5 minor headaches than a major headache".

We find these statements intelligible. But that is because we're at a point in life where we too have felt what it is like to go through multiple minor pains, and we too can consider (i.e. hold before our mind) a major pain in isolation, and compare these feelings: the what-it's-like of going through multiple minor pains vs the what-it's-like of going through a major pain.

But once the situation is that the 5 minor headaches are spread across 5 people, there is no longer the what-it's-like-of-going-through-5-minor-headaches, just 5 independent what-it's-likes-of-going-through-1-minor-headache. As a result, in this situation, when you say "the total amount of pain [involved in 5 minor headaches] is worse [than one major headache]", or that "the total amount of pain [involved in 5 minor headaches] is larger [than one major headache]", there is nothing to support their intelligibility.

So, I honestly don't understand these statements. Sure, you can use numbers to show that 10 > 5, but there is no reality that that maps on to (i.e. describes). I worry that representing pain in numbers is extremely misleading in this way.

Regarding personhood, I think my position just requires me to be committed to there being a single subject-of-experience (is that what you meant by person?) who extends through time to the extent that it can be the subject of more than one pain episode. I must admit I know very little about the topic of personhood. On that note, any further comments that help your position and question mine would be helpful. Thanks.

Comment author: Michael_S 14 March 2018 01:31:24PM 1 point [-]

I think this is confusing means of estimation with actual utils. You can estimate that 5 headaches are worse than one by asking someone to compare five headaches vs. one. You could also produce an estimate by just asking someone who has received one small headache and one large headache whether they would rather receive 5 more small headaches or one more large headache. But there's no reason you can't apply these estimates more broadly. There's real pain behind the estimates that can be added up.

Comment author: Jeffhe  (EA Profile) 14 March 2018 07:15:53PM *  0 points [-]

I agree with the first half of what you said, but I don't agree that "there's no reason you can't apply these estimates more broadly" (e.g. to a situation where 5 minor headaches are spread across 5 people).

Sure, a person who has felt only one minor headache and one major headache can say "If put to the choice, I think I'd rather receive another major headache than 5 more minor headaches", but he says this as a result of imagining roughly what it would be like for him to go through 5 of this sort of minor headache and comparing that to what it was like for him to go through the one major headache.

Importantly, what is supporting the intelligibility of his statement is STILL the what-it's-like-of-going-through-5-minor-headaches, except that this time (unlike in my previous reply), the what-it's-like-of-going-through-5-minor-headaches is imagined rather than actual.

But in the situation where the 5 minor headaches are spread across 5 people, there isn't a what-it's-like-of-going-through-5-minor-headaches, imagined or actual, to support the intelligibility of the claim that 5 minor headaches (spread across 5 people) are worse or more painful than a major headache. What there are instead are five independent what-it's-likes-of-going-through-1-minor-headache, since

1) the 5 people are obviously experientially independent of each other (i.e. each of them can only experience their own pain and no one else's), and

2) each of the 5 people experiences just one minor headache.

But these five independent what-it's-likes can't support the intelligibility of the above claim. None of these what-it's-likes are individually worse or more painful than the major headache. And they cannot collectively be worse or more painful than the major headache because they are experientially independent of each other.

The what-it's-like-of-going-through-5-minor-headaches is importantly different from five independent what-it's-like-of-going-through-1-minor-headache, and only the former can support the intelligibility of a claim like 5 minor headaches are worse than a major headache. But since the former what-it's-like can only occur in a single subject-of-experience, that means that, more specifically, the former what-it's-like can only support the intelligibility of a claim like 5 minor headaches, all had by one person, is worse than a major headache. It cannot support a claim like 5 minor headaches, spread across 5 people, are worse than a major headache.

Comment author: Michael_S 15 March 2018 04:17:22AM *  2 points [-]

It's the same 5 headaches. It doesn't matter if you're imagining one person going through it on five days or imagining five different people going through it on one day. You can still imagine 5 headaches. You can imagine what it would be like to, say, live the lives of 5 different people for one day with and without a minor headache. Just as you can imagine living the life of one person for 5 days with and without a headache. The connection to an individual is arbitrary and unnecessary.

Now this goes into the meaninglessness of personhood as a concept, but what would even count as the individual in your view? For simplicity, let's say 2 modest headaches in one person are worse than one major headache. What if between the two headaches, the person gets a major brain injury and their personality is completely altered (as has happened in real life)? Let's say they also have no memory of their former self. Are they no longer the same person? Under your view, is it no longer possible to say that the two modest headaches are worse than the major headache? If it still is, why is it possible after this radical change in personality with no memory continuity but impossible between two different people?

Comment author: Jeffhe  (EA Profile) 16 March 2018 01:46:55AM *  0 points [-]

If I'm understanding you correctly, you essentially deny that there is a metaphysical difference (i.e. a REAL difference) between

A. One subject-of-experience experiencing 5 headaches over 5 days (say, one headache per day), and

B. Five independent subjects-of-experience each experiencing 1 headache over 5 days (say, each subject has their 1 headache on a different day, such that on any given day, only one of them has a headache).

And you deny this BECAUSE you think that, in case A for example, there simply is no fact of the matter as to how many subjects-of-experience there were over those 5 days IN THE FIRST PLACE, and NOT because you think one subject-of-experience going through 5 headaches IS IDENTICAL to five independent subjects-of-experience each going through 1 headache.

Also, you are not simply saying that we don't KNOW how many subjects of experience there were over those 5 days in case A, but that there actually isn't an answer to how many there were. The indeterminacy is "built into the world" so to speak, and not just existing in our state of mind.

You therefore think it is arbitrary to say that one subject-of-experience experienced all 5 headaches over the 5 days or that 5 subjects-of-experience each experienced 1 headache over the 5 days.

But importantly, IF there is a fact of the matter as to how many subjects-of-experience there are in any given time period, you would NOT continue to think that there is no metaphysical difference between case A and B. And this is because you agree that one subject-of-experience going through 5 headaches is not identical to five independent subjects-of-experience each going through 1 headache. You would say, "Obviously they are not identical. The problem, however, is that - in case A, for example - there simply is no fact of the matter as to how many subjects-of-experience there were over those 5 days IN THE FIRST PLACE, so saying that one subject-of-experience experienced all 5 headaches is arbitrary."

I hope that was an accurate portrayal of your view.

Let us then try to build some consensus from the ground up:

First, there is surely experience. That there is experience, whether it be pain experience or color experience or whatever, is the most obvious truth there is. I assume you don't deny that. Ok, so we agree that

1) there is experience.

Second, well, each experience is clearly SOMEONE'S experience - it is experience FOR SOMEONE. Suppose there is a pain experience - a headache. Someone IN PARTICULAR experiences that headache. Let's suppose you're not experiencing it and that I am. Then I am that particular someone. I assume you don't deny any of that. Ok, so we agree that

2) there is not just experience, but that for every experience, there is also a particular subject-of-experience who experiences it, whether or not a particular subject-of-experience can also extend through time and be the subject of multiple experiences.

That's all the consensus building I want to do right now.

Now, let me report something about myself (for the sake of argument, just assume it's true): I felt 5 headaches over the past 5 days. Here (just as in case A) you would say that there is no fact of the matter whether one subject-of-experience felt those 5 headaches or five different subjects-of-experience felt those 5 headaches, even though the “I” in “I just felt 5 headaches” makes it SOUND LIKE there was only one subject-of-experience.

If I then say that, “no no, there was just one subject-of-experience who felt those 5 headaches”, your question (and challenge) to me is what my criterion is for saying that there was just one subject-of-experience and not five. More specifically, you ask whether memory-continuity and personality-continuity are necessary conditions for being the same subject-of-experience over the 5 days, “same” in the sense of being numerically identical and not qualitatively identical.

Here’s my answer:

I’m sure philosophers have tried to come up with various criteria. Presumably that’s what philosophers engaged in the field called “personal identity” in part do, though I don’t know much about that field. Anyways, presumably they are all trying to come up with a criterion that would neatly accommodate all our intuitive judgements in specific (perhaps imagined) cases concerning personal identity (e.g., split-brain cases). A criterion that succeeded in doing that would presumably be regarded as the “true” or “correct” criterion. In other words, the ONLY way philosophers have for testing their criteria is presumably to see if their criteria would yield results that accord with our intuitions. Moreover, if the “correct” criterion is found, philosophers are presumably going to say that it is correct not merely in the sense that it accurately describes the implicit/sub-conscious assumptions that we hold about personal identity which have led us to have the intuitions we have. Indeed, presumably, they are going to say that the criterion is correct in the stronger sense that it accurately describes the conditions under which a subject-of-experience IN REALITY is the same numerical subject over time. Insofar as they would say this, philosophers are assuming that our intuitive judgements represent the truth (i.e. the way things actually are). For only if the intuitions represented the truth would it be the case that a criterion that accommodated all of them would thereby be a criterion that described reality.

But then the question is, do our intuitions represent the truth? I don’t know, and so even if I were able to give you a criterion that accommodated all our intuitions and according to which there was only one subject-of-experience who experienced all 5 headaches over those 5 days, I would not have, in any convincing way, demonstrated that there was in fact only one subject-of-experience who experienced all 5 headaches over those 5 days, instead of 5 independent subjects-of-experience who each experienced 1 headache. For you can always ask what reasons I have for taking our intuitions to represent the truth. I don’t think there is a convincing answer. So I don’t think presenting you with a criterion will ultimately satisfy you, at least I don’t think it should.

Of course, that’s not to say that we wouldn’t know what would have to be the case for it to be true that one subject-of-experience experienced all 5 headaches over the 5 days: That would be true just in case one subject-of-experience IN FACT experienced all 5 headaches over the 5 days. We just don’t know if that is the case. And I have just argued above that providing a criterion that accords with all our intuitions won’t really help us to know if that is the case either.

So, what reason can I give for believing that there really was just one subject-of-experience who experienced all 5 headaches over those 5 days? Well, what reason can YOU give for saying that there isn’t a fact of the matter as to whether there was one subject-of-experience who experienced all 5 headaches over those 5 days or five independent subjects-of-experience who each experienced only 1 headache over those 5 days?

Are we at a standstill? We would be if neither of us can provide reasons for our views. Your view attributes a fundamental indeterminacy to the world itself, and I wonder what reason you have for such a view.

I have a reason for believing my view. But this reply is already very long, so before I describe my reason, I would just like some confirmation that we’re on the same page. Thanks.

P.S. I'll just add (as a more direct response to the first paragraph of your response): Yes, I can imagine 5 headaches by either imagining myself in the shoes of one person for 5 days or imagining myself in the shoes of 5 different people for one day each. In both cases, I imagine 5 headaches. True. BUT. When I imagine myself in the shoes of 5 different people for one day each, what is going on is that one subject-of-experience (i.e. me), takes on the independent what-it's-likes (i.e. experiences) associated with the 5 different people, and IN DOING SO, LINKS THESE what-it's-likes - which in reality are experientially independent of each other - TOGETHER IN ME. So ultimately, when I imagine myself in the shoes of 5 different people for one day each, I am, in effect, imagining what it's like to go through 5 headaches. But in reality, there is no such what-it's-like among the 5 different people. The only what-it's-like present is the what-it's-like-of-going-through-1-headache, which each of the 5 different people would experience.

In essence, what I am saying is that when you or I imagine ourselves in the shoes of 5 different people for a day each, we do end up with the (imagined) what-it's-like-of-going-through-5-headaches, but there is no such what-it's-like in reality among those different 5 people. But there needs to be in order for their 5 independent headaches to be worse than a major headache. I hope that made sense. If it didn't, then I guess you can ignore these last two paragraphs.

P.P.S. As a more direct response to your questions in the second paragraph of your response: it would still be possible IF the person is still the same subject-of-experience after the radical change in personality and loss of memory. It is impossible between two different people because they are numerically different subjects-of-experience.

Comment author: Michael_S 17 March 2018 02:21:08AM *  0 points [-]

I'd say I'm making two arguments:

1) There is no distinct personal identity; rather it's a continuum. The you today is different than the you yesterday. The you today is also different from the me today. These differences are matters of degree. I don't think there is clearly a "subject of experience" that exists across time. There are too many cases (e.g. brain injuries that change personality) that the single consciousness theory can't account for.

2) Even if I agreed that there was a distinct difference in kind that represented a consistent person, I don't think it's relevant to the moral accounting of experiences. I.e. I don't see why it matters whether experiences are "independent" or not. They're real experiences of pain.

Comment author: Jeffhe  (EA Profile) 17 March 2018 03:31:30AM *  0 points [-]

1) I agree that the me today is different from the me yesterday, but I would say this is a qualitative difference, not a numerical difference. I am still the numerically same subject-of-experience as yesterday's me, even though I may be qualitatively different in various physical and psychological ways from yesterday's me. I also agree that the me today is different from the you today, but here I would say that the difference is not merely qualitative, but numerical too. You and I are numerically different subjects-of-experience, not just qualitatively different.

Moreover, I would agree that our qualitative differences are a matter of degrees and not of kind. I am not a chair and you a subject-of-experience. We are both embodied subjects-of-experience (i.e. of that kind), but we differ to various degrees: you might be taller or lighter-skinned, etc.

I thus agreed with all your premises and have shown that they can be compatible with the existence of a subject-of-experience that extends through time. So I don't quite see a convincing argument for the lack of the existence of a subject-of-experience that extends through time.

2) So here you're granting me the existence of a subject-of-experience that extends through time, but you're saying that it makes no moral difference whether one subject-of-experience suffers 5 minor headaches or 5 numerically different subjects-of-experience each experience 1 minor headache, and that therefore, we should just focus on the number of headaches.

Well, as I tried to explain in previous replies, when there is one subject-of-experience who extends through time, it is possible for him to experience what it's like of going through 5 minor headaches, since after all, he experiences all 5 minor headaches (whether he remembers experiencing them or not). Moreover, it is ONLY the what-it's-like-of-going-through-5-minor-headaches that can plausibly be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

In contrast, when the 5 minor headaches are spread across 5 people, each of the 5 people experiences only what it's like to go through 1 minor headache. Moreover, the what-it's-like-of-going-through-1-headache CANNOT plausibly be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

Thus it matters whether the 5 headaches are experienced all by a single subject-of-experience (i.e. experienced together) or spread across five experientially independent subject-of-experiences (i.e. experienced independently). It matters because, again, ONLY when the 5 headaches are experienced together can there be the what-it's-like-of-going-through-5-minor-headaches and ONLY that can plausibly be said to be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

P.S. I have extensively edited my very first reply to you, so that it is more clear and detailed for first-time readers. I would recommend giving it a read if you have the time. Thanks.

Comment author: kbog  (EA Profile) 20 March 2018 08:02:36AM *  0 points [-]

To be honest, I just don't get why you would feel the same if the 5 minor headaches were spread across 5 people

Because I don't have any reason to feel different. Imagine if I said, "5 headaches among tall people would be better than 5 headaches among short people." And then you said, "no, it's the same either way. Height is irrelevant." And then I replied, "I just don't get why you would feel the same if the people are tall or short!" In that case, clearly I wouldn't be giving you a response that carries any weight. If you want to show that the cases are different in a relevant way, then you need to spell it out. In the absence of reasons to say that there is a difference, we assume by default that they're similar.

Now, it's clearly the case that the what-it’s-like-of-going-through-1-minor-headache is not worse than the what-it’s-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, there is nothing present that could be worse than the what-it’s-like-to-go-through-a-major-headache in the case where the 5 minor headaches are spread across 5 people. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus is not) worse (experientially speaking) than one major headache.

The third sentence does not follow from the second. This is like saying "there is nothing present in a Toyota Corolla that could make it weigh more than a Ford F-150, therefore five Toyota Corollas cannot weigh more than a Ford F-150." Just because there is no one element in a set of events that is worse than a bad thing doesn't mean that the set of events is not worse than the bad thing. There are lots of events where badness increases with composition, even without using aggregative utilitarian logic. E.g.: it is okay to have sex with Michelle, and it is okay to marry Tiffany, but it is not okay to do both.

Comment author: Jeffhe  (EA Profile) 20 March 2018 07:00:50PM *  0 points [-]

1) "Because I don't have any reason to feel different."

Ok, well, that comes as a surprise to me. In any case, I hope after reading my first reply to Michael_S, you at least sort of see how it could be possible that someone like me would feel surprised by that, even if you don't agree with my reasoning. In other words, I hope you at least sort of see how it could be possible that someone who would clearly agree with you that, say, 5 minor headaches all had by 1 tall person is experientially just as bad as 5 minor headaches all had by 1 short person, might still disagree with you that 5 minor headaches all had by 1 person is experientially just as bad as 5 minor headaches spread across 5 people.

2) "If you want to show that the cases are different in a relevant way, then you need to spell it out. In the absence of reasons to say that there is a difference, we assume by default that they're similar."

That's what my first reply to Michael_S, in effect, aimed to do.

3) "The third sentence does not follow from the second. This is like saying "there is nothing present in a Toyota Corolla that could make it weigh more than a Ford F-150, therefore five Toyota Corollas cannot weigh more than a Ford F-150." Just because there is no one element in a set of events that is worse than a bad thing doesn't mean that the set of events is not worse than the bad thing. There are lots of events where badness increases with composition, even without using aggregative utilitarian logic. E.g.: it is okay to have sex with Michelle, and it is okay to marry Tiffany, but it is not okay to do both."

Your reductio-by-analogy (I made that phrase up) doesn't work, because your analogy is relevantly different. In your analogy, we are dealing with the relation of _ being heavier than _, whereas I'm dealing with the relation of _ being experientially worse than _. These relations are very different in nature: one is quantitative in nature, the other is experiential in nature. You might insist that this is not a relevant difference, but I think it is when one really slows down to think about exactly what it is that makes 5 minor headaches experientially worse than a major headache.

As I mentioned, the answer is the what-it's-like-of-going-through-5-minor-headaches. That is, the what-it's-like of going through one minor headache, then another (sometime later), then another, then another, then another. It's THAT SPECIFIC WHAT-IT'S-LIKE that can plausibly be experientially worse than a major headache. It's THAT SPECIFIC WHAT-IT'S-LIKE that can plausibly be "shittier" or "suckier" than a major headache.

However, when the 5 minor headaches are spread across 5 people, there are just 5 what-it's-likes-of-going-through-1-minor-headache, and no single what-it's-like-of-going-through-5-minor-headaches. Why? Because each of the minor headaches in this situation would be felt by a numerically distinct subject-of-experience (i.e. one of the 5 people), and numerically different subjects-of-experience cannot have their experiences "linked". Otherwise, they would not be numerically different.

Therefore, only when all 5 minor headaches are had by one subject-of-experience (i.e. one person) can they be experientially worse than one major headache. And therefore, 5 minor headaches, all had by one person, are experientially worse than 5 minor headaches, spread across 5 people.

I think what I just said above shows clearly how the relation of _ being experientially worse than _ is impacted by whether the 5 minor headaches are all had by one person or spread across 5 different people. Whereas the relation of _ being heavier than _ is not similarly affected. So that is the relevant difference.

I hope you can really consider what I'm saying here. Thanks.

Comment author: kbog  (EA Profile) 20 March 2018 09:33:55PM *  0 points [-]

I hope you at least sort of see how it could be possible that someone who would clearly agree with you that, say, 5 minor headaches all had by 1 tall person is experientially just as bad as 5 minor headaches all had by 1 short person, might still disagree with you that 5 minor headaches all had by 1 person is experientially just as bad as 5 minor headaches spread across 5 people.

Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time.

In your analogy, we are dealing with the relation of _ being heavier than _, whereas I'm dealing with the relation of _ being experientially worse than _. These relations are very different in nature: one is quantitative in nature, the other is experiential in nature.

There are two rooms, painted bright orange inside. One person goes into the first room for five minutes, five people go into the second for one minute. If we define orange-perception as the phenomenon of one conscious mind's perception of the color orange, the amount of orange-perception for the group is the same as the amount of orange-perception for the one person.

Something being experiential doesn't imply that it is not quantitative. We can clearly quantify experiences in many ways, e.g. I had two dreams, I was awake for thirty seconds, etc. Or me and my friends each saw one bird, and so on.

However, when the 5 minor headaches are spread across 5 people, there is just 5 what-it's-likes-of-going-through-1-minor-headache, and no single what-it's-like-of-going-through-5-minor-headaches.

Yes, but the question here is whether 5 what-it's-likes-of-going-through-1-minor-headache are 5x worse than 1 minor headache. We can believe this moral claim without believing that the phenomenon of 5 separate headaches is phenomenally equivalent to 1 experience of 5 headaches. There are lots of cases where A is morally equivalent to B even though A and B are physically or phenomenally different.

Comment author: Jeffhe  (EA Profile) 20 March 2018 10:39:04PM *  0 points [-]

1) "Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time."

I disagree. I was precisely trying to guard against such thoughts by enriching my first reply to Michael_S with a case of forgetfulness. I wrote, "Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches EVEN THOUGH we in fact have experienced what it’s like to go through 5 minor headaches." (I added the caps here for emphasis)

The point I was trying to make in that passage is that if one person (i.e. one subject-of-experience) experienced all 5 minor headaches, then whether he remembers them or not, the fact of the matter is that HE felt all of them, and insofar as he has, he is experientially worse off than someone who only felt a major headache. Of course, if you asked him at the end of his 5th minor headache whether HE thinks he's had it worse than someone with a major headache, he may say "no" because, say, he has forgotten about some of the minor headaches he's had. But that does NOT MEAN that, IN FACT, he did not have it worse. After all, the what-it's-like-of-going-through-5-minor-headaches is experientially worse than one major headache, and HE has experienced the former, whether he remembers it or not.

So, if my memory is wiped each time after getting tortured, of course it still matters how many times I'm tortured. Because I WILL have the experience of being tortured a second time, whether or not I VIEW that experience as such.

2) "There are two rooms, painted bright orange inside. One person goes into the first room for five minutes, five people go into the second for one minute. If we define orange-perception as the phenomenon of one conscious mind's perception of the color orange, the amount of orange-perception for the group is the same as the amount of orange-perception for the one person.

Something being experiential doesn't imply that it is not quantitative. We can clearly quantify experiences in many ways, e.g. I had two dreams, I was awake for thirty seconds, etc. Or me and my friends each saw one bird, and so on."

My point wasn't that we can't quantify experience in various ways, but that relations of an experiential nature, like the relation of X being experientially worse than Y, behave in relevantly different ways from relations of a quantitative - maybe 'non-experiential' might have been a better word - nature, like the relation of X being heavier than Y. As I tried to explain, the "experientially-worse-than" relation is impacted by whether X (e.g. 5 minor headaches) is spread across 5 people or all had by one person, whereas the "heavier-than" relation is not impacted by whether X (e.g. 100 tons) is spread across 5 objects or true of 1 object.

3) "Yes, but the question here is whether 5 what-it's-likes-of-going-through-1-minor-headache are 5x worse than 1 minor headache. We can believe this moral claim without believing that the phenomenon of 5 separate headaches is phenomenally equivalent to 1 experience of 5 headaches. There are lots of cases where A is morally equivalent to B even though A and B are physically or phenomenally different."

The moral question here is whether a case in which 5 minor headaches are all had by one person is morally equivalent to (i.e. morally just as bad as) a case in which 5 minor headaches are spread across 5 people. You think it is, and I think it isn't. Instead, I think the former case is morally worse than the latter case.

And the ONLY reason why I think this is because I think 5 headaches all had by one person is experientially worse than 5 headaches spread across 5 people. As I said before, I think experience is the only morally relevant factor.

Since I don't think anything other than experience matters, I would deny the existence of cases in which A and B are morally just as bad/good where A and B differ phenomenally.

Comment author: kbog  (EA Profile) 24 March 2018 09:11:42PM *  0 points [-]

I disagree. I was precisely trying to guard against such thoughts by enriching my first reply to Michael_S with a case of forgetfulness. I wrote, "Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches EVEN THOUGH we in fact have experienced what it’s like to go through 5 minor headaches." (I added the caps here for emphasis)

But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate.

My point wasn't that we can't quantify experience in various ways, but that relations of an experiential nature, like the relation of X being experientially worse than Y, behave in relevantly different ways from relations of a quantitative - maybe 'non-experiential' might have been a better word - nature, like the relation of X being heavier than Y. As I tried to explain, the "experientially-worse-than" relation is impacted by whether X (e.g. 5 minor headaches) is spread across 5 people or all had by one person, whereas the "heavier-than" relation is not impacted by whether X (e.g. 100 tons) is spread across 5 objects or true of 1 object.

Of course you can define a relation to have that property, but merely defining it that way gives us no reason to think that it should be the focus of our moral concern.

If I were to define a relation to have the property of being the target of our moral concern, it wouldn't be impacted by how it were spread across multiple people.

As I said before, I think experience is the only morally relevant factor.

Well, so do I. The point is that the mere fact that 5 headaches in one person is worse for one person doesn't necessarily imply that it is worse overall than 5 headaches among 5 people.

Comment author: Jeffhe  (EA Profile) 27 March 2018 08:10:28PM *  0 points [-]

Hi kbog, glad to hear back from you.

1) "But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate."

I don't quite understand how this is a response to what I said, so let me retrace some things:

You first claimed that if I believed that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people, then I would be committed to "believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time" and this is a problem.

I replied that it does matter how many times I get tortured because even if my memory is wiped each time, it is still ME (as opposed to a numerically different subject-of-experience, e.g. you) who would experience torture again and again. If my memory is wiped, I will incorrectly VIEW each additional episode of torture as the first one I've ever experienced, but it would not BE the first one I've ever experienced. I would still experience what-it's-like-of-going-through-x-number-of-torture-episodes even if after each episode, my memory was wiped. Since it's the what-it's-like-of-going-through-x-number-of-torture-episodes (and not my memory of it) that is experientially worse than something else, and since X is morally worse than Y when X is experientially worse (i.e. involves more pain) than Y, therefore, it does matter how many times I'm tortured irrespective of my memory.

Now, the fact that you said that I "will never have the experience of being tortured a second time" suggests that you think that memory-continuity is necessary to being the numerically same subject-of-experience (i.e. person). If this were true, then every time a person's memory is wiped, a numerically different person comes into existence and so no person would experience what-it's-like-of-going-through-2-torture-episodes if a memory wipe happens after each torture episode. But I don't think memory-continuity is necessary to being the numerically same subject-of-experience. I think a subject-of-experience at time t1 (call this subject "S1") and a subject-of-experience at some later time t2 (call this subject "S2") are numerically identical (though perhaps qualitatively different) just in case an experience at t1 (call this experience E1) and an experience at t2 (call this experience E2) are both felt by S1. In other words, I think S1 = S2 iff E1 and E2 are both felt by S1. S1 may have forgotten about E1 by t2 (due to a memory wipe), but that doesn't mean it wasn't S1 who also felt E2.

In a nutshell, memory (and thus how accurately we appreciate our past pains) is not morally relevant since it does not prevent a person from actually experiencing what-it's-like-of-going-through-multiple-pains, and it is this latter thing that is morally relevant. So I don't quite see the point of your latest reply.

2) "Of course you can define a relation to have that property, but merely defining it that way gives us no reason to think that it should be the focus of our moral concern.

If I were to define a relation to have the property of being the target of our moral concern, it wouldn't be impacted by how it were spread across multiple people."

I am not simply defining a relation here. We both agree that experience is morally relevant and that therefore pain is morally bad, and that therefore an outcome that involves more pain than another outcome is morally worse than the latter outcome. That is, we agree X is morally worse than Y iff X involves more pain than Y. But how are we to understand the phrase 'involves more pain than'? I understand it as meaning "is experientially worse than", which is why I ultimately think that 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 people. You seem to agree with me that the former is experientially worse than the latter, yet you deny that the former is morally worse than the latter. Thus, you have to offer another plausible account of the phrase 'involves more pain than' on which 5 minor headaches all had by one person involves just as much pain as 5 minor headaches spread across 5 people. IMPORTANTLY, this account has to be one according to which 5 minor headaches all had by one person can involve more pain than 1 major headache and not merely in an experientially worse sense. Can you offer such an account?

I mean, how can 5 minor headaches all had by one person involve more pain than 1 major headache if not in an experientially worse sense? You might try to use math to help illustrate your point of view. You might say, well suppose each minor headache represents a pain of a magnitude of 2, and the major headache represents a pain of a magnitude of 6. You might further clarify that the 2 doesn't just signify the INTENSITY of the minor pain since how shitty a pain episode is doesn't just depend on its intensity but also on its duration. Thus, you might clarify that the 2 represents the overall shittiness of the pain - the disutility of it, so to speak. Next, you might say that insofar as there are 5 such minor headaches, they represent 10 disutility, and 10 is bigger than 6. Therefore 5 minor headaches all had by one person involves more pain than a major headache.

But then I would ask you: what is the reality underpinning the number 10? Is it not some overall shittiness that is experientially worse than the overall shittiness from experiencing one major headache? Is it not the overall shittiness of what-it's-like-of-going-through-5-minor-headaches? If it is, then we haven't departed from my "is experientially worse than" interpretation of 'involves more pain than'. If it isn't, then what is it?

To see the problem even more clearly, consider when the 5 minor headaches are spread across 5 people. Here again, you will say that the 5 minor headaches represent 10 disutility and 10 is greater than 6, therefore 5 minor headaches spread across 5 people involve more pain than one major headache. This conclusion is easy to arrive at when one just focuses on the math: 2 x 5 = 10 and 10 > 6. But we must not forget to ask ourselves what the "10" might signify in reality. Is it meant to signify an overall shittiness that is shittier than the experience of 1 major headache? Ok, but where in reality is this overall shittiness? I certainly don't see it. I don't see the presence of this overall shittiness because there is no experience of it.

(Thus, I find using math to show that 5 minor headaches spread across 5 people involve more pain than 1 major headache very misleading: yes, mathematically, you can easily portray it. But, at bottom, the '10' maps onto nothing in reality.)
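[Editorial aside: the two bookkeeping rules being contrasted in this exchange can be put side by side in a few lines of code. This is only an illustrative sketch of the arithmetic, not either commenter's considered position; the scores 2 and 6 are the made-up disutility numbers from the passage above.]

```python
# Hypothetical disutility scores from the discussion: each minor headache
# counts 2, the major headache 6 (made-up numbers, not measurements).
MINOR, MAJOR = 2, 6

def summed_disutility(per_person_pains):
    # kbog's reading: "plain old cardinal utility" -- sum every person's pain.
    return sum(sum(pains) for pains in per_person_pains)

def worst_single_experience(per_person_pains):
    # Jeffhe's reading: pains only aggregate within one subject-of-experience,
    # so compare the worst single what-it's-like across people.
    return max(sum(pains) for pains in per_person_pains)

one_person_five_minor = [[MINOR] * 5]   # 5 headaches, 1 subject
five_people_one_minor = [[MINOR]] * 5   # 5 headaches, 5 subjects
one_person_one_major = [[MAJOR]]        # 1 major headache, 1 subject

assert summed_disutility(one_person_five_minor) == 10        # 10 > 6
assert summed_disutility(five_people_one_minor) == 10        # also 10 > 6
assert worst_single_experience(one_person_five_minor) == 10  # 10 > 6
assert worst_single_experience(five_people_one_minor) == 2   # 2 < 6
assert summed_disutility(one_person_one_major) == 6
```

On the summed metric the two distributions of 5 minor headaches are indistinguishable, and both exceed the major headache; on the worst-single-experience metric only the one-person case does. That divergence is exactly what the thread is arguing over.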

So in conclusion, I don't see any other plausible interpretation of 'involves more pain than' than "is experientially worse than". If that is the case, then not only is it the case that I haven't arbitrarily defined a relation, but it's also the case that this relation is the only plausible morally relevant relation.

3) "Well, so do I. The point is that the mere fact that 5 headaches in one person is worse for one person doesn't necessarily imply that it is worse overall than 5 headaches among 5 people."

We need to distinguish between experientially worse and morally worse. You agree that 5 headaches in one person is experientially worse than 5 headaches spread across 5 people, yet you insist that that doesn't mean the former is morally worse than the latter. Well, again, this requires you to show that there is another plausible interpretation of 'involves more pain than' on which the former involves just as much pain as the latter.

Also, I should note that I was too hasty when I said that I think experience is the ONLY morally relevant factor. Actually, I also think who suffers is a morally relevant factor, but that doesn't affect our discussion here.

Comment author: kbog  (EA Profile) 28 March 2018 01:51:58AM 0 points [-]

In a nutshell, memory (and thus how accurate we appreciate our past pains) is not morally relevant since it does not prevent a person from actually experiencing what-it's-like-of-going-through-multiple-pains, and it is this latter thing that is morally relevant. So I don't quite see the point of your latest reply.

The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people. There isn't any morally relevant difference between these experiences, as the mere fact that the latter happens to be split among five people isn't morally relevant. So we should suppose that they are morally similar.

But how are we to understand the phrase 'involves more pain than'?

You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers.

Thus, you have to offer another plausible account of the phrase 'involves more pain than' on which 5 minor headaches all had by one person involves just as much pain as 5 minor headaches spread across 5 people.

It's just plain old cardinal utility: the sum of the amount of pain experienced by each person.

IMPORTANTLY, this account has to be one according to which 5 minor headaches all had by one person can involve more pain than 1 major headache and not merely in an experientially worse sense

Why?

I mean, how can 5 minor headaches all had by one person involve more pain than 1 major headache if not in an experientially worse sense?

In the exact same way that you think they can.

then we haven't departed from my "is experientially worse than" interpretation of 'involves more pain than'.

Correct, we haven't, because we're not yet doing any interpersonal comparisons.

But we must not forget to ask ourselves what the "10" might signify in reality. Is it meant to signify an overall shittiness that is shittier than the experience of 1 major headache? Ok, but where in reality is this overall shittiness?

It is distributed - 20% of it is in each of the 5 people who are in pain.

Comment author: Jeffhe  (EA Profile) 28 March 2018 03:46:43AM *  0 points [-]

1) "The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people."

One subject-of-experience having one headache five times = the experience of what-it's-like-of-going-through-5-headaches. (Note that the symbol is an equal sign in case it's hard to see.)

Five headaches among five people = 5 experientially independent experiences of what-it's-like-of-going-through-1-headache. (Note the 5 experiences are experientially independent of each other because each is felt by a numerically different subject-of-experience, rather than all by one subject-of-experience.)

The single subject-of-experience therefore does not "ha[ve] the same experiences as five headaches among five people."

2) "You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers."

Ok, and after adding up the numbers, what does the final resulting number refer to in reality? And in what sense does the referent (i.e. the thing referred to) involve more pain than a major headache?

Consider the case in which the 5 minor headaches are spread across 5 people, and suppose each minor headache has an overall shittiness score of 2 and a major headache has an overall shittiness score of 6. If I asked you what '2' refers to, you'd easily answer the shitty feeling characteristic of what it's like to go through a minor-headache. And you would say something analogous for '6' if I asked you what it refers to.

You then add up the five '2's and get 10. Ok, now, what does the '10' refer to? You cannot answer the shitty feeling characteristic of what it's like to go through 5 minor headaches, for this what-it's-like is not present since no individual feels all 5 headaches. The only what-it's-likes that are present are 5 experientially independent what-it's-likes-of-going-through-1-minor-headache. Ok so what does '10' refer to? 5 of these shitty feelings? Ok, and in what sense do 5 of these shitty feelings involve more pain than 1 major headache? Clearly not in an experiential sense, for only the what-it's-like-of-going-through-5-minor-headaches is plausibly experientially worse than a major headache. So in what sense does the referent involve more pain than a major headache?

THIS IS THE CRUX OF OUR DISAGREEMENT. I CANNOT SEE HOW 5 what-it's-like-of-going-through-1-minor-headache involves more pain than 1 major headache. YES, mathematically, you can show me '10 > 6' all day long, but I don't see any reality onto which it maps!

3) "It's just plain old cardinal utility: the sum of the amount of pain experienced by each person."

Yes, but I don't see how that "sum of pain" can involve more pain than 1 major headache because what that "sum of pain" is, ultimately speaking, are 5 what-it's-likes-of-going-through-1-minor-pain, and NOT 1 what-it's-like-of-going-through-5-minor-pains.

4) "Why?"

Because ultimately you'll need an account of 'involves more pain than' on which 5 minor headaches spread across 5 people can involve more pain than 1 major headache. And in that situation, it is clearly the case that the 5 minor headaches are not experientially worse than the 1 major headache (for only the what-it's-like-of-going-through-5-minor-headaches can plausibly be experientially worse than 1 major headache).

My point was just that you'll need an account of 'involves more pain than' that can make sense of how 5 experientially independent what-it's-likes-of-going-through-1-minor-headache can involve more pain than 1 major headache, for my account (i.e. "is experientially worse than") certainly cannot make sense of it.

5) "It is distributed - 20% of it is in each of the 5 people who are in pain."

But when it's distributed, you won't have an overall shittiness that is shittier than the experience of 1 major headache, at least not when we understand "is shittier than" as meaning "is experientially worse than". For 5 experientially independent what-it's-likes-of-going-through-1-minor-headache are not experientially worse than 1 major headache: only the what-it's-like-of-going-through-5-minor-headaches can plausibly be experientially worse than 1 major headache.

Your task, again, is to provide a different account of 'involves more pain than' or 'shittier than' on which, somehow, 5 experientially independent what-it's-likes-of-going-through-1-minor-headache can somehow involve more pain than 1 major headache.

Comment author: JanBrauner 13 March 2018 09:02:02AM 5 points [-]

You think aggregating welfare between individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for 1 person or for each of a million people.

You conclude that these values recommend giving to charities that directly address the sources of the most intense individual suffering, and that between them, one should not choose by cost-effectiveness, but randomly. One should not give to, say, GiveDirectly, which does not directly tackle the most intense suffering.

This conclusion seems correct only for clear-cut textbook examples. In the real world, I think, your values fail to recommend anything. You can never know for certain how many people you are going to help. Everything is probabilities and expected value:

Say, for the sake of the argument, you think that severe depression is the cause of the most intense individual suffering. You could give your $10,000 to a mental health charity, and they will in expectation prevent 100 people (made-up number) from getting severe depression.

However, if you give $10,000 to GiveDirectly, certainly that will affect the recipients strongly, and maybe in expectation prevent 0.1 cases of severe depression.

Actually, if you take your $10,000, and buy that sweet, sweet Rolex with it, there is a tiny chance that this will prevent the jewelry store owner from going bankrupt, being dumped by their partner and, well, developing severe depression. $10,000 to the jeweller prevents an expected 0.0001 cases of severe depression.

So, given your values, you should be indifferent between those.

Even worse, all three actions also harbour tiny chances of causing severe depression. Even the mental health charity, for every 100 patients they prevent from developing depression, will maybe cause depression in 1 patient (because interventions sometimes have adverse effects, ...). So if you decide between burning the money or giving it to the mental health charity, you decide between preventing 100 or 1 episodes of depression. A decision that you are, given your stated values, indifferent between.

Further arguments why approaches that try to avoid interpersonal welfare aggregation fail in the real world can be found here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781092

Comment author: Jeffhe  (EA Profile) 13 March 2018 11:41:04PM *  0 points [-]

Hi Jan,

Thanks a lot for your response.

I wonder if it is too big of a concession to make to say that "This conclusion seems correct only for clear-cut textbook examples." My argument against effective altruism was an attempt to show that it is theoretically/fundamentally flawed, even if (per your objection) I can't criticize the actual pattern of donation it is responsible for (e.g. pushing a lot of funding to GiveDirectly), although I will offer a response to your objection.

I remember listening to a podcast featuring professor MacAskill (one of the presumed founders of EA) where he was recounting a debate he had with someone (can't remember who). That someone raised (if I remember correctly) the following objection: If there was a burning house and you could either save the boy trapped inside or a painting hanging on the wall which you could sell and use that money to save 100 kids in a third world country from a similar pain that the boy would face, you should obviously save the boy. But EA says to save the painting. Therefore EA is false. Professor MacAskill's response (if I remember correctly) was to bite the bullet and say that while it might be hard to stomach, that is really what we should do.

If professor MacAskill's view represents EA's position, then I assume that if you concede that we should flip a coin in such a case, then there is an issue.

Regarding whether my argument recommends anything in the real world, I think it does.

First, just to be clear, since we cannot give each person a chance of being helped that is proportionate to what they have to suffer, I said that I personally would choose to use my money to help anyone among the class of people who stands to suffer the most (see Section F.). To clarify further, I wouldn't try to give each of the people among this class an equal chance because that is equally impossible. I would simply choose to help those whom I come across or know about, I guess. Note that I didn't explain why I would choose to help this class of people, but the reason is simply that were it possible to give each person a chance of being helped proportional to their suffering, those who stand to suffer the most would have the highest chance of winning. (I have since updated the post to include this explanation, thanks.)

I think, now that I have clarified my position, it should be clear that my way of things can recommend actions. There are many opportunities where donating almost certainly prevents or alleviates a certain extreme suffering for someone. Maybe depression is not one of those cases, but I would imagine that severe malnutrition is very painful. So is torture (which oftentimes can be prevented if a ransom is paid). Since the pattern of donation that EA promotes is likely very different from the pattern of donation that arises from my way of things, my way of things provides a real alternative, practically speaking (but maybe up to a limit before the patterns of donation would converge).

Btw, I would not be absolutely against giving to GiveDirectly if there is a statistically good chance that they will prevent or alleviate one of the worst kinds of suffering for at least one person AND there isn't any other cheaper practical way to help that very person (which is likely the case because we don't even know who that person is). However, I would personally donate to charities where there is a near certainty of prevention or alleviation simply because at the end of the day my donation actually helped someone, whereas a statistically good chance may not pan out, in which case I haven't helped the worst off.

Yes, by doing so, I perhaps end up allowing someone to suffer in one of the worst ways who otherwise wouldn't have suffered had I (and everyone else) given to GiveDirectly. But, as I made more clear in Section F., there is no way to give each person an appropriate chance of being helped, not even if we just considered those people who stand to suffer the worst. And so, at the end of the day, I am forced to make a choice to help a particular person anyways.

Comment author: Evan_Gaensbauer 14 March 2018 12:33:18AM 3 points [-]

If you think PETA is the best bet for reducing suffering, you might want to check out other farm animal advocacy organizations at Animal Charity Evaluators' website. The Organization to Prevent Intense Suffering (OPIS) is an EA-aligned organization which has a more explicit focus on advancing projects which directly mitigate abject and concrete suffering. You might also be interested in their work.

Comment author: Jeffhe  (EA Profile) 14 March 2018 12:54:52AM *  1 point [-]

Wow, their name says it all. I didn't know about OPIS - I'll definitely check them out. Will potentially be very useful for my own charitable activities.

Also, thanks for the link to Animal Charity Evaluators - didn't know about them either. Although, given that the numbers don't matter to me in trade-off cases, I don't know if it will make a difference. It would if it showed me that donating to another animal charity would help the EXACT same animals I'd help via donating to PETA AND then some (i.e. even more animals). If donating to another animal charity helped different animals (e.g. a different cow than a cow I would have helped by donating to PETA), then even if I can help more animals by donating to this other charity, I would have no overwhelming reason to, because the cow who I would thereby be neglecting would end up suffering no less than any one of the other animals otherwise would, and as I argued in response to Objection 2, who suffers matters.

Thanks for both suggestions though, Evan!

Note, I have since removed PETA from my post because the point of my post was just to question EA and not to suggest charities to donate to. Thanks for making me realize this.

Comment author: Telofy  (EA Profile) 13 March 2018 07:20:08PM 3 points [-]

I think Brian Tomasik has addressed this briefly and Nick Bostrom at greater length.

What I’ve found most convincing (quoting myself in response to a case that hinged on the similarity of the two or many experiences):

If you don’t care much more about several very similar beings suffering than one of them suffering, then you would also not care more about them when they’re your own person moments, right? You’re extremely similar to your version a month or several months ago, probably more similar than you are to any other person in the whole world. So if you’re suffering for just a moment, it would be no better than suffering for an hour, a day, a month, or any longer multiple of that moment. And if you’ve been happy for just a moment sufficiently recently, then close to nothing more can be done for you for a long time.

I imagine that fundamental things like that are up to the subjectivity of moral feelings – so close to the axioms, it’s hard to argue with even more fundamental axioms. But I for one have trouble empathizing with a nonaggregative axiology at least.

Comment author: Jeffhe  (EA Profile) 13 March 2018 10:43:44PM *  0 points [-]

Hi Telofy,

Thanks for your comment, and quoting oneself is always cool (haha).

In response, if I understand you correctly, you are saying that if I don't prefer saving many similar, though distinct, people each from a certain pain over saving another person from the same pain, then I have no reason to prefer saving myself from many of those pains rather than just one of them.

I certainly wouldn't agree with that. Were I to suffer many pains, I (just me) would suffer all of them in such a way that there is a very clear sense in which they, cumulatively, are worse to endure than just one of them. Thus, I find intra-personal aggregation of pains intelligible. I mean, when an old man reminiscing about his past says to us, "The single worst pain I had was that one time when I got shot in the foot, but if you asked me whether I'd go through that again or all those damned headaches I had over my life, I would certainly ask for the bullet," we get it. Anyways, I think the clear sense I mentioned supports the intra-personal aggregation of pains, and if pains aggregate intra-personally, then more instances of the same pain will be worse than just one instance, and so I have reason to prefer saving myself from more of them.

However, in the case of many vs one other (call him "C"), the pains are spread across distinct people rather than aggregating in one person, so they cannot in the same sense be worse than the pain that C goes through. And so even if I show no preference in this case, I still have reason to show a preference in the former case.

Comment author: Telofy  (EA Profile) 15 March 2018 12:32:26PM 0 points [-]

Okay, curious. What is to you a “clear experiential sense” is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people.

It would be interesting if there’s some systematic correlation between cultural aspects and someone’s moral intuitions on this issue – say, more collectivist culture leading to more strongly discounted aggregation and more individualist culture leading to more linear aggregation… or something of the sort. The other person I know who has this intuition is from an eastern European country, hence that hypothesis.

Comment author: Jeffhe  (EA Profile) 16 March 2018 03:36:34AM *  0 points [-]

Imagine you have 5 headaches, each 1 minute long, that occur just 10 seconds apart from each other. From imagining this, you will have an imagined sense of what it's like to go through those 5 headaches.

And, of course, you can imagine yourself in the shoes of 5 different friends, who we can suppose each has a single 1-minute long headache of the same kind as above. From imagining this, you will again have an imagined sense of what it's like to go through 5 headaches.

If that's what you mean when you say that "the clear experiential sense is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people", then I agree.

But when you imagine yourself in the shoes of those 5 friends, what is going on is that one subject-of-experience (i.e. you) takes on the independent what-it's-likes (i.e. experiences) associated with your 5 friends, and IN DOING SO, LINKS THOSE what-it's-likes - which in reality would be experientially independent of each other - TOGETHER IN YOU. So ultimately, when you imagine yourself in the shoes of your 5 friends, you are, in effect, imagining what it's like to go through 5 headaches. But in reality, there would be no such what-it's-like among your 5 friends. The only what-it's-like that would be present would be the what-it's-like-of-going-through-1-headache, which each of your friends would experience. No one would experience the what-it's-like of going through 5 headaches. But that is what is needed for it to be the case that 5 such headaches can be worse than a headache that is worse than any one of them.

Please refer to my conversation with Michael_S for more info.

Comment author: Telofy  (EA Profile) 16 March 2018 10:43:32PM 1 point [-]

Argh, sorry, I haven’t had time to read through the other conversation yet, but to clarify, my prior was the other one – not that there is something linking the experiences of the five people, but that there is very little – and nothing that seems very morally relevant – that links the experiences of the one person. Generally, people talk about continuity, intentions, and memories linking the person moments of a person such that we think of them as the same one even though all the atoms of their bodies may’ve been exchanged for different ones.

In your first reply to Michael, you indicate that the third one, memories, is important to you, but in themselves I don’t feel that they confer moral importance in this sense. What you mean, though, may be that five repeated headaches are more than five times as bad as one because of some sort of exhaustion or exasperation that sets in. I certainly feel that, in my case especially with itches, and I think I’ve read that some estimates of DALY disability weights also take that into account.

But I model that as some sort of ability of a person to “bear” some suffering, which gets worn down over time by repeated suffering without sufficient recovery in between or by too extreme suffering. That leads to a threshold that makes suffering below and above seem morally very different to me. (But I recognize several such thresholds in my moral intuitions, so I seem to be some sort of multilevel prioritarian.)

So when I imagine what it is like to suffer headaches as bad as five people suffering one headache each, I imagine them far apart with plenty of time to recover, no regularity to them, etc. I’ve had more than five headaches in my life but no connection and nothing pathological, so I don’t even need to rely on my imagination. (Having five attacks of a frequently recurring migraine must be noticeably worse.)

Comment author: Jeffhe  (EA Profile) 17 March 2018 02:12:42AM *  0 points [-]

Hi Telofy,

Thanks for this lucid reply. It has made me realize that it was a mistake to use the phrase "clear experiential sense" because that misleads people into thinking that I am referring to some singular experience (e.g. some feeling of exhaustion that sets in after the final headache). In light of this issue, I have written a "new" first reply to Michael_S to try to make my position clearer. I think you will find it helpful. Moreover, if you find any part of it unclear, please do let me know.

What I'm about to say overlaps with some of the content in my "new" reply to Michael_S:

You write that you don't see anything morally relevant linking the person moments of a single person. Are you concluding from this that there is not actually a single subject-of-experience who feels, say, 5 pains over time (even though we talk as if there is)? Or, are you concluding from this that even if there is actually just a single subject-of-experience who feels all 5 pains over time, it is morally no different from 5 subjects-of-experience who each feels 1 pain of the same sort?

What matters to me at the end of the day is whether there is a single subject-of-experience who extends through time and thus is the particular subject who feels all 5 pains. If there is, then this subject experiences the what-it's-like of going through 5 pains (since, in fact, this subject has gone through 5 pains, whether he remembers going through them or not). Importantly, the what-it's-like-of-going-through-5-pains is just the collection of the past 5 singular pain episodes, not some singular/continuous experience like a feeling of exhaustion or some super intense pain from the synthesis of the intensity of the 5 past pains. It is this what-it's-like that can plausibly be worse than the what-it's-like of going through a major pain. Since there could only be this what-it's-like when there is a single subject who experiences all 5 pains, 5 pains spread across 5 people cannot be worse than a major pain (since, at best, there would only be 5 experientially independent what-it's-likes-of-going-through-1-minor-headache).

My latest reply to Michael_S focuses on the question whether there could be a single subject-of-experience who extends through time, and thus capable of feeling multiple pains.

Comment author: Telofy  (EA Profile) 25 March 2018 03:05:07PM *  1 point [-]

Hi Jeff!

To just briefly answer your question, “Are you concluding from this that there is not actually a single subject-of-experience”: I don’t have an intuition for what a subject-of-experience is – if it is something defined along the lines of the three characteristics of continuous person moments from my previous message, then I feel that it is meaningful but not morally relevant, but if it is defined along the lines of some sort of person essentialism then I don’t believe it exists on Occam’s razor grounds. (For the same reason, I also think that reincarnation is metaphysically meaningless because I think there is no essence to a person or a person moment besides their physical body* until shown otherwise.)

* This is imprecise but I hope it’s clear what I mean. People are also defined by their environment, culture, and whatnot.

Comment author: Jeffhe  (EA Profile) 27 March 2018 09:47:00PM *  0 points [-]

Hi Telofy, nice to hear from you again :)

You say that you have no intuition for what a subject-of-experience is. So let me say two things that might make it more obvious:

1. Here is how I defined a subject-of-experience in my exchange with Michael_S:

"A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience: it presumably has visual experiences and pain experience and all sorts of other experiences. Or more technically, a subject-of-experience (or multiple) may be realized by a cow's physical system (i.e. brain). There would be a single subject-of-experience if all the experiences realized by the cow's physical system are felt by a single subject. Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject."

I later enriched the definition a bit as follows: "A subject-of-experience is a thing that has, OR IS CAPABLE OF HAVING, experience(s). I add the phrase 'or is capable of having' this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I'd like to think that I am still around - i.e. that the particular subject-of-experience that I am is still around. However, it's also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That's spooky, but perhaps true."

2. Having offered a definition to Michael, I then told him WHAT MAKES a particular subject-of-experience the numerically same subject-of-experience over time:

"Within any given physical system that can realize subjects of experience (e.g. a cow's brain), a subject-of-experience at time t-1 (call this subject "S1") is numerically identical to a subject-of-experience at some later time t-2 (call this subject "S2") if and only if an experience at t-1 (call this experience "E1") and an experience at t-2 (call this experience "E2") are both felt by S1. That is, S1 = S2 iff S1 feels both E1 and E2."

Let me just add: A particular subject-of-experience can obviously be qualitatively different over time, which would happen when his personality changes or memory changes (or is erased) etc... But that doesn't imply there is any numerical difference. I assume the distinction between numerical identity and qualitative identity is a familiar one to you. In any case, here is an example to illustrate the distinction: Two perfectly matching coins are qualitatively the same, yet they are numerically distinct insofar as they are not one and the same coin.

I hope what I have said here helps!

Comment author: gworley3  (EA Profile) 13 March 2018 07:19:39PM 3 points [-]

I think you are conflating EA with utilitarianism/consequentialism. To be fair this is totally understandable since many EAs are consequentialists and consequentialist EAs may not be careful to make or even see such a distinction, but as someone who is closest to being a virtue ethicist (although my actual metaethics are way more complicated) I see EA as being mainly about intentionally focusing on effectiveness rather than just doing what feels good in our altruistic endeavors.

Comment author: Jeffhe  (EA Profile) 19 March 2018 06:15:45PM 0 points [-]

Hey gworley3,

Here's the comment I made about the difference between effective-altruism and utilitarianism (if you're interested): http://effective-altruism.com/ea/1ll/cognitive_and_emotional_barriers_to_eas_growth/dij

Comment author: Jeffhe  (EA Profile) 14 March 2018 12:00:32AM 0 points [-]

Hi gworley3,

Thanks for your comment.

I don't think I'm conflating EA with utilitarianism. In fact, I made a comment a few days ago specifically pointing out how they might differ under the post "Cognitive and emotional barriers to EA's growth". If you still think I'm conflating things, please point out what in specific so I can address it. Thanks.

Comment author: kbog  (EA Profile) 30 March 2018 06:11:10AM *  0 points [-]

That EA and utilitarianism are different is precisely the point being made here: you have given an argument against utilitarianism, but EA is not utilitarianism, so the argument wouldn't demonstrate that EA is flawed.

Comment author: Jeffhe  (EA Profile) 31 March 2018 02:09:01AM 0 points [-]

Only my response to Objection 1 is more or less directed to the utilitarian. My response to Objection 2 is meant to defend against other justifications for saving the greater number, such as leximin or cancelling strategies. In any case, I think most EAs (even the non-utilitarians) will appeal to utilitarian reasoning to justify saving the greater number, so addressing utilitarian reasoning is important.

Comment author: kbog  (EA Profile) 31 March 2018 12:31:35PM 0 points [-]

It's not about responses to objections, it's about the thesis itself.

Comment author: RandomEA 13 March 2018 03:52:45AM *  2 points [-]

I used to think that a large benefit to a single person was always more important than a smaller benefit to multiple people (no matter how many people experienced the smaller benefit). That's why I wrote this post asking others for counterarguments. After reading the comments on that post (one of which linked to this article), I became persuaded that I was wrong.

Here's an additional counterargument. Let's say that I have two choices:

A. I can save 1 person from a disease that decreases her quality of life by 95%; or

B. I can save 5 people from a disease that decreases their quality of life by 90%.

My intuition is that it is better to save the 5. Now let's say I get presented with a second dilemma:

B. I can save 5 people from a disease that decreases their quality of life by 90%; or

C. I can save 25 people from a disease that decreases their quality of life by 85%.

My intuition is that it is better to save the 25. Now let's say I get presented with a third dilemma.

C. I can save 25 people from a disease that decreases their quality of life by 85%; or

D. I can save 125 people from a disease that decreases their quality of life by 80%.

My intuition is that it is better to save the 125. This cycle continues until the seventeenth dilemma:

Q. I can save 152,587,890,625 people from a disease that decreases their quality of life by 15%; or

R. I can save 762,939,453,125 people from a disease that decreases their quality of life by 10%.

My intuition is that it is better to save the 762,939,453,125.

Since I prefer R over Q and Q over P and P over O and so on and so forth all the way through preferring C over B and B over A, it follows that I should prefer R over A.

In other words, our intuition that providing a large benefit to one person is less important than providing a slightly smaller benefit to several people conflicts with our intuition that providing a very large benefit to one person is more important than providing a very small benefit to an extremely large number of people. Given scope insensitivity, I think the former intuition is probably more reliable.
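The arithmetic behind this chain of dilemmas can be checked with a quick sketch (an assumption of mine, matching the A, Q, and R figures in the comment: option k has 5^k people, each losing (95 − 5k)% quality of life):

```python
# Sketch of the chain above. Assumption (mine, matching the quoted
# figures): option k has 5**k people, each losing (95 - 5k)% quality
# of life, for k = 0 (option A) through k = 17 (option R).
options = [(5**k, 95 - 5*k) for k in range(18)]

# Spot-check against the figures quoted for Q and R:
assert options[16] == (152_587_890_625, 15)
assert options[17] == (762_939_453_125, 10)

# Total harm (people x fractional loss) grows at every step, so each
# pairwise intuition favours the larger group; chaining those pairwise
# preferences by transitivity yields a preference for R over A.
harms = [n * pct / 100 for n, pct in options]
assert all(later > earlier for earlier, later in zip(harms, harms[1:]))
```

On these assumptions the aggregate harm in R is roughly 76 billion "person-equivalents" versus 0.95 in A, which is what makes the conclusion of the chain so counterintuitive.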

One last point. I think that EA has a role even under your worldview. It can help identify the worst possible forms of suffering (such as being boiled alive at a slaughterhouse) and the most effective ways to prevent that suffering.

Comment author: Jeffhe  (EA Profile) 14 March 2018 12:25:00AM *  0 points [-]

Hi RandomEA,

First of all, awesome name! And secondly, thanks for your response.

My view is that we should give each person a chance of being helped that is proportionate to what they each have to suffer. It is irrelevant to me how many people there are who stand to suffer the lesser pain. So, for example, in the first choice situation you described, my intuition is to give the single person a slightly-over-50% chance of being saved and the others a slightly-under-50% chance of being saved. This is because the single person would suffer slightly worse than any one of the others, so the single person gets a slightly higher chance. It is irrelevant to me how many people have 90% to lose in quality of life, whether it be 5 or 5 billion.

So if 760 billion people have 10% to lose where the single person has 90% to lose, my intuition is to give the single person roughly a 90% chance of being saved and the other 760 billion a 10% chance of being saved.

In my essay, I in effect argued that everyone would have this intuition if they properly appreciated the following two facts: 1. Were the 760 billion people to suffer, none of them would suffer anywhere near the amount the single person would. Conversely, were the single person to suffer, he/she would suffer so much more than any one of the 760 billion. 2. Which individual suffers matters, because it is the particular individual who suffers that bears all the suffering.

I assume that we should accept the intuitions that we have when we keep all the relevant facts at the forefront of our mind (i.e. when we properly appreciate them). I believe the intuitions I mentioned above (i.e. my intuitions) are the ones people would have when they do this.

Regarding your second point, I have to think a little more about it!

Comment author: RandomEA 14 March 2018 03:08:55AM *  0 points [-]

Let's say that you have $100,000,000,000,000.

For every $1,000,000,000,000 you spend on buying medicine A, the person in scenario A (from my previous comment) will have an additional 1% chance of being cured of disease A.

For every $200,000,000,000 you spend on buying medicine B, a person in scenario B (from my previous comment) will have an additional 1% chance of being cured of disease B.

For every $40,000,000,000 you spend on buying medicine C, a person in scenario C (from my previous comment) will have an additional 1% chance of being cured of disease C.

...

For every $1.31 you spend on buying medicine R, a person in scenario R (from my previous comment) will have an additional 1% chance of being cured of disease R.

Now consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 5 people with disease B. Based on your response to my comment, it sounds like you would spend $51,355,000,000,000 on the person with disease A (giving her a 51.36% chance of survival) and $9,729,000,000,000 on each person with disease B (giving each of them a 48.64% chance of survival). Is that correct?

Next consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 762,939,453,125 people with disease R. Based on your response to my comment, it sounds like you would spend $90,476,000,000,000 on the person with disease A (giving her a 90.48% chance of surviving) and $12.48 on each person with disease R (giving each of them a 9.53% chance of surviving). Is that correct?
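The dollar splits in these two questions can be backed out from the proportional-chance rule Jeffhe described (each party's chance proportional to the severity of what they stand to suffer) together with the per-1% prices above; a rough sketch, with the helper name my own:

```python
# Rough sketch of where the dollar figures come from. The rule (from
# Jeffhe's earlier comment): each party's chance is proportional to the
# severity of what they stand to suffer. Severities, prices, and the
# $100tn budget are taken from the comments; the function name is mine.
def proportional_chances(sev_single, sev_other):
    total = sev_single + sev_other
    return sev_single / total, sev_other / total

budget = 100_000_000_000_000

# Person A (95% loss) vs the 5 B-people (90% loss each):
cA, cB = proportional_chances(95, 90)
spend_A = cA * 100 * 1_000_000_000_000   # medicine A: $1tn per 1% chance
spend_each_B = (budget - spend_A) / 5
assert round(cA * 100, 1) == 51.4        # ~ the $51.355tn figure
assert round(spend_each_B / 200_000_000_000, 1) == 48.6  # % chance per B

# Person A (95% loss) vs the 762,939,453,125 R-people (10% loss each):
cA2, cR = proportional_chances(95, 10)
assert round(cA2 * 100, 2) == 90.48      # ~ the $90.476tn figure
assert round(cR * 100, 2) == 9.52
```

The exact rule gives 51.35% and 90.48% rather than the rounded dollar amounts in the question, but the two answers agree to within rounding of the quoted prices.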

Comment author: Jeffhe  (EA Profile) 14 March 2018 09:17:29PM 0 points [-]

The situations I focus on in my essay are trade-off choice situations, meaning that I can only choose one party to help, and not all parties to various degrees. Thus, if you have an objection to my argument, it is important that we focus on such kinds of situations. Thanks!

Comment author: RandomEA 14 March 2018 11:20:44PM 1 point [-]

Yes but the situations that EAs face are much more analogous to my second set of hypotheticals. So if you want your argument to serve as an objection to EA, I think you have to explain how it applies to those sorts of cases.

Comment author: Jeffhe  (EA Profile) 14 March 2018 11:49:39PM *  0 points [-]

Not true. Trade off situations are literally everywhere. Whenever you donate to some charity, it is at the expense of another charity working in a different area and thus at the expense of the people who the other charity would have helped. Even with malaria, if you donate to a certain charity, you are helping the people who that charity helps at the expense of other people that another charity against malaria helps. That's the reality.

And if you're thinking "Well, can't I donate some to each malaria fighting charity?", the answer is yes, but whatever money you donate to the other malaria fighting charity, it comes at the expense of helping the people who the original malaria fighting charity would have been able to help had they got all your donation and not just part of it. The trade off choice situation would be between either helping some of the people residing in the area of the other malaria fighting charity or helping some additional people residing in the area of the original malaria fighting charity. You cannot help all.

In principle, as long as one doesn't have enough money to help everyone, one will always find oneself in a trade off choice situation when deciding where to donate.

Comment author: RandomEA 15 March 2018 01:51:20AM *  0 points [-]

I think the second set of hypotheticals does involve trade-offs. When I say that a person has an additional 1% chance of being cured, I mean that they have an additional 1% chance of receiving a medicine that will definitely cure them. If you spend more money on medicines to distribute among people with disease Q (thus increasing the chance that any given person with disease Q will be cured), you will have less money to spend on medicines to distribute among people with disease R (thus decreasing the chance that any given person with disease R will be cured).

The reason I think that the second set of hypotheticals is more analogous to the situations EAs face is that there are typically already many funders in the space, meaning that potential beneficiaries often have some chance of being helped even absent your donation. It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped.

Comment author: Jeffhe  (EA Profile) 15 March 2018 02:27:31AM *  0 points [-]

My apologies. After re-reading your second set of hypotheticals, I think I can answer your questions.

In the original choice situation contained in my essay, the device I used to capture the amount of chance each group would be given of being helped was independent of the donation amount. For example, in the choice situation between Bob, Amy, and Susie, the donation was $10 and the device used to give each a 50% chance of being saved from a painful disease was a coin.

However, it seems like in your hypotheticals, the donation is used as the device too. That confused me at first. But yeah, at the end of the day, I would give person A roughly a 90% chance of being saved from his/her suffering and roughly a 10% chance to each of the billions of others, regardless of what the dollar breakdown would look like. So, if I understand your hypotheticals correctly, then my answer would be yes to both your original questions.

I don't however see the point of using the donation to also act as the device. It seems to unnecessarily overcomplicate the choice situations.

If your goal is to try to create a choice situation in which I have to give a vast amount of money to give person A around a 90% chance of surviving, and the objection you're thinking of raising is that it is absurd to give that much to give a single person around a 90% chance of being helped, then my response is:

1) Who suffers matters

2) What person A stands to suffer is far worse than what any one of the people from the competing group stands to suffer.

I think if we really appreciate those two facts, our intuition is to give person A 90% and each of the others a 10%, regardless of the $ breakdown that involves. Thanks.

Just noticed you expanded your comment. You write, "It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped." This is not true. There will always be a person in line who isn't helped, but who would have been helped had you funded the charity working in his area. He may not be the first in line, but he is somewhere in the line waiting to be helped by that charity.

Comment author: RandomEA 15 March 2018 03:43:18PM 1 point [-]

Just noticed you expanded your comment. You write, "It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped." This is not true. There will always be a person in line who isn't helped, but who would have been helped had you funded the charity working in his area. He may not be the first in line, but he is somewhere in the line waiting to be helped by that charity.

I was simply noting the difference between our two examples. In your example, Bob has no chance of receiving help if you choose the other person. In the real world, me choosing one charity over another will not cause a specific person to have no ex-ante chance of being helped. Instead, it means that each person in the potential beneficiary population has a lower chance of being helped. I wanted my situation to be more analogous to the real world because I want to see how your principle works in practice. It's the same reason I introduced different prices into the example.

Also, my comment was expanded very shortly after it was originally posted. It's possible that you saw the original one and while you were writing your response to it I posted my edit.

Comment author: Jeffhe  (EA Profile) 17 March 2018 05:24:13PM *  0 points [-]

Hey RandomEA,

Sorry for the late reply. Well, say I'm choosing between the World Food Programme (WFP) and some other charity, and I have $30 to donate. According to WFP, $30 can feed a person for a month (if I remember correctly). If I donate to the other charity, then WFP in its next operation will have $30 less to spend on food, meaning someone who otherwise would have been helped won't be receiving help. Who that person is, we don't know. All we know is that he is the person who was next in line, the first to be turned away.

Now, you disagree with this. Specifically you disagree that it could be said of any SPECIFIC person that, if I don't donate to WFP, that it would be true of THAT person that he won't end up receiving help that he otherwise would have. And this is because:

1) HE - that specific person - still had a chance of being helped by WFP even if I didn't donate the $30. For example, he might have gotten in line sooner than I'm supposing he has. And you will say that this holds true for ANY specific person. Therefore, the phrase "he won't end up receiving help" is not guaranteed.

2) Moreover, even if I do donate the $30 to WFP, there isn't any guarantee that he would be helped. For example, HE might have gotten in line way too late for an additional $30 to make a difference for him. And you will say that this holds true for ANY specific person. Therefore, the phrase "that he otherwise would have" is also not guaranteed.

In the end, you will say, all that can be true of any SPECIFIC person is that my donation of $30 would raise THAT person's chance of being helped.

Therefore, in the real world, you will say, there's rarely a trade-off choice situation between specific people.

I am tempted to agree with that, but two points:

1) There still seems to be a trade off choice situation between specific groups of people: i.e. the group helped by WFP and the group helped by the other charity.
2) I think, at least in refugee camps, there is already a list of all the refugees and a document specifying who in specific is next in line to receive a given service/aid. In these cases, we will be faced with a trade off choice situation between a specific individual (who we would be helping if we donated to the refugee camp) and whatever group of people that would be helped by donating to another charity. I wonder what percentage of real life situations are like this. Moreover, if you're looking for real life trade off situations between some specific person(s) and some other specific person or specific group, they are clearly not hard to find. For example, you can either help a specific homeless man vs whoever. Or you can help a specific person avoid torture by helping pay off a ransom vs whoever else by helping a charity. Or you can fund a specific person's cancer treatment vs whoever. Etc...

My overall point is that trade off situations of the kind I describe in my paper are very real and everywhere EVEN IF it is true that there are trade off situations of the nature you describe.

Thanks.

Comment author: Alex_Barry 29 March 2018 11:13:16PM *  1 point [-]

(Posted as top-level comment as I has some general things to say, was originally a response here)

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be 'Effective Altruism is making an intellectual mistake', whereas you actually just seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage.

The comments etc. then seem to have mostly been people explaining why they don't find compelling your moral intuition that 'non-purely experientially determined' and 'purely experientially determined' amounts of pain cannot be compared. Since we seem to have reached a point where there is a fundamental disagreement about considered moral values, it does not seem that attempting to change each other's minds is very fruitful.

I think I would have found this post more conceptually clear if it had been structured:

  1. EA conclusions actually require an additional moral assumption/axiom - and so if you don't agree with this assumption then you should not obviously follow EA advice.

  2. (Optionally) Why you find the moral assumption unconvincing/unlikely

  3. (Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.

Where throughout, the assumption is the commensurability of 'non-purely experientially determined' and 'purely experientially determined' experience.

In general I am not very sure what you had in mind as the ideal outcome of this post. I'm surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of established consequential thinking etc.). But equally I am not sure what value we can especially bring to you if you feel very sure in your conviction that the assumption does not hold.

Comment author: kbog  (EA Profile) 30 March 2018 05:45:38AM *  0 points [-]

Little disagreement in philosophy comes down to a matter of bare differences in moral intuition. Sometimes people are just confused.

Comment author: Jeffhe  (EA Profile) 31 March 2018 01:55:42AM 1 point [-]

Hey Alex, thanks for your comment!

I didn't know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn't structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I'm wrong or he realizes he's wrong. If I'm wrong, I hope to realize it asap.

Also, I agree with kbog. I think it's much likelier that one of us is just confused. Either kbog is right that there is an intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person or he is not.

After figuring that out, there is the question of which sense of "involves more pain than" is more morally important: is it the "is experientially worse than" sense or kbog's sense? Perhaps that comes down to intuitions.

Comment author: Alex_Barry 31 March 2018 04:08:00PM 0 points [-]

Thanks for your reply - I'm extremely confused if you think there is no 'intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person', since (as has been discussed in these comments) if you view/define total pain as being measured by the intensity-weighted number of experiences, this gives a clear metric that matches consequentialist usage.

I had assumed you were arguing at the 'which is morally important' level, which I think might well come down to intuitions.

I hope you manage to work it out with kbog!

Comment author: Jeffhe  (EA Profile) 10 April 2018 09:14:32PM 1 point [-]

Hey Alex,

Thanks for your reply. I can understand why you'd be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of "more pain".

I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of "more pain", and then presenting an argument for why my sense of "more pain" is the one that really matters.

I'd be interested to know what you think.

Comment author: Alex_Barry 12 April 2018 01:13:34PM *  1 point [-]

Thanks for getting back to me. I've read your reply to kbog, but I don't find your argument especially different from those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again I see why there is a distinction one could care about, but I don't find it personally compelling.

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

A couple of brief points in favour of the classical approach:

  • It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)
  • It also has other pleasing properties, such as compatibility with the veil of ignorance, as discussed in other comments.

One additional thing to note is that dropping the comparability of 'non-purely experientially determined' and 'purely experientially determined' experiences (henceforth 'Comparability') does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other.

For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else. To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

Comment author: Jeffhe  (EA Profile) 12 April 2018 10:37:12PM *  0 points [-]

Hey Alex,

Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad?"

I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.

Now, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

However, based on my preferred sense of "more pain", two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.

Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say?)

I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy's suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that "One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter." Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn't begrudge another person an improvement that costs us nothing. Amy shouldn't begrudge Susie an improvement that costs her nothing.
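
For concreteness, Otsuka's two-condition definition can be sketched as a simple predicate over two distributions of well-being. This is just an illustrative sketch: the function name and the 0/1 well-being levels are my own choices, not anything from the thread.

```python
def pareto_superior(dist_a, dist_b):
    """True if dist_a is strictly Pareto superior to dist_b:
    (i) at least one person is better off under dist_a, and
    (ii) nobody is worse off under dist_a."""
    assert dist_a.keys() == dist_b.keys()
    someone_better = any(dist_a[p] > dist_b[p] for p in dist_a)
    nobody_worse = all(dist_a[p] >= dist_b[p] for p in dist_a)
    return someone_better and nobody_worse

# Illustrative well-being levels (0 = suffering, 1 = not suffering):
only_amy_suffers = {"Amy": 0, "Susie": 1, "Bob": 1}
amy_and_susie_suffer = {"Amy": 0, "Susie": 0, "Bob": 1}
only_bob_suffers = {"Amy": 1, "Susie": 1, "Bob": 0}

print(pareto_superior(only_amy_suffers, amy_and_susie_suffer))  # True: Susie gains, nobody loses
print(pareto_superior(only_bob_suffers, amy_and_susie_suffer))  # False: condition (ii) fails, Bob is worse off
```

This matches the point in the paragraph above: the comparison with Bob fails precisely because condition (ii) is violated.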

Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of "more pain". And since I think my preferred sense of "more pain" is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to paying any attention to people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

It also has other pleasing properties, such as compatibility with the veil of ignorance, as discussed in other comments.

The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone's position). However, as I argued, this stipulation is not true OF the real world because each of us didn't actually have an equal chance of being in any of our position, and what we should do should be based on the facts, and not on a stipulation. In kbog's latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because "The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system." I have yet to respond to this latest reply because I have been too busy arguing about our senses of "more pain", but if I were to respond, I would say this: "I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world." Anyways, I don't want to say too much here, because kbog might not see it and it wouldn't be fair if you only heard my side. I'll respond to kbog's reply eventually (haha) and you can follow the discussion there if you wish.

Let me just add one thing: Based on Singer's intro to Utilitarianism, Harsanyi argued that the veil of ignorance also entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls' claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn't support classical utilitarianism which just says we ought to maximize utility and not average utility.

One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else.

Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.

To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers matter in themselves.

Comment author: Alex_Barry 13 April 2018 10:09:42AM *  1 point [-]

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to paying any attention to people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

The argument is that if:

  • The amount of 'total pain' is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing)
  • There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured if you liked for a less extreme but clearly applicable case)
  • In this case, then the amount of 'total pain' is always at least that very large number, such that none of your actions can change it at all.
  • Thus (and you would disagree with this implication due to your adoption of the Pareto principle) since the level of 'total pain' is the morally important thing, all of your possible actions are morally equivalent.

As I mention I think you escape this basic formulation of the problem by your adoption of the Pareto principle, but a more complicated version causes the same issue:

This is essentially just applying the non-identity problem to the example above. (Weirdly enough, I think the best explanation I've seen of the non-identity problem is the second half of the 'The future' section of Derek Parfit's Wikipedia page.)

The argument goes something like:

  • D1 If we adopt that 'total pain' is the maximal pain experienced by any person for whom we can affect how much pain they experience (an attempt to incorporate the Pareto principle into the definition for simplicity's sake).
  • A1 At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so most of the future has no one in it, that wiping out is likely to involve extreme pain for some).
  • A2 Due to the chaotic nature of the world, and the strong dependence of personal identity on birth timings (if the circumstances of one's conception change even very slightly then your identity will almost certainly be completely different), any actions in the world now will within a few generations result in a completely different set of people existing.
  • C1 Thus by A1 the future is going to contain someone experiencing extreme pain, but by A2 exactly who this person is will vary with any different course of action; thus by D1 the 'total pain' in all cases is uniformly very high.

This is similar to the point made by JanBrauner; however, I did not find that your response to their comment particularly engaged with its core point of the extreme unpredictability of the maximum pain caused by an act.

After your most recent comment I am generally unsure exactly what you are arguing for in terms of moral theories. When arguing about which form of pain is morally important, you seem to make a strong case that one should measure the 'total pain' in a situation solely by whatever pain involved is most extreme. However, when discussing moral recommendations you don't completely focus on this. Thus I'm not sure if this comment and its examples will miss the mark completely.

(There are also more subtle defenses, such as those relating to how much one cares about future people etc., which have thus far been left out of the discussion.)

Comment author: Jeffhe  (EA Profile) 13 April 2018 11:43:01PM *  0 points [-]

Thanks for the exposition. I see the argument now.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.

I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.

Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.

My response:

JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then each possible action more-or-less inevitably results in a different person suffering maximal pain.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don't find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).

Comment author: Alex_Barry 17 April 2018 01:38:57PM *  0 points [-]

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high with the only difference between the states of affairs being the identity of those who suffers it.

Given that you were initially arguing (with kbog etc.) for this definition of total pain, independent of any other identity considerations, this seems very relevant to that discussion.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

But this seems extremely far removed from any day-to-day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive.

The issue is that this also applies to the case of deciding whether to set the island on fire at all.

Comment author: Alex_Barry 13 April 2018 09:03:31AM 0 points [-]

are you using "bad" to mean "morally bad?"

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead, one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

I bring this up since you are approaching this from a different angle than the usual, which makes peoples standard lines of reasoning seem more complex.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to paying any attention to people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

I'll discuss this in a separate comment since I think it is one of the strongest argument against your position.

I don't know much about the veil of ignorance, so I am happy to give you that it does not support total utilitarianism.

I believe it is not always right to prevent the morally worse case.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Comment author: Jeffhe  (EA Profile) 13 April 2018 07:58:31PM *  0 points [-]

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead, one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:

  1. Assign a moral value to each person's experiences based on its overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such headaches is about experientially as bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.

  2. We then add up the moral value assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.

This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's lives or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy's and Susie's lives because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's deaths being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance.

In any case, if we assign a moral value to each person's experience in the same way that we might assign a moral value to each person's life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1, I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I've attributed to kbog. (I added the "[...]" to make the sentence structure more clear.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don't think what we ought to do is to OUTRIGHT prevent the morally worse case; rather, we ought to give a higher chance to preventing the morally worse case proportional to how much morally worse it is than the other case. I will say more about this below.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Please don't be alarmed (haha). I assume you're aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognize other side constraints such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint applies to some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.

You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here's an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of "more pain" (i.e. my way of quantifying and comparing pains), the single person suffering the major headache is morally worse than the 100 people each suffering a minor headache because his major headache is experientially worse than any of the other people's minor headache. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is experientially speaking than any one of the others (i.e. how much morally worse his suffering is relative to the 100's suffering).

Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person's suffering.
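
The proportional-chance rule described in these two paragraphs can be sketched as a weighted lottery. This is a hypothetical sketch: the function names and the 1-unit/10-unit suffering figures are my own illustrative choices, not part of the original discussion.

```python
import random

def help_chances(sufferings):
    """Chance of being helped for each person, proportional to
    how much suffering we could spare them."""
    total = sum(sufferings.values())
    return {person: s / total for person, s in sufferings.items()}

def pick_who_to_help(sufferings, rng=random):
    """Run the weighted lottery once."""
    people = list(sufferings)
    weights = [sufferings[p] for p in people]
    return rng.choices(people, weights=weights, k=1)[0]

# 100 people each facing a minor headache (1 unit, illustrative) and
# one person facing a major headache (10 units, illustrative):
sufferings = {f"minor_{i}": 1 for i in range(100)}
sufferings["major"] = 10

chances = help_chances(sufferings)
print(chances["major"])    # 10/110: a higher, but not certain, chance
print(chances["minor_0"])  # 1/110
```

The point of the sketch is that the person suffering most gets a proportionally higher chance rather than being helped outright, which is the side constraint in action.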

I hope that helps.

Comment author: Alex_Barry 13 April 2018 09:46:19PM *  1 point [-]

On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most).

(Again, my preferred usage of 'morally worse/better' is basically defined so as to mean one 'should' always pick the 'morally best' action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point... )

How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. would you prefer a 95% chance of helping people in proportion to their suffering (but a 5% chance of helping no one), or a 100% chance of only helping the person suffering the most?

In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possible that you are not willing to trade off any amount of 'actually helping people' in favour of it; but then it seems strange that you argue for it so forcefully.

As a separate point, this form of reasoning seems rather incompatible with your claims about 'total pain' being morally important, and also being determined solely by whoever is experiencing the most pain. If you follow your approach and give some chance of helping people not experiencing the most pain, then in the cases where you do help them, the 'total pain' does not change at all!

For example:

  • Suppose Alice is experiencing 10 units of suffering (by some common metric)
  • 10n people (call them group B) are experiencing 1 unit of suffering each
  • We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case that we help someone from group B, the level of 'total pain' remains at 10, as Alice is not helped.

This means that an n/(n+1) proportion of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
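The arithmetic behind this objection can be checked with a quick sketch. This is just an illustration of Alex_Barry's example, not anything from the original comment; the function name and setup are my own. It computes each person's chance under a "help in proportion to suffering" lottery and the probability that the maximum pain (Jeffhe's 'total pain') is left unchanged:

```python
from fractions import Fraction

def lottery_chances(alice_pain, group_b_size, group_b_pain=1):
    """Chance of being helped, proportional to each person's suffering.

    Returns (Alice's chance, each group-B member's chance) as exact fractions.
    """
    total = alice_pain + group_b_size * group_b_pain
    return Fraction(alice_pain, total), Fraction(group_b_pain, total)

# Alice suffers 10 units; group B has 10n people suffering 1 unit each.
# Alice's chance is 10/(10+10n) = 1/(n+1); the chance that someone in
# group B is helped instead (so the maximum pain stays at 10) is n/(n+1).
for n in (1, 10, 100):
    alice, each_b = lottery_chances(10, 10 * n)
    unchanged = 1 - alice  # probability 'total pain' (the max) is unaffected
    print(f"n={n}: P(Alice)={alice}, P(total pain unchanged)={unchanged}")
```

As n grows, the probability that the lottery changes the quantity Jeffhe calls morally important shrinks toward zero, which is exactly the point being made above.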

Finally I find the claim that this is actually the fairer or more empathetic approach unconvincing. I would argue that whatever fairness you gain by letting there be some chance you help the person experiencing the second-most suffering is outweighed by your unfairness to the person suffering the most.

Indeed, for another example:

  • Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
  • However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.

You only have £2, so you can only help one of the children. Under your system there would be some (admittedly (hopefully!) very small) chance that you would help child B. However in the case that you rolled your 3^^^3 sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".

(This was perhaps a needlessly emotive example, but I wanted to hammer home how completely terrible it could be to help the person not suffering the most. If you have a choice between not rolling a die, and rolling a die with a chance of terrible consequences, why take the chance?)

Comment author: Alex_Barry 13 April 2018 08:43:32PM *  0 points [-]

So you're suggesting that most people aggregate different people's experiences as follows:

Well most EAs, probably not most people :P

But yes, I think most EAs apply this 'merchandise' approach weighted by conscious experience.

Regarding your discussion of moral theories and side constraints: I know there are a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right, then you should make this clear from the start (and ideally bake it into what you mean by 'morally worse').

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X.

Hence if you say 'what is morally relevant is the maximal pain being experienced by someone' then I expect all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone.

Obviously language is flexible but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles).

I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I'll make a separate comment to discuss it.

Comment author: bejaq 30 March 2018 04:20:47PM 1 point [-]

I agree that aggregating suffering of different people is problematic. By necessity, it happens on a rather abstract level, divorced from the experiential. I would say that can lead to a certain impersonal approach which ignores the immediate reality of the human condition. Certainly we should be aware of how we truly experience the world.

However I think here we transcend ethics. We can't hope to resolve deep issues of suffering within ethics, because we are somewhat egocentric beings by nature. We see only through our own eyes and feel only our own body. I don't see that ethics can really address that level meaningfully; it requires us to abstract from that existential reality.

For me the alternative is a more pragmatic ethical framework. It acknowledges we are not just ethical beings, but that ethics is important on an interpersonal level. From that point of view helping more people can be the right thing because we are aware we generally cannot truly resolve others suffering on an individual basis. So we are in effect helping the greater system of society or humanity. In that case there's no problem helping a group instead of an individual. We are not trying to help "at the root" - which we may only be able to do for ourselves or perhaps people close to us - but contribute to society in a meaningful way. And on that level there's a practical difference between helping one person or many.

In practice, for me that means I do take effective altruism into account, but also acknowledge its limitations. I'd say everyone does that implicitly or explicitly.

Comment author: Jeffhe  (EA Profile) 31 March 2018 01:38:58AM *  1 point [-]

Hi bejaq,

Thanks for your thoughtful comment. I think your first paragraph captures well why I think who suffers matters. The connection between suffering and who suffers it is too strong for the former to matter and the latter not to. Necessarily, pain is pain for someone, and ONLY for that someone. So it seems odd for pain to matter, yet for it not to matter who suffers it.

I would also certainly agree that there are pragmatic considerations that push us towards helping the larger group outright, rather than giving the smaller group a chance.

Comment author: jonathancourtney 16 March 2018 06:53:39PM 1 point [-]

Hey Jeffhe- the position you put forward looks structurally really similar to elements of Scanlon's, and you discuss a dilemma that is often discussed in the context of his work (the lifeboat/the rocks example). It also seems, given your reply to objection 3, that you might really like its approach (if you are not familiar with it already). Subsection 7 of this SEP article (https://plato.stanford.edu/entries/contractualism/) gives a good overview of the case that is tied to the one you discuss. The idea of the separateness of persons, and the idea that one person's pain can't cancel out another person's pain, is well represented in Scanlon's work.

I also wonder whether the right way of representing an 'equal chance of being helped' in this model is not to flip a coin for each group, but to roll an N-sided die, where N is the total number of people who could be helped, and then to help whichever group contains the person whose number is rolled: that way everyone, in some sense, has a chance to be saved, and that chance is, in some sense, equal, without leading to the worrying conclusion that Bob's life and a million people's lives ought to be settled through a coin flip. (The coin-flipping decision theory could also be abused by dividing up groups differently: I can always re-describe the world so that a person in extreme pain whom I could help is in one group and all other people are in a different group, but then I can simply re-describe the world to move that person into the 'all other people' category and select another person, which seems to mean we can arbitrarily increase the odds of any one person being the right person to help simply by moving them between the categories, which seems wrong.)

Comment author: Jeffhe  (EA Profile) 16 March 2018 10:17:28PM *  0 points [-]

Hi Jonathan,

Thanks for directing me to Scanlon's work. I am adequately familiar with his view on this topic, at least the one that he puts forward in What We Owe to Each Other. There, he tried to put forward an argument to explain why we should save the greater number in a choice situation like the one involving Bob, Amy and Susie, which respected the separateness of persons, but his argument has been well refuted by people like Michael Otsuka (2000, 2006).

Regarding your second point, what reason can you give for giving each person less than the maximum equal chance possible (e.g. 50%) aside from wanting to sidestep a conclusion that is worrying to you? Suppose I choose to give Bob, Amy and Susie each a 1% chance of being saved, instead of each a 50% chance of being saved, and I say to them, "Hey, none of you have anything to complain about because I'm technically giving each of you an equal chance, even though most likely, none of you will be saved." Each of them can reasonably protest that doing so does not treat them with the appropriate level of concern. Say then, I give each of them a 1/3 chance of being saved (as you propose we do) and again I say to them, "Hey, none of you have anything to complain about because I'm technically giving each of you an equal chance." Don't you think they can reasonably protest in the same way until I give them each the maximum equal chance (i.e. 50%)?

Regarding your third point, I don't see how I can divide up the groups differently. They come to me as given. For example, I can't somehow switch Bob and Amy's place such that the choice situation is one of either helping Amy or helping Bob and Susie. How would I do that?

Comment author: Kaj_Sotala 16 March 2018 01:02:36PM 1 point [-]

The following is roughly how I think about it:

If I am in a situation where I need help, then for purely selfish reasons, I would prefer people-who-are-capable-of-helping-me to act in such a way that has the highest probability of helping me. Because I obviously want my probability of getting help, to be as high as possible.

Let's suppose that, as in your original example, I am one of three people who need help, and someone is thinking about whether to act in a way that helps one person, or to act in a way that helps two people. Well, if they act in a way that helps one person, then I have a 1/3 chance of being that person; and if they act in a way that helps two people, then I have a 2/3 chance of being one of those two people. So I would rather prefer them to act in a way that helps as many people as possible.

I would guess that most people, if they need help and are willing to accept help, would also want potential helpers to act in such a way that maximizes their probability of getting help.

Thus, to me, reason and empathy would say that the best way to respect the desires of people who want help, is to maximize the amount of people you are helping.

Comment author: Jeffhe  (EA Profile) 16 March 2018 06:11:47PM 0 points [-]

Hi Kaj,

Thanks for your response. Please refer to my conversation with brianwang712. It addresses this objection!

Comment author: kbog  (EA Profile) 19 March 2018 09:15:55PM *  0 points [-]

But that seems counter to what reason and empathy would lead me to do.

What? It seems to be exactly what reason and empathy would lead one to do. Reason and empathy don't tell you to arbitrarily save fewer people. At best, you could argue that empathy pulls you in neither direction, while conceding that it's still more reasonable to save more rather than fewer. You've not written an argument, just a bald assertion. You're dressing it up to look like a philosophical argument, but there is none.

P1. The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1).

This is because it was stipulated from the outset that Amy, Susie and Bob would each suffer from an equally painful disease if we didn’t help them. Relatedly, and as suggested earlier, it’s not like Amy and Susie would each somehow suffer more than Bob would suffer just because there would be two of them suffering; they would each suffer what they would each suffer (which is no more than what Bob would suffer) and no more. They surely can’t – and therefore wouldn’t – suffer each other’s pain too. For example, Amy cannot, on top of her own suffering, also suffer Susie’s pain, because Susie’s pain cannot be transferred to Amy, and vice versa.

This doesn't answer the objection. There is more suffering when it happens to two people, and more suffering is morally worse. The fact that the level of suffering in each person is the same doesn't imply that they are morally equivalent outcomes. It's like if I said, "safer cars will reduce the number of car fatalities," and then you protested "but EACH CAR FATALITY WILL BE JUST AS BAD", totally ignoring the point that I'm making.

Here, I assume you would say that we should save Emma from the major headache

This is a textbook case of begging the question. No one you're arguing with will grant that we should act differently for cases 2 and 3.

Comment author: Jeffhe  (EA Profile) 19 March 2018 11:17:29PM *  0 points [-]

1) "Reason and empathy don't tell you to arbitrarily save fewer people."

I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved.

2) "This doesn't answer the objection."

That premise (as indicated by "P1."), plus my support for that premise, was not meant to answer an objection. It was just the first premise of an argument that was meant to answer objection 1.

3) "There is more suffering when it happens to two people, and more suffering is morally worse."

Yes, there are more instances of suffering. But as I have tried to argue, x instances of suffering spread across x people is just as morally bad as 1 instance of the same kind of suffering had by one other person. If by 'more suffering' you meant worse suffering in an experiential sense, then please see my first response to Michael.

4) "The fact that the level of suffering in each person is the same doesn't imply that they are morally equivalent outcomes."

I didn't say it was implied. If I thought it was implied, then my response to Objection 1 would have been much shorter.

5) "This is a textbook case of begging the question."

I don't see how my assumption is anywhere near what I want to conclude. It seems to me like an assumption that is plausibly shared by all. That's why I assumed it in the first place: to show that my conclusion can be arrived at from shared assumptions.

6) "No one you're arguing with will grant that we should act differently for cases 2 and 3."

I would hesitate to use "No one". If this were true, then I would have expected more comments along those lines. More importantly, I wonder why one wouldn't grant that we should act differently in choice situations 2 and 3. If the reason boils down to the thought that 5 minor pains are experientially worse than 1 major pain, regardless of whether the 5 minor pains are all felt by one person or spread across 5 different people, then I would point you to my conversation with Michael_S.

Finally, I just want to say that all the people I've conversed with on this forum so far have been very friendly and not dismissive, despite perhaps some differences in view. I wasn't surprised by that because (presumably) most people on here are effective altruists, and it would seem rather odd for an effective altruist - someone who identifies with helping the less fortunate - to be unfriendly or dismissive. Anyways, I do hope to remain unsurprised by that. I think only in a friendly and non-dismissive atmosphere can the interlocutors benefit from their conversation.

Comment author: kbog  (EA Profile) 20 March 2018 12:33:32AM *  0 points [-]

I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved

But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending.

Yes, there is more instances of suffering. But as I have tried to argue, x instances of suffering spread across x people is just as morally bad as 1 instance of the same kind of suffering had by one other person.

But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken.

I didn't say it was implied.

But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount.

I don't see how my assumption is anywhere near what I want to conclude.

Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not.

It seems to me like an assumption that is plausibly shared by all.

OK, well: it's not.

More importantly, I wonder why one wouldn't grant that we should act differently in choice situations 2 and 3.

Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur.

I would hesitate to use "No one". If this were true, then I would have expected more comments along those lines.

brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out.

If the reason boils down to the thought that 5 minor pains is experientially worse than 1 major pain, regardless if the 5 minor pains are all had felt by one person or spread across 5 different people, then I would point you to my conversation with Michael_S

Look, your comments towards him are very long and convoluted. I'm not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with "updates" alongside copies of your original comments, I find it almost painful to look through.

Finally, I just want to say that all the people I've conversed with on this forum so far have been very friendly and not dismissive, despite perhaps some differences in view. I wasn't surprised by that because (presumably) most people on here are effective altruists, and it would seem rather odd for an effective altruist - someone who identifies with helping the less fortunate - to be unfriendly or dismissive. Anyways, I do hope to remain unsurprised by that. I think only in a friendly and non-dismissive atmosphere can the interlocutors benefit from their conversation.

I don't see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards. The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere. Conversations mustn't be friendly to be informative, and I'm really not being dismissive about anything you write which I do have the time to read.

Comment author: Jeffhe  (EA Profile) 20 March 2018 03:35:47AM *  0 points [-]

1) "But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending."

To arbitrarily save fewer people is to save them on a whim. I am not suggesting that we should save them on a whim. I am suggesting that we should give each person an equal chance of being saved. They are completely different ideas.

2) "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken."

Please show me where I supposed that 5 minor headaches are MORALLY worse when they happen to one person than when they happen to multiple people. In both choice situations 2 and 3, I provided REASONS for saying

A) why 5 minor headaches all had by one person is morally worse than 1 major headache had by one person, and

B) why 1 major headache had by one person is morally worse than 5 minor headaches spread across 5 people.

From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don't say that I supposed this. I provided reasons. You can reject those reasons, but that is a different story.

If you meant that I supposed that 5 minor headaches are EXPERIENTIALLY worse when they happen to one person than when they happen to multiple people, sure, it can be inferred from what I wrote that I was supposing this. But importantly, making this assumption is not a stretch, as it seems (at least to me) like an assumption plausibly shared by many. But it turns out that Michael_S disagreed, at which time I was glad to defend this assumption. More importantly, even if I made this supposition (as we have to start from somewhere), it does not mean that by doing so, I was simply assuming and not arguing for what you quoted.

3) "But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount."

If you don't see an argument in my response to Objection 1, I'll live with that since I put a lot of time into writing that essay and no one else has said the same.

4) "Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not."

By basic arguments, I presume you mean utilitarian arguments. First off, I was not writing this for a utilitarian audience. I was writing this for an audience that finds it intuitive to save Amy and Susie instead of Bob, and I was trying to show how other (perhaps more basic) intuitions that I assumed were commonly held (i.e. saving one person from a major headache instead of 5 people each from a minor headache) could provide the ingredients for showing that we should provide each of them with an equal chance of being helped.

If I were writing this strictly for a utilitarian audience, I would have taken a different approach, which would have included explaining why 5 pains all had by one person is experientially worse than 5 pains spread across 5 people.

Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.

5) "brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out."

Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-averse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain.

6) "Look, your comments towards him are very long and convoluted. I'm not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with "updates" alongside copies of your original comments, I find it almost painful to look through."

I'm sorry you find them convoluted. I updated the very first replies to Brian and Michael_S in order to try to make my position more clear for first-time readers like you. I spent a lot of time on trying to make my replies more clear because I don't want to waste reader's time. If I failed to do that, I can only say I tried.

7) "I don't see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards."

I never asked for gentle standards. I asked for a non-dismissive and friendly attitude.

8) "The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere."

I didn't quite understand the latter half, but yes, their time is valuable, which is why I've tried to be as clear I can. In any case, it is a good thing to critically examine one's own views from time to time, no matter how vital one's time seems under the supposition of that view. So - if I understood the latter part correctly - you needn't worry so much about saving other people's time from my post.

9) "Conversations mustn't be friendly to be informative, and I'm really not being dismissive about anything you write which I do have the time to read."

A person (at least speaking for myself) is much more receptive to the content of another's comment when they are put in a friendly (though demanding) manner. Thus, friendliness helps make conversation more informative.

Whereas dismissive and unfriendly comments like "I'm not about to wade through it just to find the specific 1-2 sentences where you go astray." or "I find it almost painful to look through." do not.

P.S. I will not be replying to any more of your comments that I feel are either uncharitable, dismissive or shows a lack of effort spent on understanding my position.

Oops, I just noticed I missed a comment you made:

10) "Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur."

As I see it, a case or state of affairs in which 5 minor headaches are all felt by one person is MORALLY WORSE than a case in which 5 minor headaches are spread across 5 persons because 5 minor headaches all felt by one person is EXPERIENTIALLY WORSE than 5 minor headaches spread across 5 persons.

I take experience to be the only morally relevant factor, and in this way, I am a moral singularist (as opposed to pluralist). For why I think the former is experientially worse than the latter, please at least read my first reply to Michael_S. Thanks.

Comment author: kbog  (EA Profile) 20 March 2018 07:53:15AM *  0 points [-]

From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don't say that I supposed this. I provided reasons.

You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here.

If you don't see an argument in my response to Objection 1, I'll live with that since I put a lot of time into writing that essay and no one else has said the same.

My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yeah there's an argument, but its premise is both contentious and undefended.

Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.

I'm not just speaking for utilitarians, I'm speaking for anyone who doesn't buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.

Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-adverse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain

The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances; it's an analysis of what we would expect of them as the rational thing to do. So the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and that it would be rational of people not to make that choice, whereas the preference utilitarian simply calibrates the utility scale to people's preferences, so that there isn't any dissonance between what people would select and what utilitarianism says.

Comment author: Jeffhe  (EA Profile) 20 March 2018 05:59:02PM *  0 points [-]

1) "You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here."

I DO NOT simply assert this. In case 3, I wrote, "Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being saved because a major headache is morally worse than 5 minor headaches spread across 5 persons and it's morally worse BECAUSE a major headache hurts more (in some non-arbitrary sense) than the 5 minor headaches spread across 5 people. Here, the non-arbitrary sense is straightforward: Emma would be hurting more than any one of the 5 others who would each experience only 1 minor headache." (I capped 'because' for emphasis here)

You would not buy that reason I gave (because you believe 5 minor headaches, spread across 5 people, is experientially worse than a major headache), but that is a different story. If you were more charitable and patient while reading my post, thinking about who my audience is (many of whom aren't utilitarians and don't buy into interpersonal aggregation of pains) etc, I doubt you would be leveling all the accusations you have against me. It wastes both your time and my time to have to deal with them.

2) "My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yeah there's an argument, but its premise is both contentious and undefended."

I was just using your words. You said "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people." As I said, I assumed a premise that I thought the vast majority of my audience would agree with (i.e., at bottom, that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people). If YOU find that premise contentious, great, we can have a discussion about it, but please don't make it sound like my argument doesn't do any work for anyone.

3) "I'm not just speaking for utilitarians, I'm speaking for anyone who doesn't buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well."

Well, I don't, which is why I assumed the premise in the first place. I mean I wouldn't assume a premise that I thought the majority of my audience would disagree with. It's certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.

4) "The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances; it is an analysis of what it would be rational for them to choose. The hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory, so it would be rational of people not to make that choice, whereas the preference utilitarian simply calibrates the utility scale to people's preferences, so that there is no dissonance between what people would select and what utilitarianism says."

Sorry, I'm not familiar with the axioms of expected utility theory or with preference utilitarianism. But perhaps I can understand your position by asking 2 questions:

1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: in a trade-off situation between saving a person from torture or saving another person from torture AND saving a third person from a minor headache, the latter two are to be saved? 2) In an actual trade-off situation of this kind, would you think we ought to save the latter two?

Comment author: kbog  (EA Profile) 20 March 2018 09:12:57PM *  0 points [-]

Well, I don't, which is why I assumed the premise in the first place. I mean I wouldn't assume a premise that I thought the majority of my audience will disagree with. It's certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.

But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob's diseases in the first place, making your claim that Amy and Susie's diseases are not experientially worse than Bob's disease and so on.

Sorry, I'm not familiar with the axioms of expected utility theory or with preference utilitarianism.

PU says that we should assign moral value on the basis of people's preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give things the same weight that people themselves give them. If you say that someone is being risk-averse, that means (iff you're using the term correctly) that they're putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the von Neumann-Morgenstern utility theorem, which (one would argue, or assert) means that they are being irrational.
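To make the point about risk aversion concrete, here is a toy calculation (the numbers and the `expected_utility` helper are my own, purely illustrative, not anything from the thread): an agent facing a 50/50 gamble between no pain and severe pain has a higher expected utility from the gamble than from a certain moderate pain, so an expected-utility maximizer takes the gamble; an agent who refuses it anyway, to avoid any risk of the worst outcome, is risk-averse in the sense used above.

```python
# Toy illustration of risk aversion vs. expected-utility maximization.
# Utilities are hypothetical: 0 = no pain, -100 = torture, -60 = certain moderate pain.

def expected_utility(lottery):
    """Expected utility of a lottery given as (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

gamble = [(0.5, 0.0), (0.5, -100.0)]   # 50% chance of no pain, 50% chance of torture
certain_outcome = -60.0                # moderate pain for sure

eu_gamble = expected_utility(gamble)   # 0.5 * 0 + 0.5 * (-100) = -50.0

# An expected-utility maximizer prefers the gamble (-50 > -60).
# An agent who nonetheless takes the certain -60 to rule out any chance of
# torture is risk-averse, and forgoes expected utility by doing so.
print(eu_gamble > certain_outcome)     # True
```

The example only shows the arithmetic of the dispute; whether maximizing expected utility is the right standard of rationality is exactly what the two commenters go on to debate.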

1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: in a trade-off situation between saving a person from torture or saving another person from torture AND saving a third person from a minor headache, the latter two are to be saved? 2) In an actual trade-off situation of this kind, would you think we ought to save the latter two?

Yes to both.

Comment author: Jeffhe  (EA Profile) 21 March 2018 01:14:02AM *  0 points [-]

1) "But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob's diseases in the first place, making your claim that Amy and Susie's diseases are not experientially worse than Bob's disease and so on."

The reason why I discussed those three cases was to answer the basic question: what makes one state of affairs morally worse than another. Indeed, given my broad audience, some who have no philosophy background, I wanted to start from the ground up.

From that discussion, I gathered two principles that I used to support premise 2 of my argument against Objection 1. I say "gathered" and not "deduced" because you actually don't disagree with those two principles, even though you disagree with an assumption I made in one of the cases (i.e. case 3). What your disagreement with that assumption indicates is a disagreement with premise 1 of my argument against Objection 1.

P1. read: "The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1)."

You disagree because you think Amy's and Susie's pains would together be experientially worse than Bob's pain.

All this is to say that I don't think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.

However, it seems like I really should have defended P1 of my argument (and, similarly, my assumption in case 3) more thoroughly. So I do admit that my post is lacking in this respect, which I appreciate your pointing out. I'm also sure there are ways to make it more clear and concise. I will consider your suggested approach during future editing sessions.

Update (Mar 21): After thinking through what you said some more, I've decided I'm going to re-do my response to Objection 1 along the lines of what you're suggesting. Thanks for motivating this improvement.

2) "PU says that we should assign moral value on the basis of people's preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give things the same weight that people themselves give them. If you say that someone is being risk-averse, that means (iff you're using the term correctly) that they're putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the von Neumann-Morgenstern utility theorem, which (one would argue, or assert) means that they are being irrational."

Thanks for that explanation. I see where I went wrong in my previous reply now, so I concede this point.

3) "Yes to both."

Ok, interesting. And, just out of curiosity, you don't consider this as biting a bullet? I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because helping the other two comes with the additional, and relatively trivial, benefit of relieving a minor headache.

P.S. I will reply to your other comment after I've read the paper you linked me to. But, I do want to note that you were being very uncharitable in your reply that "Stipulations can't be true or false - they're stipulations. It's a thought experiment for epistemic purposes." Clearly stipulations/suppositions cannot be false relative to the thought experiment. But surely they can be false relative to reality - to what is actually the case.

Comment author: kbog  (EA Profile) 24 March 2018 09:29:37PM *  0 points [-]

I don't think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.

But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.

If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susie/Bob but disagreed on the headaches.

Ok, interesting. And, just out of curiosity, you don't consider this as biting a bullet?

How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?

I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.

I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because helping the other two comes with the additional, and relatively trivial, benefit of relieving a minor headache.

Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don't see what could possibly lead one to prefer it.

And merely having a "chance" of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don't sit on the mere fact of the chance and covet it as though it were something to value on its own.

Comment author: Jeffhe  (EA Profile) 27 March 2018 10:54:58PM *  0 points [-]

1) "But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.

If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susie/Bob but disagreed on the headaches."

Arguing for what factors are morally relevant in determining whether one case is morally worse than another is preliminary to arguing that some specific case (i.e. Amy and Susie suffering) is morally just as bad as another specific case (i.e. Bob suffering). My 3 cases were only meant to do the former. From the 3 cases, I concluded:

  1. That the amount of pain is a morally relevant factor in determining whether one case is morally worse than another.

  2. That the number of instances of pain is a morally relevant factor only to the extent that they affect the amount of pain at issue. (i.e. the number of instances of pain is not morally relevant in itself).

I take that to be preliminary work. Where I really dropped the ball was in my lackluster argument for P1 (and, likewise, for my assumption in case 3). No utilitarian would have found it convincing, and thus I would not have succeeded in convincing them that the outcome in which Amy and Susie both suffer is morally just as bad as the outcome in which only Bob suffers, even if they agreed with 1. and 2., which they do.

Anyways, to the extent that you think my argument for P1 sucked to the point where it was like I was begging the question against the utilitarian, I'm happy to concede this. I have since reworked my response to Objection 1 as a result, thanks to you.

2) "How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?

I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet."

Because you effectively deny the one person ANY CHANCE of being saved from torture SIMPLY BECAUSE you can prevent an additional minor headache - a very very very minor one - by helping the two. Anyways, a lot of people think that is pretty extreme. If you don't think so, that's perhaps mainly because you don't believe WHO SUFFERS MATTERS. If that's the case, then I would encourage you to reread my response to Objection 2, where I make the case that who suffers is of moral significance.

3) "Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don't see what could possibly lead one to prefer it."

You do give each party a 50% chance of being saved by choosing to flip a coin, instead of choosing to just help one party over the other. I prefer giving a 50% chance to each party because

A) I don't think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S's post),

B) I believe who suffers matters (given my response to Objection 2)

Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.

It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn't at one point actually have a 50% chance of being helped.

4) "And merely having a "chance" of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don't sit on the mere fact of the chance and covet it as though it were something to value on its own."

I agree that the only reason we value a chance of being saved is that it may lead to us actually being saved, and in that sense, we don't value it in itself. But I don't see why that entails that giving each party a 50% chance of being saved is not what we should do.

Btw, sorry I haven't replied to your response below brian's discussion yet. I haven't found the time to read that article you linked. I do plan to reply sometime soon.

Also, can you tell me how to quote someone's text in the way that you do in your responses to me? It is much cleaner than my number listing and quotations. Thanks.