
Update on Mar 21: I have completely reworked my response to Objection 1 to make it more convincing to some and hopefully clearer. I would also like to thank everyone who has responded thus far, in particular brianwang712, Michael_S, kbog and Telofy for sustained and helpful discussions.

Update on Apr 10: I have added a new objection (Objection 1.1) that captures an objection that kbog and Michael_S have raised to my response to Objection 1.  I'd also like to thank Alex_Barry for a sustained and helpful discussion.

Update on Apr 24: I have removed Objection 1.1 temporarily. It is undergoing revision to be clearer.

 

Hey everyone,

This post is perhaps unlike most on this forum in that it questions the validity of effective altruism rather than assumes it.

A. Some background:

I first heard about effective altruism when professor Singer gave a talk on it at my university a few years ago while I was an undergrad. I was intrigued by the idea. At the time, I had already decided that I would donate the vast majority of my future income to charity because I thought that preventing and/or alleviating the intense suffering of others is a much better use of my money than spending it on personal luxuries. However, the idea of donating my money to effective charities was a new one to me. So, I considered effective altruism for some time, but soon I came to see a problem with it that to this day I cannot resolve. And so I am not an effective altruist (yet).

Right now, my stance is that the problem I've identified is a very real problem. However, given that so many intelligent people endorse effective altruism, there is a good chance I have gone wrong somewhere. I just can’t see where. I'm currently working on a donation plan and completing the plan requires assessing the merits of effective altruism. Thus, I would greatly appreciate your feedback. 

Below, I state the problem I see with effective altruism, some likely objections and my responses to those objections.

Thanks in advance for reading! 

 

B. The problem I see with effective altruism:

Suppose we find ourselves in the following choice situation: With our last $10, we can either help Bob avoid an extremely painful disease by donating our $10 to a charity working in his area, or we can help Amy and Susie each avoid an equally painful disease by donating our $10 to a more effective charity working in their area, but we cannot help all three. Who should we help?

Effective altruism would say that we should help the group consisting of Amy and Susie since that is the more effective use of our $10. Insofar as effective altruism says this, it effectively denies Bob (and anyone else in his place) any chance of being helped. But that seems counter to what reason and empathy would lead me to do.

Yes, Susie and Amy are two people, and two is more than one, but were they to suffer (as would happen if we chose to help Bob), it is not like any one of them would suffer more than what Bob would otherwise suffer. Indeed, were Bob to suffer, he would suffer no less than either Amy or Susie. Susie’s suffering would be felt by Susie alone. Amy’s suffering would be felt by Amy alone. And neither of their suffering would be greater than Bob’s suffering. So why simply help them over Bob rather than give all of them an equal chance of being helped by, say, tossing a coin? (footnote 1)

Footnote 1: A philosopher named John Taurek first discussed this problem and proposed this solution in his paper "Should the Numbers Count?" (1977) 

 

C. Some likely objections and my responses:

Objection 1:

One might reply that two instances of suffering are morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie.

My response:

I don’t think two instances of suffering, spread across two people (e.g. Amy and Susie), is a morally worse case than one instance of the same kind of suffering had by one other person (e.g. Bob). I think these two cases are just as bad, morally speaking. Why’s that? Well, first of all, what makes one case morally worse than another? Answer: Morally relevant factors (i.e. things of moral significance, things that matter). Ok, and what morally relevant factors are present here? Well, experience is certainly one - in particular the severe pain that either Bob would feel or Susie and Amy would each feel, if not helped (footnote 2). Ok. So we can say that a case in which Amy and Susie would each suffer said pain is morally worse than a case in which only Bob would suffer said pain just in case there would be more pain or greater pain in the former case than in the latter case (i.e. iff Amy’s pain and Susie’s pain would together be experientially worse than Bob’s pain).

Footnote 2: In my response to Objection 2, it will become clear that I think something else matters too: the identity of the sufferer. In other words, I don't just think suffering matters, I also think who suffers it matters. However, unlike the morally relevant factor of suffering, I don't think it's helpful for our understanding to understand this second morally relevant factor as having an effect on the moral worseness of a case, although one could understand it this way. Rather, I think it's better for our understanding to accommodate its force via the denial that we should always prevent the morally worst case (i.e. the case involving the most suffering). If you find this result deeply unintuitive, then maybe it's better for your understanding to understand this second morally relevant factor as having an effect on the moral worseness of a case, which allows you to say that what we should always do is prevent the morally worse case. In any case, ignore the morally relevant factor of identity for now as I haven't even argued for why it is morally relevant.

Here, it's helpful to keep in mind that more/greater instances of pain do not necessarily mean more/greater pain. For example, 2 very minor headaches are more instances of pain than 1 major headache, but they need not involve more pain than a major headache (i.e., they need not be experientially worse than a major headache). Thus, while there would clearly be more instances of pain in the former case than in the latter case (i.e. 2 vs 1; Amy's and Susie's vs Bob's), that does not necessarily mean that there would be more pain.

So the key question for us then is this: Are 2 instances of a given pain, spread across two people (e.g. Amy and Susie), experientially worse (i.e. do they involve more/greater pain) than one instance of the same pain had by one person (e.g. Bob)? If they are (call this thesis “Y”), then a case in which Amy and Susie would each suffer a given pain is morally worse than a case in which only Bob would suffer the given pain. If they aren’t (call this thesis “N”), then the two cases are morally just as bad, in which case Objection 1 would fail, even if we agreed that we should prevent the morally worse case.

Here’s my argument against Y:

Suppose that 5 instances of a certain minor headache, all experienced by one person, are experientially worse than a certain major headache experienced by one person. That is, suppose that any person in the world who has an accurate idea/appreciation of what 5 instances of this certain minor headache feel like and of what this certain major headache feels like would prefer to endure the major headache over the 5 minor headaches if put to the choice. Under this supposition, someone who holds Y must also hold that 5 minor headaches, spread across 5 people, are experientially worse than a major headache had by one person. Why? Because, at bottom, someone who holds Y must also hold that 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by one person.

So let's assess whether 5 minor headaches, spread across 5 people, really are experientially worse than a major headache had by one person. Given the supposition above, consider first what makes a single person who suffers 5 minor headaches experientially worse off than a person who suffers just 1 major headache, other things being equal.

Well, imagine that we were this person who suffers 5 minor headaches. We suffer one minor headache one day, suffer another minor headache sometime after that, then another after that, etc. By the end of our 5th minor headache, we will have experienced what it’s like to go through 5 minor headaches. After all, we went through 5 minor headaches! Note that the what-it’s-like-of-going-through-5-headaches consists simply in the what-it’s-like-of-going-through-the-first-minor-headache then the what-it’s-like-of-going-through-the-second-minor-headache  then the what-it’s-like-of-going-through-the-third-minor-headache, etc. Importantly, the what-it’s-like-of-going-through-5-headaches is not whatever we experience right after having our 5th headache (e.g. exhaustion that might set in after going through many headaches or some super painful headache that is the "synthesis" of the intensity of the past 5 minor headaches). It is not a singular/continuous feeling like the feeling we have when we're experiencing a normal pain episode. It is simply this: the what-it’s-like of going through one minor headache, then another (some time later), then another, then another, then another. Nothing more. Nothing less.

Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches even though we in fact have experienced what it’s like to go through 5 minor headaches. As a result, if someone asked us whether we’ve been through more pain due to our minor headaches or more pain through a major headache that, say, we recently experienced, we would likely incorrectly answer the latter.

But, if we did have an accurate appreciation of what it’s like to go through 5 minor headaches, say, because we experienced all 5 minor headaches rather recently, then there would be a clear sense to us that going through them was experientially worse than the major headache. The 5 minor headaches would each be “fresh in our mind”, and thus the what-it’s-like-of-going-through-5-minor-headaches would be “fresh in our mind”. And with that what-it’s-like fresh in mind, it would seem clear to us that they caused us more pain than the major headache did.

Now, a headache being “fresh in our mind” does not mean that the headache needs to be so fresh that it is qualitatively the same as experiencing a real headache. Being fresh in our mind just means we have an accurate appreciation/idea of what it feels like, just as we have some accurate idea of what our favorite dish tastes like.

Because we have appreciations of our past pains (to varying degrees of accuracy), we sometimes compare them and have a clear sense that one set of pains is worse than another. But it is not the comparison and the clear sense we have of one set of pains being worse than another that ultimately makes one set of pains worse than another. Rather, it is the other way around: it is the what-it’s-like-of-having-5-minor-headaches that is worse than the what-it’s-like-of-having-a-major-headache. And if we have an accurate appreciation of both what-it’s-likes, then we will conclude the same. But, when we don’t, then our own conclusions could be wrong, like in the example provided earlier of a forgotten minor headache.

So, at the end of the day, what makes a person who has 5 minor headaches worse off than a person who has 1 major headache is the fact that he experienced the what-it’s-like-of-going-through-5-minor-headaches. 

But, in the case where the 5 minor headaches are spread across 5 people, there is no longer the what-it’s-like-of-going-through-5-minor-headaches because each of the 5 headaches is experienced by a different person. As a result, the only what-it’s-like that is present is the what-it’s-like-of-experiencing-one-minor-headache. Five different people each experience this what-it’s-like, but no one experiences what-it’s-like-of-going-through-5-minor-headaches. Moreover, the what-it’s-like of each of the 5 people cannot be linked to form the what-it’s-like-of-experiencing-5-minor-headaches because the 5 people are experientially independent beings.

Now, it's clearly the case that the what-it’s-like-of-going-through-1-minor-headache is not experientially worse than the what-it’s-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, there is nothing present that could be experientially worse than the what-it’s-like-to-go-through-a-major-headache in the case where the 5 minor headaches are spread across 5 people. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus, are not) worse, experientially speaking, than one major headache.

Indeed, five independent what-it's-likes-of-going-through-1-minor-headache is very different from a single what-it's-like-of-going-through-5-minor-headaches. And given a moment's reflection, one thing should be clear: only the latter what-it's-like can plausibly be experientially worse than a major headache. 

Thus, one should not treat 5 minor headaches spread across 5 people as being experientially just as bad as 5 minor headaches all had by 1 person. The latter is experientially worse than the former. The latter involves more/greater pain. 

We can thus make the following argument against Y:

P1) If Y is true, then 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by 1 person.

P2) But that is not the case (since 5 minor headaches all had by 1 person are experientially worse than 5 minor headaches spread across 5 people).

C) Therefore Y is false. And therefore Objection 1 fails, even if it's granted that we should prevent the morally worse case.
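For clarity, this argument is an instance of modus tollens and so is formally valid; the only question is whether its premises are true. Here is a minimal formal sketch (purely illustrative; Y is the thesis above, and Z stands for the claim that 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by 1 person):

```lean
-- Modus tollens: from (Y → Z) and ¬Z, conclude ¬Y.
-- Y : 2 instances of a pain across 2 people are experientially worse than 1 instance in 1 person.
-- Z : 5 minor headaches across 5 people are just as bad as 5 had by one person.
example (Y Z : Prop) (P1 : Y → Z) (P2 : ¬Z) : ¬Y :=
  fun y => P2 (P1 y)
```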

Objection 1.1: (temporarily removed for revision - see the Apr 24 update above)

Objection 1.2:

One might reply that experience is a morally relevant factor, but that when the amount of pain in each case is the same (i.e. when the cases are experientially just as bad), the number of people in each case also becomes a morally relevant factor. Since the case in which Amy and Susie would each suffer involves more people, it is still the morally worse case.

My response:

I will respond to this objection in my response to Objection 2.

Objection 1.3:

One might reply that the number of people involved in each case is a morally relevant factor in and of itself (i.e. completely independent of the amount of pain in each case). That is, one might say that the inherent moral relevance of the number of people involved in each case must be reconciled with the inherent moral relevance of the amount of pain in each case, and that therefore, in principle, a case in which many people would each suffer a relatively lesser pain can be morally worse than a case in which one other person would suffer a relatively greater pain, so long as there are enough people on the side of the many. For example, between helping a million people avoid depression and helping one other person avoid very severe depression, one might have the intuition that we should help the million, i.e. that a case in which a million people would suffer depression is morally worse.

My response:

I don’t deny that many people have this intuition, but I think this intuition is based on a failure to recognize and/or appreciate some important facts. In particular, I think that if you really kept in the forefront of your mind the fact that not one of the million would suffer worse than the one, and the fact that the million of them together would not suffer worse than the one (assuming my response to Objection 1 succeeds), then your intuition would not be as it is (footnote 3).

Nevertheless, you might still feel that the million people should have a chance of being helped. I agree, but this is not because of the sheer number of them involved. Rather, it is because which individual suffers matters. (Please see my response to Objection 2.)

Footnote 3: For those familiar with Derk Pereboom’s position in the free will debate, he makes an analogous point. He doesn’t think we have free will, but admits that many have the intuition that we do. But he points out that this is because we are generally not aware of the deterministic psychological/neurological/physical causes of our actions. But once we become aware of them – once we have them in the forefront of our minds – our intuition would not be that we are free. See p. 95 of Free Will, Agency, and Meaning in Life (Pereboom, 2014).

 

Objection 2:

One might reply that we should help Amy and Susie because either of their suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.

My response:

I don’t think one person’s suffering can neutralize/cancel out another person’s suffering because who suffers matters. Which individual it is that suffers matters because it is the sufferer who bears the complete burden of the suffering. It is the particular person who ends up suffering that feels all the suffering. This is an obvious fact, but it is also a very significant fact when properly appreciated, and I don’t think it is properly appreciated.

Think about it. The particular person(s) who suffers has to bear everything. If we save Amy and Susie, it is Bob - that particular vantage point on the world - who has to feel all of the suffering (which, it bears remembering, is suffering that would be no less painful than the suffering Amy and Susie would each otherwise endure). The same, of course, is true of each of Amy and Susie were we to save Bob.

I fear that saying any more might make the significance of the fact I’m pointing to less clear. For those who appreciate the significance of what I’m getting at, it should be clear that neither Amy’s nor Susie’s suffering can be used to neutralize/cancel out Bob’s suffering and vice versa. Yes, it’s the same kind of suffering, but it’s importantly different whether Amy and Susie each experiences it or Bob experiences it, because again, whoever experiences it is the one who has to bear all of it.

Notice that this response to objection 2 is importantly compatible with empathizing with every individual involved (e.g., Amy, Susie and Bob). Indeed, to empathize with only select individuals is biased. Yet, it seems to me that many people are in fact likely to forget to empathize with the group containing the fewer number. Note that as I understand it, to empathize with someone is to imagine oneself in their shoes and to care about that imagined perspective.

Also, notice that this response to Objection 2 deals with Objection 1.2 as well, since it argues against what seems to me the only plausible way in which the number of people involved might be thought to be relevant when the amount of pain involved in each case is the same. Namely, when the amounts of pain are the same, it might be thought that one person's pain can neutralize or cancel out another person's pain - e.g. that the suffering Amy would feel can neutralize or cancel out the suffering Bob would feel, leaving only the suffering that Susie would feel in play - and that therefore the case in which Amy and Susie would suffer is morally worse than the case in which Bob would suffer. But if my response to Objection 2 is right, then this thought is wrong.

Just to be clear, this is not to say that I think one person’s suffering cannot balance (or, in the case of greater suffering, outweigh) another person’s equal (or lesser) suffering such that the reasonable and empathetic thing to do is to give the person who would face the greater suffering a higher chance of being helped. In fact, I think it can. But balancing is not the same as neutralizing/canceling out. Bob’s suffering balances out Amy’s suffering and it also independently balances out Susie’s suffering precisely because Bob’s suffering does not get neutralized/cancelled out by either of their suffering.

My own view is that we should give the person who would face the greater suffering a higher chance of being saved in proportion to how much greater his suffering would be relative to the suffering that the other person(s) would each otherwise face. We shouldn't automatically help him just because he would face a greater suffering if not helped. After all, who suffers matters, and this includes those who would be faced with the lesser suffering if not helped (footnote 4).

Footnote 4: My own view is slightly more complicated than this, but those details aren't important given the simple sorts of choice situations discussed in this essay.

Going back to Objection 1.3, this then explains why I agree that we should still give those who would each suffer a less serious depression a chance of being helped, even though the one other person would suffer more if not saved. Importantly, the number of people who would each suffer the less serious depression is irrelevant. I would give them a chance of being saved whether they are 2 persons or a million or a billion. How high a chance would I give them? In proportion to how their depression compares in suffering to the single person’s severe depression. So, if it involves slightly less suffering, I would give them around a 48% chance of being helped. If it involves a lot less suffering, then I would give them a much lower chance (footnote 5). A small sketch of this rule in code follows the footnote.

Footnote 5: Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in the amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than endure the torture episode. This would explain why, in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords well with our intuitions.
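To make the proportional-chance rule concrete, here is a minimal sketch in code (purely illustrative: the linear weighting below is just one simple way of cashing out "in proportion to", and, per footnote 5, it would not apply to pains separated by the kind of gap described there):

```python
import random

def chance_for_lesser_side(lesser_pain: float, greater_pain: float) -> float:
    """Chance given to the side facing the lesser (per-person) pain,
    in proportion to how that pain compares to the greater pain.
    Linear weighting is an assumption, not a full statement of the view."""
    return lesser_pain / (lesser_pain + greater_pain)

def decide(lesser_pain: float, greater_pain: float) -> str:
    """Run the weighted lottery once."""
    p = chance_for_lesser_side(lesser_pain, greater_pain)
    return "help the lesser side" if random.random() < p else "help the greater side"

# Slightly less suffering (say, 92% as bad) yields roughly the 48% chance
# mentioned above: 0.92 / (0.92 + 1.0) ≈ 0.479.
print(chance_for_lesser_side(0.92, 1.0))
```

On this rule, equal suffering on both sides yields exactly the 50% coin flip defended throughout this post.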

 

Objection 3:

One might reply that from “the perspective of the universe” or the “moral perspective” or the “objective perspective”, either Amy’s or Susie’s suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.

My response:

As I understand it, the perspective of the universe is the impartial or unbiased perspective where personal biases are excluded from consideration. As a result, such a perspective entails that we should give equal weight to equal suffering. For example, whereas I would give more weight to my own suffering than to the equal suffering of others (due to the personal bias involved in my everyday personal perspective), if I took on the perspective of the universe, I would have to at least intellectually admit that their equal suffering matters the same amount as mine. Of course, it doesn’t matter the same amount as mine from my perspective. It matters the same amount as mine from the perspective of the universe that I have taken on. We might say it matters the same amount as mine period. However, none of this entails that, from the perspective of the universe, which individual suffers doesn’t matter – that whether it is I who suffers X or someone else who suffers X doesn’t matter. Clearly it does matter, for the reason I gave earlier. Giving equal weight to equal suffering does not entail that who suffers said suffering doesn’t matter. It is precisely because it matters that, in a choice situation in which we can either save person A from suffering X or person B from suffering X, we think we should flip a coin to give each an equal chance of being saved, rather than, say, choosing one of them to save on a whim. This is our way of acknowledging that A’s suffering is importantly different from B’s suffering - that who suffers matters.

Even if I'm technically wrong about what the perspective of the universe - as understood by utilitarians - amounts to, all that shows is that the perspective of the universe, so understood, is not the moral perspective. For who suffers matters (assuming my response to Objection 2 is correct), and so the moral perspective must be one from which this fact is acknowledged. Any perspective from which it isn't therefore cannot be the moral perspective. 

  

D. Conclusion:

I therefore think that according to reason and empathy, Bob should be accorded the same chance of being helped (say, via flipping a coin) as Amy and Susie. This conclusion holds regardless of the number of people that are added to Amy and Susie’s group, as long as the kind of suffering remains the same. So, for example, if with a $X donation we can either help Bob avoid an extremely painful disease or help a million other people avoid the same painful disease, but not both, reason and empathy would say to flip a coin – a conclusion that is surely against effective altruism.

 

E. One final objection:

One might say that this conclusion is too counter-intuitive to be correct, and that therefore something must have gone wrong in my reasoning, even though it may not be clear what that something is.

My response:

But is it really all that counter-intuitive when we bear in mind all that I have said? Importantly, let us bear in mind three facts:

1) Were we to save the million people instead of Bob, Bob would suffer in a way that is no less painful than any one of the million others otherwise would. Indeed, he would suffer in a way that is just as painful as any one among the million. Conversely, were we to save Bob, no one among the million would suffer in a way that is more painful than Bob would otherwise suffer. Indeed, the most any one of them would suffer is the same as what Bob would otherwise suffer.

2) The suffering of the million would involve no more pain than the pain Bob would feel (assuming my response to Objection 1 is correct). That is, a million instances of the given painful disease, spread across a million people, would not be experientially worse - would not involve more pain or greater pain - than one instance of the same painful disease had by Bob. (Again, keep in mind that more/greater instances of a pain do not necessarily mean more/greater pain.)

3) Were we to save the million and let Bob suffer, it is he – not you, not me, and certainly not the million of others – who has to bear that pain. It is that particular person, that unique sentient perspective on the world who has to bear it all.

In such a choice situation, reason and empathy tell me to give him an equal chance to be saved. To just save the million seems to me to completely neglect what Bob has to suffer, whereas my approach seems to neglect no one.

Comments (124)

Brian Wang

One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate the suffering of either two of them or one of them, but they didn't know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie's suffering vs. alleviating Amy's suffering, or Bob + Amy's suffering vs. Susie's suffering, etc.), then they would all sign an agreement directing you to send a donation such that you would alleviate two people's suffering rather than one, since this would give each of them the best chance of having their suffering alleviated. This is related to Rawls' veil of ignorance argument.

And if Bob, Susie, Amy, and a million others were to sign an agreement directing your choice to donate $X to alleviate one person's suffering or a million people's suffering, again all of them behind a veil of ignorance, none of them would hesitate for a second to sign an agreement that said, "Please donate such that you would alleviate a million people's suffering, and please oh please don't just flip a coin."
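To make the ex-ante arithmetic explicit, here is a small sketch (assuming, as the veil-of-ignorance setup does, that each person regards themselves as equally likely to occupy any of the three positions):

```python
from fractions import Fraction

# Three positions: Bob stands alone; Amy and Susie stand together.

# Policy 1: always help the larger group. The two positions on the larger
# side get helped, so a person equally likely to occupy any position is
# helped with probability 2/3.
p_save_majority = Fraction(2, 3)

# Policy 2: flip a fair coin between {Bob} and {Amy, Susie}. Whichever
# position you occupy, your side wins the flip with probability 1/2.
p_coin_flip = Fraction(1, 2)

# Behind the veil, everyone prefers the policy with the higher chance of
# being helped, so all three would sign on to saving the larger group.
assert p_save_majority > p_coin_flip
print(p_save_majority, p_coin_flip)  # 2/3 1/2
```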

More broadly spea...

Jeffhe
Hi Brian, Thanks for your comment and for reading my post! Here's my response: Bob, Susie and Amy would sign the agreement to save the greater number if they assumed that they each had an equal chance of being in any of their positions. But, is this assumption true? For example, is it actually the case that Bob had an equal chance to be in Amy's or Susie's position? If it is the case, then saving the greater number would in effect give each of them a 2/3 chance of being saved (the best chance as you rightly noted). But if it isn't, then why should an agreement based on a false assumption have any force? Suppose Bob, in actuality, had no chance of being in Amy's or Susie's position; is it then really in accordance with reason and empathy to save Amy and Susie and give Bob zero chance? Intuitively, for Bob to have had an equal chance of being in Amy's position or Susie's position or his actual position, he must have had an equal chance of living Amy's life or Susie's life or his actual life. That's how I intuitively understand a position: as a life position. To occupy someone's position is to be in their life circumstances - to have their life. So understood, what would it take for Bob to have had an equal chance of being in Amy's position or Susie's position or his own? Presumably, it would have had to be the case that Bob was just as likely to have been born to Amy's parents or Susie's parents or his actual parents. But this seems very unlikely because the particular “subject-of-experience” or “self” that each of us are is probably biologically linked to our ACTUAL parents' cells. Thus another parent could not give birth to us, even though they might give birth to a subject-of-experience that is qualitatively very similar to us (i.e. same personality, same skin complexion, etc). Of course, being in someone's position need not be understood in this demanding (though intuitive) way. For example, maybe to be in Amy's position just requires being in her actual l
Brian Wang
I do think Bob has an equal chance to be in Amy's or Susie's position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don't know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don't know anything about their own propensities to get disease X or disease Y. Given this state of knowledge, Bob, Susie, and Amy all have the same chance as each other of getting disease X vs. disease Y, and so signing the agreement is rational. Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance. Regarding your second point, I don't think EA's are necessarily committed to saving a billion people each from a fairly painful disease vs. a single person being burned alive. That would of course depend on how painful the disease is, vs. how painful being burned alive is. To take the extreme cases, if the painful disease were like being burned alive, except just with 1% less suffering, then I think everybody would sign the contract to save the billion people suffering from the painful disease; if the disease were rather just like getting a dust speck in your eye once in your life, then probably everyone would sign the contract to save the one person being burned alive. People's
Jeffhe
It would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in any one's position. Of course, if a person is behind the veil of ignorance and thus lacks relevant knowledge about his/her position, it might SEEM to him/her that he/she has an equal chance of being in any one's position, and he/she might thereby be led to make this mistake and consequently choose to save the greater number. In any case, what I just said doesn't really matter because you go on to say, "Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance." Let us then suppose that Bob, in fact, had no chance of being in either Amy's or Susie's position. Now imagine Bob asks you why you are choosing to save Amy and Susie and giving him no chance at all, and you reply, "Look, Bob, I wish I could help you too, but I can't help all. And the reason I'm not giving you any chance is that if you, Amy and Susie were all behind the veil of ignorance and were led to assume that each of you had an equal chance of being in anyone else's position, then all of you (including you, Bob) would have agreed to the principle of saving the greater number in the kind of case you find yourself in now." Don't you think Bob can reasonably reply, "But Brian, whether or not I make that assumption under the veil of ignorance is irrelevant. The fact of the matter is that I had no chance of being in Amy's or Susie's position. What you should do shouldn't be based on what I would agree to in a condition where I'm imagined as making a
Brian Wang
Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; on the contrary, Bob's argument that you should give him a 1/3 chance of being helped even though he wouldn't have signed on to such a decision behind the veil of ignorance, simply because of the actual position he finds himself in, is a sign of defection. This is not to slight Bob here -- of course it's very understandable for him to be afraid and to want a chance of being helped given his position. Rather, it's simply a statement that if everybody argued as Bob did (not just regarding charity donations, but in general), we'd be living in a much unhappier society. If you're unmoved by this framing, consider this slightly different framing, illustrated by a thought experiment: Let's say that Bob successfully argues his case to the donor, who gives Bob a 1/2 chance of being helped. For the purpose of this experiment, it's best to not specify who in fact gets helped, but rather to just move forward with expected utilities. Assuming that his suffering was worth -1 utility point, consider that he netted 1/2 of an expected utility point from the donor's decision to give everyone an equal chance. (Also assume that all realized painful incidents hereon are worth -1 utility point, and realized positive incidents are worth +1 utility point.) The next day, Bob gets into a car accident, putting both him and a separate individual (say, Carl) in the hospital. Unfortunately, the hospital is short on staff that day, so the doctors + nurses have to make a decision. They can either spend their time to help Bob and Carl with their
Jeffhe
Hey Brian, No worries! I've enjoyed our exchange as well - your latest response is both creative and funny. In particular, when I read "They have read your blog post on the EA forum and decide to flip a coin", I literally laughed out loud (haha). It's been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to. Btw, for the benefit of first-time readers, I've updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I've also kept in the response what I originally wrote. Just wanted to let you know. Now onto my response. You write, "In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach." This would be true if Bob has an equal chance of being in any of the positions of a given future trade off situation. That is, Bob would have a higher chance of being in the majority in any given future trade off situation if Bob has an equal chance of being in any of the positions of a given trade off situation. Importantly, just because there are more positions on the majority side of a trade off situation, that does not automatically mean that Bob has a higher chance of being among the majority. His probability or chance of being in each of the positions is crucial. I think you were implicitly assuming that Bob has an equal chance of being in any of the positions of a future trade off situation because he doesn't know his future. But, as I mentioned in my previous post, it would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in any one's position. So, just because Bob doesn't know anything about his future, it does not mean that he has an equal chance
kbog
It's a stipulation of the Original Position, whether you look at Rawls' formulation or Harsanyi's. It's not up for debate.
Jeffhe
Hey kbog, Thanks for your comment. I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false. Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.
kbog
The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational. My reply to Bob would be to essentially restate brianwang's original comment, and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument, and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.
Jeffhe
1) "The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational." The conclusions are rational under the stipulation that each person has an equal chance of being in anybody's position. But it is not actually rational given that the stipulation is false. So you can't just say that the conclusions have a bearing on reality because they are necessarily rational. They are rational under the stipulation, but not when you take into account what is actually the case. And I don't see how the conclusion is fair to Bob when the conclusion is based on a false stipulation. Bob is a real person. He shouldn't be treated like he had an equal chance of being in Amy's or Susie's position, when he in fact didn't. 2) "My reply to Bob would be to essentially restate brianwang's original comment..." Sorry, can you quote the part you're referring to? 3) "...and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument." Can you explain what this "utilitarian principle of indifference argument" is? 4) "and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments." Please don't distort what I said. I had him say, "The fact of the matter is that I had no chance of being in Amy's or Susie's position.", which is very different from saying that he was not Amy or Susie. If he wasn't Amy or Susie, but actually had an equal chance of being either of them, then I would take the veil of ignorance approach more seriously. I added the part that he is scared because I wanted it to sound realistic. It is uncharitable to assume that that forms part of my argument.
kbog
The argument of both Rawls and Harsanyi is not that it just happens to be rational for everybody to agree to their moral criteria; the argument is that the morally rational choice for society is a universal application of the rule which is egoistically rational for people behind the veil of ignorance. Of course it's not egoistically rational for people to give anything up once they are outside the veil of ignorance, but then they're obviously making unfair decisions, so it's irrelevant to the thought experiment. Stipulations can't be true or false - they're stipulations. It's a thought experiment for epistemic purposes. The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system. Also, to be clear, the Original Position argument doesn't say "imagine if Bob had an equal chance of being in Amy's or Susie's position, see how you would treat them, and then treat him that way." If it did, then it would simply not work, because the question of exactly how you should actually treat him would still be undetermined. Instead, the argument says "imagine if Bob had an equal chance of being in Amy's or Susie's position, see what decision rule they would agree to, and then treat them according to that decision rule." The first paragraph of his first comment. This very idea, originally argued by Harsanyi (http://piketty.pse.ens.fr/files/Harsanyi1975.pdf).
Jeffhe
Hey Brian, I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify why we should save the greater number is that it would force you to conclude that, in a trade off situation where you can either save one person from an imminent excruciating pain (i.e. being burned alive) or another person from the same severe pain PLUS a third person from a very minor pain (e.g. a sore throat), we should save the second and third person and give 0 chance to the first person. I think it was F. M. Kamm who first raised this objection to the veil-of-ignorance approach in her book Morality, Mortality, Vol. 1. (I haven't actually read the book). Interestingly, kbog - another person I've been talking with on this forum - accepts this result. But I wonder if others like yourself would. Imagine Bob, Amy and Susie were in a trade off situation of the kind I just described, and imagine that Bob never actually had a chance to be in Amy's or Susie's position. In such a situation, do you think you should just save Amy and Susie?
Brian Wang
Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It's interesting that you and I have such intuitions about such a case – I see that as in the category of "being so obvious to me that I wouldn't even have to hesitate to choose." But obviously you have different intuitions here. Part of what I'm confused about is what the positive case is for giving everyone an equal chance. I know what the positive case is for the approach of automatically saving two people vs. one: maximizing aggregate utility, which I see as the most rational, impartial way of doing good. But what's the case for giving everyone an equal chance? What's gained from that? Why prioritize "chances"? I mean, giving Bob a chance when most EAs would probably automatically save Amy and Susie might make Bob feel better in that particular situation, but that seems like a trivial point, and I'm guessing is not the main driver behind your reasoning. One way of viewing "giving everyone an equal chance" is to give equal priority to different possible worlds. I'll use the original "Bob vs. a million people" example to illustrate. In this example, there's two possible worlds that the donor could create: in one possible world Bob is saved (world A), and in the other possible world a million people are saved (world B). World B is, of course, the world that an EA would create every time. As for world A, well: can we view this possible world as anything but a tragedy? If you flipped a coin and got this outcome, would you not feel that the world is worse off for it? Would you not instantly regret your decision to flip the coin? Or even forget flipping the coin, we can take donor choice out of it; wouldn't you feel that a world where a hurricane ravaged and destroyed an urban community where a million people lived is worse than
Jeffhe
Hi Brian, I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy's burning to death plus Susie's sore throat involves more or greater pain than Bob's burning to death. Since you think minimizing aggregate pain (i.e. maximizing aggregate utility) is what we should do, your reason for just saving Amy and Susie is clear. But importantly, I don't share your belief that Amy's burning to death and Susie's sore throat involves more or greater pain than Bob's burning to death. On this note, I completely reworked my response to Objection 1 a few days ago to make clear why I don't share this belief, so please read that if you want to know why. On the contrary, I think Amy's burning to death and Susie's sore throat involves just as much pain as Bob's burning to death. So part of the positive case for giving everyone an equal chance is that the suffering on either side would involve the same LEVEL/AMOUNT of pain (even though the suffering on Amy's and Susie's side would clearly involve more INSTANCES of pain: i.e. 2 vs 1.) But even if the suffering on Amy's and Susie's side would involve slightly greater pain (as you believe), there is a positive case for giving Bob some chance of being saved, rather than 0. And that is that who suffers matters, for the reason I offered in my response to Objection 2. I think that response provides a very powerful reason for giving Bob at least some chance, and not no chance at all, even if his pain would be less great than Amy's and Susie's together. (My response to Objection 3 makes clear that giving Bob some chance is not in conflict with being impartial, so that response is relevant too if you think doing so is being partial) At the end of the day, I think one's intuitions are based on one's implicit beliefs and what one implicitly takes into consideration. Thus, if we shared the same implicit beliefs and implicitly to

Michael_S

"Choice situation 3: We can either save Al, and four others each from a minor headache or Emma from one major headache. Here, I assume you would say that we should save Emma from the major headache"

I think you're making a mistaken assumption here about your readers. Conditional on agreeing that 5 minor headaches in one person are worse than 1 major headache in one person, I would feel exactly the same if they were spread out over 5 people. I expect the majority of EAs would as well.

Jeffhe
Hi Michael, Thanks very much for your response. UPDATE (ADDED ON MAR 16): I have shortened the original reply as it was a bit repetitive and made improvements in its clarity. However, it is still not optimal. Thus I have written a new reply for first-time readers to better appreciate my position. You can find the somewhat improved original reply at the end of this new reply (if interested): To be honest, I just don't get why you would feel the same if the 5 minor headaches were spread across 5 people. Supposing that 5 minor headaches in one person is (experientially) worse than 1 major headache in one person (as you request), consider WHAT MAKES IT THE CASE that the single person who suffers 5 minor headaches is worse off than a person who suffers just 1 major headache, other things being equal. Well, imagine that we were this person who suffers 5 minor headaches. We suffer one minor headache one day, suffer another minor headache sometime after that, then another after that, etc. By the end of our 5th minor headache, we will have experienced what it’s like to go through 5 minor headaches. After all, we went through 5 minor headaches! Note that the what-it’s-like-of-going-through-5-headaches consists simply in the what-it’s-like-of-going-through-the-first-minor-headache then the what-it’s-like-of-going-through-the-second-minor-headache then the what-it’s-like-of-going-through-the-third-minor-headache, etc. Importantly, the what-it’s-like-of-going-through-5-headaches is NOT whatever we experience right after having our 5th headache (e.g. exhaustion that might set in after going through many headaches or some super painful headache that is the "synthesis" of the intensity of the past 5 minor headaches). It is NOT a singular/continuous feeling like the feeling we have when we're experiencing a normal pain episode. It is simply this: the what-it’s-like of going through one minor headache, then another (sometime later), then another, then another, then another. Noth
Michael_S
If a small headache is worth 2 points of disutility and a large headache is worth 5, then the total amount of pain across the 5 small headaches is worse because 2*5 = 10 > 5. It's a pretty straightforward total utilitarian interpretation. I find it irrelevant whether there's one person who's worse off; the total amount of pain is larger. I'll also note that I find the concept of personhood to be incoherent in itself, so it really shouldn't matter at all whether it's the same "person". But while I think an incoherent personhood concept is sufficient for saying there's no difference if it's spread out over 5 people, I don't think it's necessary. Simple total utilitarianism gets you there.
Jeffhe
I assume we agree that we determine the points of disutility of the minor and major headache by how they each feel to someone. Since the major headache hurts more, it's worth more points (5 in this case). But, were a single person to suffer all 5 minor headaches, he would end up having felt what it is like to go through 5 headaches - a feeling that would make him say things like "Going through those 5 minor headaches is worse/more painful than a major headache" or "There was more/greater/larger pain in going through those 5 minor headaches than a major headache". We find these statements intelligible. But that is because we're at a point in life where we too have felt what it is like to go through multiple minor pains, and we too can consider (i.e. hold before our mind) a major pain in isolation, and compare these feelings: the what-it's-like of going through multiple minor pains vs the what-it's-like of going through a major pain. But once the situation is that the 5 minor headaches are spread across 5 people, there is no longer the what-it's-like-of-going-through-5-minor-headaches, just 5 independent what-it's-likes-of-going-through-1-minor-headache. As a result, in this situation, when you say "the total amount of pain [involved in 5 minor headaches] is worse [than one major headache]", or that "the total amount of pain [involved in 5 minor headaches] is larger [than one major headache]", there is nothing to support their intelligibility. So, I honestly don't understand these statements. Sure, you can use numbers to show that 10 > 5, but there is no reality that that maps on to (i.e. describes). I worry that representing pain in numbers is extremely misleading in this way. Regarding personhood, I think my position just requires me to be committed to there being a single subject-of-experience (is that what you meant by person?) who extends through time to the extent that it can be the subject of more than one pain episode. I must admit I know very little about the t
Michael_S
I think this is confusing means of estimation with actual utils. You can estimate that 5 headaches are worse than one by asking someone to compare five headaches vs. one. You could also produce an estimate by just asking someone who has received one small headache and one large headache whether they would rather receive 5 more small headaches or one more large headache. But there's no reason you can't apply these estimates more broadly. There's real pain behind the estimates that can be added up.
Jeffhe
I agree with the first half of what you said, but I don't agree that "there's no reason you can't apply these estimates more broadly" (e.g. to a situation where 5 minor headaches are spread across 5 persons). Sure, a person who has felt only one minor headache and one major headache can say "If put to the choice, I think I'd rather receive another major headache than 5 more minor headaches", but he says this as a result of imagining roughly what it would be like for him to go through 5 of this sort of minor headache and comparing that to what it was like for him to go through the one major headache. Importantly, what is supporting the intelligibility of his statement is STILL the what-it's-like-of-going-through-5-minor-headaches, except that this time (unlike in my previous reply), the what-it's-like-of-going-through-5-minor-headaches is imagined rather than actual. But in the situation where the 5 minor headaches are spread across 5 people, there isn't a what-it's-like-of-going-through-5-minor-headaches, imagined or actual, to support the intelligibility of the claim that 5 minor headaches (spread across 5 people) are worse or more painful than a major headache. What there is are five independent what-it's-like-of-going-through-1-minor-headache, since 1) the 5 people are obviously experientially independent of each other (i.e. each of them can only experience their own pain and no one else's), and 2) each of the 5 people experiences just one minor headache. But these five independent what-it's-likes can't support the intelligibility of the above claim. None of these what-it's-likes are individually worse or more painful than the major headache. And they cannot collectively be worse or more painful than the major headache because they are experientially independent of each other. The what-it's-like-of-going-through-5-minor-headaches is importantly different from five independent what-it's-like-of-going-through-1-minor-headache, and only the former can support the
Michael_S
It's the same 5 headaches. It doesn't matter if you're imagining one person going through it on five days or imagining five different people going through it on one day. You can still imagine 5 headaches. You can imagine what it would be like to, say, live the lives of 5 different people for one day with and without a minor headache. Just as you can imagine living the life of one person for 5 days with and without a headache. The connection to an individual is arbitrary and unnecessary. Now this goes into the meaninglessness of personhood as a concept, but what would even count as the individual in your view? For simplicity, let's say 2 modest headaches in one person are worse than one major headache. What if between the two headaches, the person gets a major brain injury and their personality is completely altered (as has happened in real life). Let's say they also have no memory of their former self. Are they no longer the same person? Under your view, is it no longer possible to say that the two modest headaches are worse than the major headache? If it still is, why is it possible after this radical change in personality with no memory continuity but impossible between two different people?
0
Jeffhe
If I'm understanding you correctly, you essentially deny that there is a metaphysical difference (i.e. a REAL difference) between:

A. One subject-of-experience experiencing 5 headaches over 5 days (say, one headache per day), and

B. Five independent subjects-of-experience each experiencing 1 headache over 5 days (say, each subject has their 1 headache on a different day, such that on any given day, only one of them has a headache).

And you deny this BECAUSE you think that, in case A for example, there simply is no fact of the matter as to how many subjects-of-experience there were over those 5 days IN THE FIRST PLACE, and NOT because you think one subject-of-experience going through 5 headaches IS IDENTICAL to five independent subjects-of-experience each going through 1 headache. Also, you are not simply saying that we don't KNOW how many subjects-of-experience there were over those 5 days in case A, but that there actually isn't an answer to how many there were. The indeterminateness is "built into the world" so to speak, and not just existing in our state of mind. You therefore think it is arbitrary to say that one subject-of-experience experienced all 5 headaches over the 5 days or that 5 subjects-of-experience each experienced 1 headache over the 5 days.

But importantly, IF there were a fact of the matter as to how many subjects-of-experience there are in any given time period, you would NOT continue to think that there is no metaphysical difference between cases A and B. And this is because you agree that one subject-of-experience going through 5 headaches is not identical to five independent subjects-of-experience each going through 1 headache. You would say, "Obviously they are not identical. The problem, however, is that - in case A, for example - there simply is no fact of the matter as to how many subjects-of-experience there were over those 5 days IN THE FIRST PLACE, so saying that one subject-of-experience experienced all 5 headaches is arbitrary." I h
0
Michael_S
I'd say I'm making two arguments:

1) There is no distinct personal identity; rather, it's a continuum. The you today is different from the you yesterday. The you today is also different from the me today. These differences are matters of degree. I don't think there is clearly a "subject of experience" that exists across time. There are too many cases (e.g. brain injuries that change personality) that the single consciousness theory can't account for.

2) Even if I agreed that there was a distinct difference in kind that represented a consistent person, I don't think it's relevant to the moral accounting of experiences. I.e., I don't see why it matters whether experiences are "independent" or not. They're real experiences of pain.
0
Jeffhe
1) I agree that the me today is different from the me yesterday, but I would say this is a qualitative difference, not a numerical difference. I am still the numerically same subject-of-experience as yesterday's me, even though I may be qualitatively different in various physical and psychological ways from yesterday's me. I also agree that the me today is different from the you today, but here I would say that the difference is not merely qualitative, but numerical too. You and I are numerically different subjects-of-experience, not just qualitatively different. Moreover, I would agree that our qualitative differences are a matter of degree and not of kind. I am not a chair and you a subject-of-experience. We are both embodied subjects-of-experience (i.e. of that kind), but we differ to various degrees: you might be taller or lighter-skinned, etc. I thus agreed with all your premises and have shown that they can be compatible with the existence of a subject-of-experience that extends through time. So I don't quite see a convincing argument for the lack of the existence of a subject-of-experience that extends through time.

2) So here you're granting me the existence of a subject-of-experience that extends through time, but you're saying that it makes no moral difference whether one subject-of-experience suffers 5 minor headaches or 5 numerically different subjects-of-experience each experience 1 minor headache, and that therefore, we should just focus on the number of headaches. Well, as I tried to explain in previous replies, when there is one subject-of-experience who extends through time, it is possible for him to experience what it's like of going through 5 minor headaches, since after all, he experiences all 5 minor headaches (whether he remembers experiencing them or not). Moreover, it is ONLY the what-it's-like-of-going-through-5-minor-headaches that can plausibly be worse or more painful than the what-it's-like-of-going-through-a-major-headache. In cont
0
Michael_S
1) I'd like to know what your definition of "subject-of-experience" is.

2) For this to be true, I believe you would need to posit something about "conscious experience" that is entirely different from everything else in the universe. If, say, factory A produces 15 widgets, factory B produces 20 widgets, and factory C produces 15 widgets, I believe we'd agree that the number of widgets produced by A+C is greater than the number of widgets produced by B, no matter how independent the factories are. Do you disagree with this? Similarly, I'd say that if 15 neural impulses occur in brain A, 20 in brain B, and 15 in brain C, the number of neural impulses is greater in A+C than in B. Do you disagree with this? Conscious experiences are a product of such neural chemical reactions. Do you disagree with this? Given this, it seems odd to then postulate that even though all the ingredients are the same and are additive between individuals, the conscious product is not. It seems arbitrary and unnecessary to explain anything, and there is no reason to believe it is true.
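To put the additivity claim in rough symbols (just shorthand for the toy numbers above, nothing more):

widgets(A) + widgets(C) = 15 + 15 = 30 > 20 = widgets(B)

impulses(A) + impulses(C) = 15 + 15 = 30 > 20 = impulses(B)

The counts sum the same way however they are grouped across factories or brains; the question between us is whether pain adds up in this way too.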
0
Jeffhe
1) A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is, for example, why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience - it presumably has visual experiences and pain experiences and all sorts of other experiences. Or more technically, a subject-of-experience (or multiple) may be realized by a cow's physical system (i.e. brain). There would be a single subject-of-experience if all the experiences realized by the cow's physical system are felt by a single subject. Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject.

2) But when we say that 5 minor headaches is "worse" or "more painful" than a major pain, we are not simply making a "greater than, less than, or equal to" number comparison, like saying 5 minor headaches is more headaches than 1 major headache. Clearly 5 minor headaches, whether they are spread across 5 persons or not, is more headaches than 1 major headache. But that is irrelevant. Because the claim you're making is that 5 minor headaches, whether they are spread across 5 persons or not, is WORSE or MORE PAINFUL than 1 major headache. And this is where I disagree. I am saying that for 5 minor headaches to be plausibly worse than a major headache, it must be the case that there is a what-it's-like-of-going-through-5-minor-headaches, because only THAT KIND of experience can be plausibly worse or more painful than a major headache. But, for there to be THAT KIND of experie
0
Michael_S
That's what I'm interested in a definition of. What makes it a "single subject"? How is this a binary term?

I am making a greater than/less than comparison. That comparison is with pain, which results from the neural chemical reactions. There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don't see any reason to treat this differently than the underlying chemical reactions.

No problem on the caps.
0
Jeffhe
REVISED TO BE MORE CLEAR ON MAR 19: You also write, "There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don't see any reason to treat this differently than the underlying chemical reactions."

Well, to me the reason is obvious: when we say that 5 minor pains in one person is greater than (i.e. worse than) a major pain in one person, we are using "greater than" in an EXPERIENTIAL sense. On the other hand, when we say that 10 neural impulses in one person is greater than 5 neural impulses in one person, we are using "greater than" in a QUANTITATIVE/NUMERICAL sense. These two comparisons are very different in their nature. The former is about the relative STRENGTH of the pains, the latter is about the relative QUANTITIES of neural impulses.

So just because 10 neural impulses is greater than 5 neural impulses in the numerical sense, whether the 10 impulses take place in 1 brain or 5 brains, that does NOT mean that 5 minor pains is greater than 1 major headache in the experiential sense, whether the 5 minor pains are realized in 1 brain or 5 brains. This relates back to why I said it can be very misleading to represent pain comparisons in numerals like 5*2 > 5. Such representations do not distinguish between the two senses described above, and thus can easily lead one to conflate them.
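In rough shorthand (my own notation, nothing standard): write n(X) for the number of pain-instances involved in X, and e(X) for how bad X feels to a single subject who undergoes all of X. The numerical sense compares n's: n(5 minor pains) = 5 > 1 = n(1 major pain), and this holds however the pains are distributed across brains. The experiential sense compares e's: e(5 minor pains, one subject) vs e(1 major pain, one subject), and it is only defined where there is a single subject who feels the relevant what-it's-like. A numeral representation like 5*2 > 5 collapses the two senses into one string of symbols.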
0
Jeffhe
Just to make sure we're on the same page here, let me summarize where we're at:

In choice situation 2 of my paper, I said that supposing that any person would rather endure 5 minor headaches of a certain sort than 1 major headache of a certain sort when put to the choice, then a case in which Al suffers 5 such minor headaches is morally worse than a case in which Emma suffers 1 such major headache. And the reason I gave for this is that Al's 5 minor headaches is more painful (i.e. worse) than Emma's major headache.

In choice situation 3, however, the 5 minor headaches are spread across 5 different people: Al and four others. Here I claim that the case in which Emma suffers a major headache is morally worse than a case in which the 5 people each suffer 1 minor headache. And the reason I gave for this is that Emma's major headache is more painful (i.e. worse) than each of the 5 people's minor headaches.

Against this, you claim that if the supposition from choice situation 2 carries over to choice situation 3 - the supposition that any person would rather endure 5 minor headaches than 1 major headache if put to the choice - then the case in which the 5 people each suffer 1 minor headache is morally worse than Emma suffering a major headache. And your reason for saying this is that you think 5 minor headaches spread across the 5 people is more painful (i.e. worse) than Emma's major headache. THAT is what I took you to mean when you wrote: "Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people."

As a result, this whole time, I have been trying to explain why it is that 5 minor headaches spread across five people CANNOT be more painful (i.e. worse) than a major headache, even while the same 5 minor headaches all had by one person can (and would be, under the supposition). Importantly, I never took myself to be disagreeing with you on whether 5 instances
0
Michael_S
To your first comment, I disagree. I think it's the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain? I think there is more total pain. I'm not counting the # of headaches. I'm talking about the total amount of pain.

Can you define S1? We may not, as these discussions tend to go. I'm fine calling it. I think we have to get closer to defining a subject of experience (S1); I think I would need this to go forward.

But here's my position on the issue: I think moral personhood doesn't make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury). The matter in the brain is also different over time (ship of Theseus). I don't see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn't be. The conscious experiences are different at different times and in different brains; I see this as a matter of degree of similarity.
0
Jeffhe
Hi Michael,

I removed the comment about worrying that we might not reach a consensus because I worried that it might send you the wrong idea (i.e. that I don't want to talk anymore). It's been tiring, I have to admit, but also enjoyable and helpful. Anyways, you clearly saw my comment before I removed it. But yeah, I'm good with talking on.

I agree that experiences are the result of chemical reactions; however, the relations "X being experientially worse than Y" and "X being greater in number than Y" are relevantly different in nature. Someone by the name of "kbog" recently read my very first reply to you (the updated edition) and raised basically the same concern as you have here, and I think I have responded to him pretty aptly. So if you don't mind, can you read my discussion with him: http://effective-altruism.com/ea/1lt/is_effective_altruism_fundamentally_flawed/dmu I would have answered you here, but I'm honestly pretty drained from replying to kbog, so I hope you can understand. Let me know what you think.

Regarding defining S1, I don't think I can do better than to say that S1 is a thing that has, or is capable of having, experience(s). I add the phrase 'or is capable of having' this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I'd like to think that I am still around - i.e. that the particular subject-of-experience that I am is still around. However, it's also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That's spooky, but perhaps true. Anyways, I can't seem to figure out why you need any better a definition of a subject-of-experience than that. I feel like my definition sufficiently distinguishes it from other kinds of things. Moreover, I have provided you with a criterion for identity over time. Shouldn't this be enoug
0
Michael_S
FYI, I'm pretty busy over the next few days, but I'd like to get back to this conversation at some point. If I do, it may be a while, though.
0
Jeffhe
No worries!
0
kbog
Because I don't have any reason to feel different. Imagine if I said, "5 headaches among tall people would be better than 5 headaches among short people." And then you said, "no, it's the same either way. Height is irrelevant." And then I replied, "I just don't get why you would feel the same if the people are tall or short!" In that case, clearly I wouldn't be giving you a response that carries any weight. If you want to show that the cases are different in a relevant way, then you need to spell it out. In the absence of reasons to say that there is a difference, we assume by default that they're similar. The third sentence does not follow from the second. This is like saying "there is nothing present in a Toyota Corolla that could make it weigh more than a Ford F-150, therefore five Toyota Corollas cannot weigh more than a Ford F-150." Just because there is no one element in a set of events that is worse than a bad thing doesn't mean that the set of events is not worse than the bad thing. There are lots of events where badness increases with composition, even without using aggregative utilitarian logic. E.g.: it is okay to have sex with Michelle, and it is okay to marry Tiffany, but it is not okay to do both.
0
Jeffhe
1) "Because I don't have any reason to feel different." Ok, well, that comes as a surprise to me. In any case, I hope after reading my first reply to Michael_S, you at least sort of see how it could be possible that someone like I would feel surprised by that, even if you don't agree with my reasoning. In other words, I hope you at least sort of see how it could be possible that someone who would clearly agree with you that, say, 5 minor headaches all had by 1 tall person is experientially just as bad as 5 minor headaches all had by 1 short person, might still disagree with you that 5 minor headaches all had by 1 person is experientially just as bad as 5 minor headaches spread across 5 people. 2) "If you want to show that the cases are different in a relevant way, then you need to spell it out. In the absence of reasons to say that there is a difference, we assume by default that they're similar." That's what my first reply to Michael_S, in effect, aimed to do. 3) "The third sentence does not follow from the second. This is like saying "there is nothing present in a Toyota Corolla that could make it weigh more than a Ford F-150, therefore five Toyota Corollas cannot weigh more than a Ford F-150." Just because there is no one element in a set of events that is worse than a bad thing doesn't mean that the set of events is not worse than the bad thing. There are lots of events where badness increases with composition, even without using aggregative utilitarian logic. E.g.: it is okay to have sex with Michelle, and it is okay to marry Tiffany, but it is not okay to do both." Your reductio-by-analogy (I made that phrase up) doesn't work, because your analogy is relevantly different. In your analogy, we are dealing with the relation of _ being heavier than _, whereas I'm dealing with the relation of _ being experientially worse than _. These relations are very different in nature: one is quantitative in nature, the other is experiential in nature. You might insist th
0
kbog
Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time.

There are two rooms, painted bright orange inside. One person goes into the first room for five minutes; five people go into the second for one minute. If we define orange-perception as the phenomenon of one conscious mind's perception of the color orange, the amount of orange-perception for the group is the same as the amount of orange-perception for the one person.

Something being experiential doesn't imply that it is not quantitative. We can clearly quantify experiences in many ways, e.g. I had two dreams, I was awake for thirty seconds, etc. Or me and my friends each saw one bird, and so on.

Yes, but the question here is whether 5 what-it's-likes-of-going-through-1-minor-headache is 5x worse than 1 minor headache. We can believe this moral claim without believing that the phenomenon of 5 separate headaches is phenomenally equivalent to 1 experience of 5 headaches. There are lots of cases where A is morally equivalent to B even though A and B are physically or phenomenally different.
0
Jeffhe
1) "Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time." I disagree. I was precisely trying to guard against such thoughts by enriching my first reply to Michael_S with a case of forgetfulness. I wrote, "Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches EVEN THOUGH we in fact have experienced what it’s like to go through 5 minor headaches." (I added the caps here for emphasis) The point I was trying to make in that passage is that if one person (i.e. one subject-of-experience) experienced all 5 minor headaches, then whether he remembers them or not, the fact of the matter is that HE felt all of them, and insofar as he has, he is experientially worse off than someone who only felt a major headache. Of course, if you asked him at the end of his 5th minor headache whether HE thinks he's had it worse than someone with a major headache, he may say "no" because, say, he has forgotten about some of the minor headaches he's had. But that does NOT MEAN that, IN FACT, he did not have it worse. After all, the what-it's-like-of-going-through-5-minor-headaches is experentially worse than one major headache, and HE has experienced the former, whether he remembers it or not. So, if my memory is wiped each time after getting tortured, of course it still matters how many times I'm tortured. Because I WILL have the experience of being tortured a second time, whether or not I VIEW that experience as such. 2) "There are two rooms, painted
0
kbog
But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate. Of course you can define a relation to have that property, but merely defining it that way gives us no reason to think that it should be the focus of our moral concern. If I were to define a relation to have the property of being the target of our moral concern, it wouldn't be impacted by how it were spread across multiple people. Well, so do I. The point is that the mere fact that 5 headaches in one person is worse for one person doesn't necessarily imply that it is worse overall for 5 headaches among 5 people.
0
Jeffhe
Hi kbog, glad to hear back from you.

1) "But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate."

I don't quite understand how this is a response to what I said, so let me retrace some things: You first claimed that if I believed that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people, then I would be committed to "believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time" and this is a problem.

I replied that it does matter how many times I get tortured because even if my memory is wiped each time, it is still ME (as opposed to a numerically different subject-of-experience, e.g. you) who would experience torture again and again. If my memory is wiped, I will incorrectly VIEW each additional episode of torture as the first one I've ever experienced, but it would not BE the first one I've ever experienced. I would still experience what-it's-like-of-going-through-x-number-of-torture-episodes even if after each episode, my memory was wiped. Since it's the what-it's-like-of-going-through-x-number-of-torture-episodes (and not my memory of it) that is experientially worse than something else, and since X is morally worse than Y when X is experientially worse (i.e. involves more pain) than Y, therefore, it does matter how many times I'm tortured irrespective of my memory.

Now, the fact that you said that I "will never have the experience of being tortured a second time" suggests that you think that memory-continuity is necessary for being the numerically same subject-of-experience (i.e. person). If this were true, then every time a person's memory is wiped, a numerically different person comes into existence and so no person would experience w
0
kbog
The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people. There isn't any morally relevant difference between these experiences, as the mere fact that the latter happens to be split among five people isn't morally relevant. So we should suppose that they are morally similar.

You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers. It's just plain old cardinal utility: the sum of the amount of pain experienced by each person.

Why? In the exact same way that you think they can.

Correct, we haven't, because we're not yet doing any interpersonal comparisons.

It is distributed - 20% of it is in each of the 5 people who are in pain.
0
Jeffhe
1) "The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people." One subject-of-experience having one headache five times = the experience of what-it's-like-of-going-through-5-headaches. (Note that the symbol is an equal sign in case it's hard to see.) Five headaches among five people = 5 experientially independent experiences of what-it's-like-of-going-through-1-headache. (Note the 5 experiences are experientially independent of each other because each is felt by a numerically different subject-of-experience, rather than all by one subject-of-experience.) The single subject-of-experience does not "therefore has the same experiences as five headaches among five people." 2) "You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers." Ok, and after adding up the numbers, what does the final resulting number refer to in reality? And in what sense does the referent (i.e. the thing referred to) involve more pain than a major headache? Consider the case in which the 5 minor headaches are spread across 5 people, and suppose each minor headache has an overall shittiness score of 2 and a major headache has an overall shittiness score of 6. If I asked you what '2' refers to, you'd easily answer the shitty feeling characteristic of what it's like to go through a minor-headache. And you would say something analogous for '6' if I asked you what it refers to. You then add up the five '2's and get 10. Ok, now, what does the '10' refer to? You cannot answer the shitty feeling characteristic of what it's like to go through 5 minor headaches, for this what-it's-like is not present since no individual feels all 5 headaches. The only what-it's-like that is present are 5 experientially inde
0
kbog
The fact that they are separate doesn't mean that their content is any different from the experience of the one person. Certainly, the amount of pain they involve isn't any different.

The total amount of suffering. Or, the total amount of well-being.

Because there are multiple people and each of them has their own pain.

The amount of pain experienced among five people. In the sense that each of them involves more than 1/5 as much pain, and the total pain among 5 feelings is the sum of pain in each of them.

Sure it's experiential; all 10 units of the pain are experienced. It's just not experienced by the same person. In the same way that there are more sheep apparitions among five people, each of them dreaming of two sheep, than for one person who is dreaming of six sheep. But as far as cardinal utility is concerned, both quantities involve the same amount of pain. That's just what you get from the definition of cardinal utility.

That just means I need a different account of "involves more pain than" (which I have) when interpersonal comparisons are being made, but it doesn't mean that my account can't be the same as your account when there is only one person. But as I have been telling you this entire time, I don't follow your definition of "experientially worse than".

Well, I already did. But it's really just the same as what utilitarians have been writing for centuries so it's not like I had to provide it.
0
Jeffhe
Yes, each of the 5 minor headaches spread among the 5 people is phenomenally or qualitatively the same as each of the 5 minor headaches of the one person. The fact that the headaches are spread does not mean that any of them, in themselves, feels any different from any of the 5 minor headaches of the one person. A minor headache feels like a minor headache, irrespective of who has it. Now, each such minor headache constitutes a certain amount of pain, so 5 such minor headaches constitute five such pain contents, and in THAT sense, five times as much pain. Moreover, since there are 5 such minor headaches in each case (i.e. the 1 person case and the 5 people case), each case involves the same amount of pain. This is so even if 5 minor headaches all had by one person (i.e. the what-it's-like-of-going-through-5-minor-headaches) is experientially different from 5 minor headaches spread across 5 people (5 experientially independent what-it's-likes-of-going-through-1-minor-headache).

Analogously, a visual experience of the color orange constitutes a certain amount of orange-ish feel, so 5 such visual experiences constitute 5 such orange-ish feels, and in THAT sense, 5 times as much orange-ish feel. If one person experienced 5 such visual experiences one right after another and we recorded these experiences on an "experience recorder", and did the same with 5 such visual experiences spread among 5 people (where they each have their visual experience one right after the other), and then we played back both recordings, the playbacks viewed from the point of view of the universe would be identical: if each visual experience was 1 minute long, then both playbacks would be 5 minutes of the same content. In this straightforward sense, 5 such visual experiences had by one person involve just as much orange-ish feel as 5 such visual experiences spread among 5 people. This is so even if the what-it's-like-of-going-through-5-such-visual-experiences is not experie
0
Alex_Barry
I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be "Effective Altruism is making an intellectual mistake" whereas you just actually seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage. The comments etc. then just seem to have mostly been people explaining why they don't find compelling your moral intuition that 'non-purely experientially determined' and 'purely experientially determined' amounts of pain cannot be compared. Since we seem to have reached a point where there is a fundamental disagreement about considered moral values, it does not seem that attempting to change each other's minds is very fruitful.

I think I would have found this post more conceptually clear if it had been structured:

1. EA conclusions actually require an additional moral assumption/axiom - and so if you don't agree with this assumption then you should not obviously follow EA advice.

2. (Optionally) Why you find the moral assumption unconvincing/unlikely

3. (Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.

Where throughout the assumption is the commensurability of 'non-purely experientially determined' and 'purely experientially determined' experience.

In general I am not very sure what you had in mind as the ideal outcome of this post. I'm surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of est
0
kbog
Just because two things are different doesn't mean they are incommensurate. It is easy to compare apples and oranges: for instance, the orange is healthier than the apple, the orange is heavier than the apple, the apple is tastier than the orange. You also compare two different things, by saying that a minor headache is less painful than torture, for instance. You think that different people's experiences are incommensurable, but I don't see why. In fact, there is good reason to think that any two values are necessarily commensurable. For if something has value to an agent, then it must provide motivation to them should they be perceiving, thinking and acting correctly, for that is basically what value is. If something (e.g. an additional person's suffering) does not provide additional motivation, then either I'm not responding appropriately to it or it's not a value. And if my motivation is to follow the axioms of expected utility theory then it must be a function over possible outcomes where my motivation for each outcome is a single number. And if my motivation for an outcome is a single number, then it must take the different values associated with that outcome and combine them into one figure denoting how valuable I find it overall.
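For reference, the standard statement of that last step (the usual von Neumann-Morgenstern expected utility result, not anything specific to this thread): if an agent's preferences over lotteries satisfy the expected utility axioms, then there is a utility function u such that any lottery L = (p1, o1; ...; pn, on) is ranked by the single number

U(L) = p1*u(o1) + ... + pn*u(on).

Whatever distinct values bear on an outcome oi, they must all be folded into the one figure u(oi) for this representation to go through.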
1
Jeffhe
But I didn't say that. As long as two different things share certain aspects/dimensions (e.g. the aspect of weight, the aspect of nutrition, etc...), they can of course be compared on those dimensions (e.g. the weight of an orange is more than the weight of an apple, i.e., an orange weighs more than an apple). So I don't deny that two different things that share many aspects/dimensions may be compared in many ways. But that's not the problem.

The problem is that when you say that the amount of pain involved in 5 minor headaches spread among 5 people is more than the amount of pain involved in 1 major headache (i.e., 5 minor headaches spread among 5 people involves more pain than 1 major headache), you are in effect saying something like: the WEIGHT of an orange is more than the NUTRITION of an apple. This is because the former "amount of pain" is used in a non-purely experiential sense while the latter "amount of pain" is used in a purely experiential sense. When I said you are comparing apples to oranges, THIS is what I meant.
0
kbog
No, I am effectively saying that the weight of five oranges is more than the weight of one orange.

That is wrong. In both cases I evaluate the quality of the experience multiplied by the number of subjects. It's the same aspect for both cases. You're just confused by the fact that, in one of the cases but not the other, the resulting quantity happens to be the same as the number provided by your "purely experiential sense". If I said "this apple weighs 100 grams, and this orange weighs 200 grams," you wouldn't tell me that I'm making a false comparison merely because both the apple and the orange happen to have 100 calories. There is nothing philosophically noteworthy here; you have just stumbled upon the fact that any number multiplied by one is still that number.

As if that isn't decisive enough, imagine for instance that it was a comparison between two sufferers and five, rather than between one and five. Then you would obviously have no argument at all, since my evaluation of the two people's suffering would obviously not be in the "purely experiential sense" that you talk about. So clearly I am right whenever more than one person is involved. And it would be strange for utilitarianism to be right in all those cases, but not when there was just one person. So it must be right all the time.
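Spelled out with the toy scores used elsewhere in this thread (illustrative numbers only): the metric is total(case) = per-subject intensity x number of subjects. So total(5 minor headaches across 5 people) = 2 x 5 = 10, while total(1 major headache in 1 person) = 6 x 1 = 6. In the one-person case the multiplication by 1 leaves the 6 unchanged, which is why the total there happens to coincide with the "purely experiential" number - that is the coincidence referred to above.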
1
Jeffhe
You'll need to read to the very end of this reply before my argument seems complete.

Case 1: 5 minor headaches spread among 5 people

Case 2: 1 major headache had by one person

Yes, I understand that in each case, you are multiplying a certain amount of pain (determined solely by how badly something feels) by the number of instances to get a total amount of pain (determined via this multiplication), and then you are comparing the total amount of pain in each case.

For example, in Case 1, you are multiplying the amount of pain of a minor headache (determined solely by how badly a minor headache feels) by the number of instances to get a total amount of pain (determined via this multiplication). Say each minor headache feels like a 2, then 2 x 5 = 10. Call this 10 "10A". Similarly, in Case 2, you are multiplying the amount of pain of a major headache (determined solely by how badly a major headache feels) by the number of instances, in this case just 1, to get a total amount of pain (determined via this multiplication). Say the major headache feels like a 6, then 6 x 1 = 6. Call this latter 6 "6A". You then compare the 10A with the 6A.

Moreover, since the amounts of pain represented by 10A and 6A are both gotten by multiplying one dimension (i.e. amount of pain, determined purely experientially) by another dimension (instances), you claim that you are comparing things along the same dimension, namely, A. But this is problematic.

To see the problem, consider Case 3: 5 minor headaches all had by 1 person. Here, like in Case 1, we can multiply the amount of pain of a minor headache (determined purely experientially) by the number of instances to get a total amount of pain (determined via this multiplication): 2 x 5 = 10. This 10 is of the 10A sort. OR, unlike in Case 1, we can determine the final amount of pain not by multiplying those things, but instead in the same way we determine the amount of pain of a single minor headache, namely, by considering how badly th
0
kbog
What I am working with "at bottom" is irrelevant here, because I'm not making a comparison with it. There are lots of things we compare that involve different properties "at bottom". And obviously the comparison we care about is not merely a comparison of how bad it feels for any given person.

No it doesn't. That is, if I were to apply the same logic to oranges that you do to people, I would say that there is Mono-Orange-Weight, defined as the most weight that is ever present in one of a group of oranges, and Multi-Orange-Weight, defined as the total weight that is present in a group of oranges, and insist that you cannot compare one to the other, so one orange weighs the same as five oranges. Of course that would be nonsense, as it's true that you can compare orange weights. But you can see how your argument fails. Because this is all you are doing; you are inventing a distinction between "purely experiential" and "non-purely experiential" badness and insisting that you cannot compare one against the other by obfuscating the difference between applying either metric to a single entity.

But that isn't how I determined that one person with a minor headache has 2 units of pain total.

You are right, I am comparing one person's "non purely experiential" headache to five people's "non purely experiential" headaches.

It's not reasonable to expect me to change my mind when you're repeating the exact same argument that you gave before while ignoring the second argument I gave in my comment.
0
Jeffhe
hey kbog, I didn't anticipate you would respond so quickly... I was editing my reply while you replied... Sorry about that. Anyways, I'm going to spend the next few days slowly re-reading and sitting on your past few replies in an all-out effort to understand your point of view. I hope you can do the same with just my latest reply (which I've edited). I think it needs to be read to the end for the full argument to come through. Also, just to be clear, my goal here isn't to change your mind. My goal is just to get closer to the truth, as cheesy as that might sound. If I'm the one in error, I'd be happy to admit it as soon as I realize it. Hopefully a few days of dwelling will help. Cheers.
0
kbog
What? It's the dimension of weight, where the weight of 5 oranges can be more than the weight of one big orange. Weight is still weight when you are weighing multiple things together. If you don't believe me, put 5 oranges on a scale and tell me what you see. The prior part of your comment doesn't do anything to change this.
0
Jeffhe
Hi kbog,

Sorry for taking a while to get back to you – life got in the way... Fortunately, the additional time made me realize that I was the one who was confused, as I now see very clearly the utilitarian sense of “involves more pain than” that you have been in favor of. Where this leaves us is with two senses of “involves more pain than” and with the question of which of the two senses is the one that really matters. In this reply, I outline the two senses and then argue for why the sense that I have been in favor of is the one that really matters.

The two senses: Suppose, for purposes of illustration, that a person who experiences 5 minor toothaches is experientially just as badly off as someone who experiences a major toothache. This supposition, of course, makes use of my sense of “involves more pain than” – the sense that analyzes “involves more pain than” as “is experientially worse than”. This sense takes two what-it’s-likes (e.g., the what-it’s-like-of-going-through-5-minor-toothaches vs the what-it’s-like-of-going-through-a-major-toothache) and compares them with respect to their what-it’s-like-ness – their feel. On this sense, 5 minor toothaches all had by one person involves the same amount of pain as 1 major toothache had by one person in that the former is experientially just as bad as the latter.

On your sense (though not on mine), if these 5 minor toothaches were spread across 5 people, they would still involve the same amount of pain as 1 major toothache had by one person. This is because having 1 major toothache is experientially just as bad as having 5 minor toothaches (i.e. using my sense), which entitles one to claim that the 1 major toothache is equivalent to 5 minor toothaches, since they give rise to distinct what-it’s-likes that are nevertheless experientially just as bad. At this point, it’s helpful to stipulate that one minor toothache = one base unit of pain. That is, let’s suppose that the what-it’s-like-of-going-through-one-minor-
0
kbog
The 5000 pains are only worse if 5000 minor pains experienced by one person is equivalent to one excruciating pain. If so, then 5000 minor pains for 5000 people being equivalent to one excruciating pain doesn't go against the badness of how things feel; at least it doesn't seem counterintuitive to me. Maybe you think that no amount of minor pains can ever be equally important as one excruciating pain. But that's a question of how we evaluate and represent an individual's well-being, not a question of interpersonal comparison and aggregation.
0
Jeffhe
Hey kbog, if you don't mind, let's ignore my example with the 5000 pains because I think my argument can more clearly be made in terms of my toothache example, since I have already laid a foundation for it. Let me restate that foundation and then state my argument in terms of my toothache example. Thanks for bearing with me.

The foundation: Suppose 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person. Given the supposition, you would claim: 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person. Let me explain what I think is your reasoning, step by step:

P1) 5 minor toothaches had by one person and 1 major toothache had by one person give rise to two different what-it's-likes that are nevertheless experientially JUST AS BAD. (By above supposition) (The two different what-it's-likes are: the what-it's-like-of-going-through-5-minor-toothaches and the what-it's-like-of-going-through-1-major-toothache.)

P2) Therefore, we are entitled to say that 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person. (By P1)

P3) 5 minor toothaches spread among 5 people is 5 minor toothaches, just as 5 minor toothaches had by one person is 5 minor toothaches, so there is the same quantity of minor toothaches (or same quantity of base units of pain) in both cases. (Self-evident)

P4) Therefore, we are entitled to say that 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person. (By P3)

P5) Therefore, we are entitled to claim that 5 minor toothaches spread among 5 people is equivalent to 1 major toothache had by one person. (By P2 and P4)

C) Therefore, 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person. (By P5)

As the illustrated reasoning shows, 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothach
0
kbog
No, both equivalencies are justified by the fact that they involve the same amount of base units of pain. Sure it does. The presence of pain is equivalent to feeling bad. Feeling bad is precisely what is at stake here, and all that I care about. Yes, that's what I meant when I said "that's a question of how we evaluate and represent an individual's well-being, not a question of interpersonal comparison and aggregation."
0
Jeffhe
So you're saying that just as 5 MiTs/5 people is equivalent to 5 MiTs/1 person because both sides involve the same amount of base units of pain, 5 MiTs/1 person is equivalent to 1 MaT/1 person because both sides involve the same amount of base units of pain (and not because both sides give rise to what-it's-likes that are experientially just as bad). (Here "MiT" abbreviates minor toothache and "MaT" major toothache.) My question to you then is this: On what basis are you able to say that 1 MaT/1 person involves 5 base units of pain?

Reason S cares about the amount of base units of pain there are because pain feels bad, but in my opinion, that doesn't sufficiently show that it cares about pain-qua-how-it-feels. It doesn't sufficiently show that it cares about pain-qua-how-it-feels because 5 base units of pain all experienced by one person feels a whole heck of a lot worse than anything felt when 5 base units of pain are spread among 5 people, yet Reason S completely ignores this difference. If Reason S truly cared about pain-qua-how-it-feels, it cannot ignore this difference.

I understand where you're coming from though. You hold that Reason S cares about the quantity of base units of pain precisely because pain feels bad, and that this fact alone sufficiently shows that Reason S is in harmony with the fact that we take pain to matter because of how it feels (i.e. that Reason S cares about pain-qua-how-it-feels). However, given what I just said, I think this fact alone is too weak to show that Reason S is in harmony with the fact that we take pain to matter because of how it feels. So I believe my objection stands.

Have we hit bedrock?
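One compact way to display the disagreement, in rough toy notation (mine, not kbog's): let each MiT = 1 base unit and, by the running supposition, 1 MaT = 5 base units. The sum measure is distribution-invariant:

sum(5 MiTs/1 person) = sum(5 MiTs/5 people) = 5 = sum(1 MaT/1 person).

A per-person experiential measure is not: worst(5 MiTs/1 person) = 5, but worst(5 MiTs/5 people) = 1 < 5 = worst(1 MaT/1 person), where worst(case) is the experiential load on the worst-off single subject. Reason S tracks the sum; my sense of "involves more pain than" tracks something like worst, which is why Reason S ignores the difference I am pointing at.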
0
kbog
Because you told me that it's the same amount of pain as five minor toothaches, and you also told me that each minor toothache is 1 base unit of pain.

If you mean that it feels worse to any given person involved, yes, it ignores the difference, but that's clearly the point, so I don't know what you're doing here other than merely restating it and saying "I don't agree." On the other hand, you do not care how many people are in pain, and you do not care how much pain someone experiences so long as there is someone else who is in more pain, so if anyone's got to figure out whether or not they "care" enough, it's you.

You've pretty much been repeating yourself for the past several weeks, so, sure.
0
Jeffhe
Where in the supposition or the line of reasoning that I laid out earlier (i.e. P1) through to P5)) did I say that 1 major toothache involves the same amount of pain as 5 minor toothaches? I attributed that line of reasoning to you because I thought that was how you would get to C) from the supposition that 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person. But you then denied that that line of reasoning represents your line of reasoning. Specifically, you denied that P1) is the basis for asserting P2). When I asked you what your basis for P2) is, you asserted that I told you that 1 major toothache involves the same amount of pain as five minor toothaches. But where did I say this? In any case, it would certainly help if you described your actual step-by-step reasoning from the supposition to C), since, apparently, I got it wrong.

I'm not merely restating the fact that Reason S ignores this difference. I am restating it as part of a further argument against your sense of "involves more pain than" or "involves the same amount of pain as". The argument in essence goes:

P1) Your sense relies on Reason S.

P2) Reason S does not care about pain-qua-how-it-feels (because it ignores the above stated difference).

P3) We take pain to matter because of how it feels.

C) Therefore, your sense is not in harmony with why pain matters (or at least why we take pain to matter).

I had to restate that Reason S ignores this difference as my support for P2, so it was not merely stated.

Both accusations are problematic. The first accusation is not entirely true. I don't care about how many people are in pain only in situations where I have to choose between helping, say, Amy and Susie or just Bob (i.e. situations where a person in the minority party does not overlap with anyone in the majority party). However, I would care about how many people are in pain in situations where I have to choose between helping, say, Amy and Susie or just

You think aggregating welfare between individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for 1 or each of a million people.

You conclude that these values recommend giving to charities that directly address the sources of most intense individual suffering, and that between them, one should not choose by cost-effectiveness, but randomly. One should not give to say GiveDirectly, which does not directly tackle the most intense suffering.

This conclusion seems correct only for clear-cut textbook examp...

0
Jeffhe
Hi Jan,

Thanks a lot for your response. I wonder if it is too big of a concession to make to say that "This conclusion seems correct only for clear-cut textbook examples." My argument against effective altruism was an attempt to show that it is theoretically/fundamentally flawed, even if (per your objection) I can't criticize the actual pattern of donation it is responsible for (e.g. pushing a lot of funding to GiveDirectly), although I will offer a response to your objection.

I remember listening to a podcast featuring professor MacAskill (one of the presumed founders of EA) where he was recounting a debate he had with someone (can't remember who). That someone raised (if I remember correctly) the following objection: If there was a burning house and you could either save the boy trapped inside or a painting hanging on the wall, which you could sell and use that money to save 100 kids in a third world country from a similar pain that the boy would face, you should obviously save the boy. But EA says to save the painting. Therefore EA is false. Professor MacAskill's response (if I remember correctly) was to bite the bullet and say that while it might be hard to stomach, that is really what we should do. If professor MacAskill's view represents EA's position, then I assume that if you concede that we should flip a coin in such a case, then there is an issue.

Regarding whether my argument recommends anything in the real world, I think it does. First, just to be clear, since we cannot give each person a chance of being helped that is proportionate to what they have to suffer, I said that I personally would choose to use my money to help anyone among the class of people who stands to suffer the most (see Section F.). Just to be clear, I wouldn't try to give each of the people among this class an equal chance because that is equally impossible. I would simply choose to help those whom I come across or know about, I guess. Note that I didn't explain why I would choose

I think you are conflating EA with utilitarianism/consequentialism. To be fair this is totally understandable since many EAs are consequentialists and consequentialist EAs may not be careful to make or even see such a distinction, but as someone who is closest to being a virtue ethicist (although my actual metaethics are way more complicated) I see EA as being mainly about intentionally focusing on effectiveness rather than just doing what feels good in our altruistic endeavors.

0
Jeffhe
Hey gworley3, Here's the comment I made about the difference between effective-altruism and utilitarianism (if you're interested): http://effective-altruism.com/ea/1ll/cognitive_and_emotional_barriers_to_eas_growth/dij
0
Jeffhe
Hi gworley3, Thanks for your comment. I don't think I'm conflating EA with utilitarianism. In fact, I made a comment a few days ago specifically pointing out how they might differ, under the post "Cognitive and emotional barriers to EA's growth". If you still think I'm conflating things, please point out what specifically, so I can address it. Thanks.
0
kbog
That EA and utilitarianism are different is precisely the point being made here: you have given an argument against utilitarianism, but EA is not utilitarianism, so the argument wouldn't demonstrate that EA is flawed.
0
Jeffhe
Only my response to Objection 1 is more or less directed to the utilitarian. My response to Objection 2 is meant to defend against other justifications for saving the greater number, such as leximin or cancelling strategies. In any case, I think most EAs (even the non-utilitarians) will appeal to utilitarian reasoning to justify saving the greater number, so addressing utilitarian reasoning is important.
0
kbog
It's not about responses to objections, it's about the thesis itself.

If you think PETA is the best bet for reducing suffering, you might want to check out other farm animal advocacy organizations at Animal Charity Evaluators' website. The Organization to Prevent Intense Suffering (OPIS) is an EA-aligned organization which has a more explicit focus on advancing projects which directly mitigate abject and concrete suffering. You might also be interested in their work.

1
Jeffhe
Wow, their name says it all. I didn't know about OPIS - I'll definitely check them out. Will potentially be very useful for my own charitable activities. Also, thanks for the link to Animal Charity Evaluators - didn't know about them either.

Although, given that the numbers don't matter to me in trade-off cases, I don't know if it will make a difference. It would if it showed me that donating to another animal charity would help the EXACT same animals I'd help via donating to PETA AND then some (i.e. even more animals). If donating to another animal charity helped different animals (e.g. a different cow than a cow I would have helped by donating to PETA), then even if I can help more animals by donating to this other charity, I would have no overwhelming reason to, because the cow who I would thereby be neglecting would end up suffering no less than any one of the other animals otherwise would, and, as I argued in response to Objection 2, who suffers matters.

Thanks for both suggestions though, Evan! Note, I have since removed PETA from my post because the point of my post was just to question EA and not to suggest charities to donate to. Thanks for making me realize this.

I think Brian Tomasik has addressed this briefly and Nick Bostrom at greater length.

What I’ve found most convincing (quoting myself in response to a case that hinged on the similarity of the two or many experiences):

If you don’t care much more about several very similar beings suffering than one of them suffering, then you would also not care more about them, when they’re your own person moments, right? You’re extremely similar to your version a month or several months ago, probably more similar than you are to any other person in the whole world. So if

...
Jeffhe
Hi Telofy, Thanks for your comment, and quoting oneself is always cool (haha). In response, if I understand you correctly, you are saying that if I don't prefer saving many similar, though distinct, people each from a certain pain over saving another person from the same pain, then I have no reason to prefer saving myself from many of those pains rather than from just one of them. I certainly wouldn't agree with that. Were I to suffer many pains, I (just me) would suffer all of them, in such a way that there is a very clear sense in which they, cumulatively, are worse to endure than just one of them. Thus, I find intra-personal aggregation of pains intelligible. I mean, when an old man reminiscing about his past says to us, "The single worst pain I had was that one time when I got shot in the foot, but if you asked me whether I'd go through that again or all those damned headaches I had over my life, I would certainly ask for the bullet," we get it. Anyways, I think the clear sense I mentioned supports the intra-personal aggregation of pains, and if pains intra-personally aggregate, then more instances of the same pain will be worse than just one instance, and so I have reason to prefer saving myself from more of them. However, in the case of the many vs. one other (call him "C"), the pains are spread across distinct people rather than aggregated in one person, so they cannot in the same sense be worse than the pain that C goes through. And so even if I show no preference in this case, I still have reason to show preference in the former case.
Dawn Drescher
Okay, curious. What is to you a “clear experiential sense” is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people. It would be interesting if there’s some systematic correlation between cultural aspects and someone’s moral intuitions on this issue – say, more collectivist culture leading to more strongly discounted aggregation and more individualist culture leading to more linear aggregation… or something of the sort. The other person I know who has this intuition is from an Eastern European country, hence that hypothesis.
Jeffhe
Imagine you have 5 headaches, each 1 minute long, that occur just 10 seconds apart from each other. From imagining this, you will have an imagined sense of what it's like to go through those 5 headaches. And, of course, you can imagine yourself in the shoes of 5 different friends, each of whom, we can suppose, has a single 1-minute-long headache of the same kind as above. From imagining this, you will again have an imagined sense of what it's like to go through 5 headaches. If that's what you mean when you say that "the clear experiential sense is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people", then I agree. But when you imagine yourself in the shoes of those 5 friends, what is going on is that one subject-of-experience (i.e. you) takes on the independent what-it's-likes (i.e. experiences) associated with your 5 friends, and IN DOING SO, LINKS THOSE what-it's-likes – which in reality would be experientially independent of each other – TOGETHER IN YOU. So ultimately, when you imagine yourself in the shoes of your 5 friends, you are, in effect, imagining what it's like to go through 5 headaches. But in reality, there would be no such what-it's-like among your 5 friends. The only what-it's-like that would be present would be the what-it's-like-of-going-through-1-headache, which each of your friends would experience. No one would experience the what-it's-like of going through 5 headaches. But that is what is needed for it to be the case that 5 such headaches can be worse than a headache that is worse than any one of them. Please refer to my conversation with Michael_S for more info.
Dawn Drescher
Argh, sorry, I haven’t had time to read through the other conversation yet, but to clarify, my prior was the other one – not that there is something linking the experiences of the five people, but that there is very little – and nothing that seems very morally relevant – linking the experiences of the one person. Generally, people talk about continuity, intentions, and memories linking the person moments of a person, such that we think of them as the same one even though all the atoms of their bodies may’ve been exchanged for different ones. In your first reply to Michael, you indicate that the third one, memories, is important to you, but in themselves I don’t feel that they confer moral importance in this sense. What you mean, though, may be that five repeated headaches are more than five times as bad as one because of some sort of exhaustion or exasperation that sets in. I certainly feel that, in my case especially with itches, and I think I’ve read that some estimates of DALY disability weights also take that into account. But I model that as some sort of ability of a person to “bear” some suffering, which gets worn down over time by repeated suffering without sufficient recovery in between or by too extreme suffering. That leads to a threshold that makes suffering below and above it seem morally very different to me. (But I recognize several such thresholds in my moral intuitions, so I seem to be some sort of multilevel prioritarian.) So when I imagine what it is like to suffer headaches as bad as five people suffering one headache each, I imagine them far apart with plenty of time to recover, no regularity to them, etc. I’ve had more than five headaches in my life but no connection and nothing pathological, so I don’t even need to rely on my imagination. (Having five attacks of a frequently recurring migraine must be noticeably worse.)
Jeffhe
Hi Telofy, Thanks for this lucid reply. It has made me realize that it was a mistake to use the phrase "clear experiential sense" because that misleads people into thinking that I am referring to some singular experience (e.g. some feeling of exhaustion that sets in after the final headache). In light of this issue, I have written a "new" first reply to Michael_S to try to make my position clearer. I think you will find it helpful. Moreover, if you find any part of it unclear, please do let me know. What I'm about to say overlaps with some of the content in my "new" reply to Michael_S: You write that you don't see anything morally relevant linking the person moments of a single person. Are you concluding from this that there is not actually a single subject-of-experience who feels, say, 5 pains over time (even though we talk as if there is)? Or, are you concluding that even if there is actually just a single subject-of-experience who feels all 5 pains over time, it is morally no different from 5 subjects-of-experience who each feel 1 pain of the same sort? What matters to me at the end of the day is whether there is a single subject-of-experience who extends through time and thus is the particular subject who feels all 5 pains. If there is, then this subject experiences the what-it's-like of going through 5 pains (since, in fact, this subject has gone through 5 pains, whether he remembers going through them or not). Importantly, the what-it's-like-of-going-through-5-pains is just the collection of the past 5 singular pain episodes, not some singular/continuous experience like a feeling of exhaustion or some super intense pain from the synthesis of the intensity of the 5 past pains. It is this what-it's-like that can plausibly be worse than the what-it's-like of going through a major pain. Since there could only be this what-it's-like when there is a single subject who experiences all 5 pains, therefore 5 pains spread across 5 people cannot be worse than
Dawn Drescher
Hi Jeff! To just briefly answer your question, “Are you concluding from this that there is not actually a single subject-of-experience”: I don’t have an intuition for what a subject-of-experience is – if it is something defined along the lines of the three characteristics of continuous person moments from my previous message, then I feel that it is meaningful but not morally relevant, but if it is defined along the lines of some sort of person essentialism then I don’t believe it exists on Occam’s razor grounds. (For the same reason, I also think that reincarnation is metaphysically meaningless because I think there is no essence to a person or a person moment besides their physical body* until shown otherwise.) * This is imprecise but I hope it’s clear what I mean. People are also defined by their environment, culture, and whatnot.
Jeffhe
Hi Telofy, nice to hear from you again :) You say that you have no intuition for what a subject-of-experience is. So let me say two things that might make it more obvious: 1. Here is how I defined a subject-of-experience in my exchange with Michael_S: "A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience: it presumably has visual experiences and pain experiences and all sorts of other experiences. Or more technically, a subject-of-experience (or multiple) may be realized by a cow's physical system (i.e. brain). There would be a single subject-of-experience if all the experiences realized by the cow's physical system are felt by a single subject. Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject." I later enriched the definition a bit as follows: "A subject-of-experience is a thing that has, OR IS CAPABLE OF HAVING, experience(s). I add the phrase 'or is capable of having' this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I'd like to think that I am still around - i.e. that the particular subject-of-experience that I am is still around. However, it's also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night.

Hey Jeffhe - the position you put forward looks structurally really similar to elements of Scanlon's, and you discuss a dilemma that is often discussed in the context of his work (the lifeboat/the rocks example). It also seems like, given your reply to Objection 3, you might really like its approach (if you are not familiar with it already). Subsection 7 of this SEP article (https://plato.stanford.edu/entries/contractualism/) gives a good overview of the case that is tied to the one you discuss. The idea of the separateness of persons, and the idea that o... (read more)

Jeffhe
Hi Jonathan, Thanks for directing me to Scanlon's work. I am adequately familiar with his view on this topic, at least the one that he puts forward in What We Owe to Each Other. There, he tried to put forward an argument explaining why we should save the greater number in a choice situation like the one involving Bob, Amy and Susie that respected the separateness of persons, but his argument has been well refuted by people like Michael Otsuka (2000, 2006). Regarding your second point, what reason can you give for giving each person less than the maximum equal chance possible (e.g. 50%), aside from wanting to sidestep a conclusion that is worrying to you? Suppose I choose to give Bob, Amy and Susie each a 1% chance of being saved, instead of each a 50% chance, and I say to them, "Hey, none of you have anything to complain about because I'm technically giving each of you an equal chance, even though most likely, none of you will be saved." Each of them can reasonably protest that doing so does not treat them with the appropriate level of concern. Say then, I give each of them a 1/3 chance of being saved (as you propose we do) and again I say to them, "Hey, none of you have anything to complain about because I'm technically giving each of you an equal chance." Don't you think they can reasonably protest in the same way until I give them each the maximum equal chance (i.e. 50%)? Regarding your third point, I don't see how I can divide up the groups differently. They come to me as given. For example, I can't somehow switch Bob and Amy's places such that the choice situation is one of either helping Amy or helping Bob and Susie. How would I do that?

The following is roughly how I think about it:

If I am in a situation where I need help, then for purely selfish reasons, I would prefer people-who-are-capable-of-helping-me to act in the way that has the highest probability of helping me, because I obviously want my probability of getting help to be as high as possible.

Let's suppose that, as in your original example, I am one of three people who need help, and someone is thinking about whether to act in a way that helps one person, or to act in a way that helps two people. Well, if they act in a way th... (read more)

Jeffhe
Hi Kaj, Thanks for your response. Please refer to my conversation with brianwang712. It addresses this objection!

I used to think that a large benefit to a single person was always more important than a smaller benefit to multiple people (no matter how many people experienced the smaller benefit). That's why I wrote this post asking others for counterarguments. After reading the comments on that post (one of which linked to this article), I became persuaded that I was wrong.

Here's an additional counterargument. Let's say that I have two choices:

A. I can save 1 person from a disease that decreases her quality of life by 95%; or

B. I can save 5 people from a disease tha... (read more)

Jeffhe
Hi RandomEA, First of all, awesome name! And secondly, thanks for your response. My view is that we should give each person a chance of being helped that is proportionate to what they each have to suffer. It is irrelevant to me how many people there are who stand to suffer the lesser pain. So, for example, in the first choice situation you described, my intuition is to give the single person slightly over a 50% chance of being saved and the others slightly under a 50% chance. This is because the single person would suffer slightly worse than any one of the others, so the single person gets a slightly higher chance. It is irrelevant to me how many people have 90% to lose in quality of life, whether it be 5 or 5 billion. So if 760 billion people have 10% to lose where the single person has 90% to lose, my intuition is to give the single person roughly a 90% chance of being saved and the other 760 billion a 10% chance. In my essay, I in effect argued that everyone would have this intuition if they properly appreciated the following two facts: 1. Were the 760 billion people to suffer, none of them would suffer anywhere near the amount the single person would. Conversely, were the single person to suffer, he/she would suffer so much more than any one of the 760 billion. 2. Which individual suffers matters, because it is the particular individual who suffers that bears all the suffering. I assume that we should accept the intuitions that we have when we keep all the relevant facts at the forefront of our mind (i.e. when we properly appreciate them). I believe the intuitions I mentioned above (i.e. my intuitions) are the ones people would have when they do this. Regarding your second point, I have to think a little more about it!
RandomEA
Let's say that you have $100,000,000,000,000. For every $1,000,000,000,000 you spend on buying medicine A, the person in scenario A (from my previous comment) will have an additional 1% chance of being cured of disease A. For every $200,000,000,000 you spend on buying medicine B, a person in scenario B (from my previous comment) will have an additional 1% chance of being cured of disease B. For every $40,000,000,000 you spend on buying medicine C, a person in scenario C (from my previous comment) will have an additional 1% chance of being cured of disease C. ... For every $1.31 you spend on buying medicine R, a person in scenario R (from my previous comment) will have an additional 1% chance of being cured of disease R. Now consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 5 people with disease B. Based on your response to my comment, it sounds like you would spend $51,355,000,000,000 on the person with disease A (giving her a 51.36% chance of survival) and $9,729,000,000,000 on each person with disease B (giving each of them a 48.64% chance of survival). Is that correct? Next consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 762,939,453,125 people with disease R. Based on your response to my comment, it sounds like you would spend $90,476,000,000,000 on the person with disease A (giving her a 90.48% chance of surviving) and $12.48 on each person with disease R (giving each of them a 9.53% chance of surviving). Is that correct?
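For readers who want to check the arithmetic, here is a minimal sketch in Python (my own illustration, not part of the original thread). It assumes Jeffhe's stated rule, that the single person's chance and each other person's chance stand in proportion to what each stands to suffer (a 95% vs. a 90% quality-of-life loss in the disease A/B scenario), together with RandomEA's prices per percentage point of cure-chance:

```python
# Sketch of RandomEA's disease A/B scenario under Jeffhe's proportional-chance rule.
# Assumed inputs: the A-patient stands to lose 95% quality of life, each B-patient 90%;
# $1T buys the A-patient +1% cure chance, $200B buys a B-patient +1%.

def proportional_chances(loss_single, loss_each_other):
    # Chances in proportion to what each stands to suffer, summing to 1.
    total = loss_single + loss_each_other
    return loss_single / total, loss_each_other / total

chance_a, chance_b = proportional_chances(95, 90)  # ~0.5135 and ~0.4865
spend_a = chance_a * 100 * 1e12         # ~$51.35T on the A-patient
spend_each_b = chance_b * 100 * 2e11    # ~$9.73T on each of the 5 B-patients
print(spend_a + 5 * spend_each_b)       # ~1e14: the full $100T budget
```

The same rule reproduces the second scenario: with losses of 95 vs. 10, the A-patient's chance is 95/105 ≈ 90.48% and each R-patient's is 10/105 ≈ 9.52%, matching the figures above up to rounding of the dollar amounts.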
Jeffhe
The situations I focus on in my essay are trade-off choice situations, meaning that I can only choose one party to help, and not all parties to various degrees. Thus, if you have an objection to my argument, it is important that we focus on such kinds of situations. Thanks!
RandomEA
Yes but the situations that EAs face are much more analogous to my second set of hypotheticals. So if you want your argument to serve as an objection to EA, I think you have to explain how it applies to those sorts of cases.
Jeffhe
Not true. Trade-off situations are literally everywhere. Whenever you donate to some charity, it is at the expense of another charity working in a different area, and thus at the expense of the people whom the other charity would have helped. Even with malaria, if you donate to a certain charity, you are helping the people whom that charity helps at the expense of other people whom another charity fighting malaria helps. That's the reality. And if you're thinking, "Well, can't I donate some to each malaria-fighting charity?", the answer is yes, but whatever money you donate to the other malaria-fighting charity comes at the expense of helping the people whom the original malaria-fighting charity would have been able to help had it got all of your donation and not just part of it. The trade-off choice situation would be between either helping some of the people residing in the area of the other malaria-fighting charity or helping some additional people residing in the area of the original malaria-fighting charity. You cannot help all. In principle, as long as one doesn't have enough money to help everyone, one will always find oneself in a trade-off choice situation when deciding where to donate.
RandomEA
I think the second set of hypotheticals does involve trade-offs. When I say that a person has an additional 1% chance of being cured, I mean that they have an additional 1% chance of receiving a medicine that will definitely cure them. If you spend more money on medicines to distribute among people with disease Q (thus increasing the chance that any given person with disease Q will be cured), you will have less money to spend on medicines to distribute among people with disease R (thus decreasing the chance that any given person with disease R will be cured). The reason I think that the second set of hypotheticals is more analogous to the situations EAs face is that there are typically already many funders in the space, meaning that potential beneficiaries often have some chance of being helped even absent your donation. It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped.
Jeffhe
My apologies. After re-reading your second set of hypotheticals, I think I can answer your questions. In the original choice situation contained in my essay, the device I used to capture the amount of chance each group would be given of being helped was independent of the donation amount. For example, in the choice situation between Bob, Amy, and Susie, the donation was $10 and the device used to give each a 50% chance of being saved from a painful disease was a coin. However, it seems like in your hypotheticals, the donation is used as the device too. That confused me at first. But yeah, at the end of the day, I would give person A roughly a 90% chance of being saved from his/her suffering and roughly a 10% chance to each of the billions of others, regardless of what the dollar breakdown would look like. So, if I understand your hypotheticals correctly, then my answer would be yes to both your original questions. I don't, however, see the point of using the donation to also act as the device. It seems to unnecessarily overcomplicate the choice situations. If your goal is to create a choice situation in which I have to give a vast amount of money to give person A around a 90% chance of surviving, and the objection you're thinking of raising is that it is absurd to give that much to give a single person around a 90% chance of being helped, then my response is: 1) who suffers matters, and 2) what person A stands to suffer is far worse than what any one of the people from the competing group stands to suffer. I think if we really appreciate those two facts, our intuition is to give person A 90% and each of the others 10%, regardless of the $ breakdown that involves. Thanks. Just noticed you expanded your comment. You write, "It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped." This is not true. There will always be a person in line who isn't helped, but who would have been helped had you funded
RandomEA
I was simply noting the difference between our two examples. In your example, Bob has no chance of receiving help if you choose the other person. In the real world, me choosing one charity over another will not cause a specific person to have no ex-ante chance of being helped. Instead, it means that each person in the potential beneficiary population has a lower chance of being helped. I wanted my situation to be more analogous to the real world because I want to see how your principle works in practice. It's the same reason I introduced different prices into the example. Also, my comment was expanded very shortly after it was originally posted. It's possible that you saw the original one and while you were writing your response to it I posted my edit.
Jeffhe
Hey RandomEA, Sorry for the late reply. Well, say I'm choosing between the World Food Programme (WFP) and some other charity, and I have $30 to donate. According to WFP, $30 can feed a person for a month (if I remember correctly). If I donate to the other charity, then WFP in its next operation will have $30 less to spend on food, meaning someone who otherwise would have been helped won't be receiving help. Who that person is, we don't know. All we know is that he is the person who was next in line, the first to be turned away. Now, you disagree with this. Specifically, you disagree that it could be said of any SPECIFIC person that, if I don't donate to WFP, he won't end up receiving help that he otherwise would have. And this is because: 1) HE - that specific person - still had a chance of being helped by WFP even if I didn't donate the $30. For example, he might have gotten in line sooner than I'm supposing he has. And you will say that this holds true for ANY specific person. Therefore, the phrase "he won't end up receiving help" is not guaranteed. 2) Moreover, even if I do donate the $30 to WFP, there isn't any guarantee that he would be helped. For example, HE might have gotten in line way too late for an additional $30 to make a difference for him. And you will say that this holds true for ANY specific person. Therefore, the phrase "that he otherwise would have" is also not guaranteed. In the end, you will say, all that can be true of any SPECIFIC person is that my donation of $30 would raise THAT person's chance of being helped. Therefore, in the real world, you will say, there's rarely a trade-off choice situation between specific people. I am tempted to agree with that, but two points: 1) There still seems to be a trade-off choice situation between specific groups of people: i.e. the group helped by WFP and the group helped by the other charity. 2) I think, at least in refugee camps, there is already a list of

I agree that aggregating suffering of different people is problematic. By necessity, it happens on a rather abstract level, divorced from the experiential. I would say that can lead to a certain impersonal approach which ignores the immediate reality of the human condition. Certainly we should be aware of how we truly experience the world.

However, I think here we transcend ethics. We can't hope to resolve deep issues of suffering within ethics, because we are somewhat egocentric beings by nature. We see only through our eyes and feel our body. I don't se... (read more)

Jeffhe
Hi bejaq, Thanks for your thoughtful comment. I think your first paragraph captures well why I think who suffers matters. The connection between suffering and who suffers it is too strong for the former to matter and the latter not to. Necessarily, pain is pain for someone, and ONLY for that someone. So it seems odd for pain to matter, yet for it not to matter who suffers it. I would also certainly agree that there are pragmatic considerations that push us towards helping the larger group outright, rather than giving the smaller group a chance.

(Posted as a top-level comment as I had some general things to say; this was originally a response here)

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be 'Effective Altruism is making an intellectual mistake' whereas you just actually seem to have a different se... (read more)

kbog
Little disagreement in philosophy comes down to a matter of bare differences in moral intuition. Sometimes people are just confused.
Jeffhe
Hey Alex, thanks for your comment! I didn't know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn't structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I'm wrong or he realizes he's wrong. If I'm wrong, I hope to realize it asap. Also, I agree with kbog. I think it's much likelier that one of us is just confused. Either kbog is right that there is an intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person or he is not. After figuring that out, there is the question of which sense of "involves more pain than" is more morally important: is it the "is experientially worse than" sense or kbog's sense? Perhaps that comes down to intuitions.
Alex_Barry
Thanks for your reply - I'm extremely confused if you think there is no "intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person", since (as has been discussed in these comments) if you view/define total pain as being measured by the intensity-weighted number of experiences, this gives a clear metric that matches consequentialist usage. I had assumed you were arguing at the 'which is morally important' level, which I think might well come down to intuitions. I hope you manage to work it out with kbog!
Jeffhe
Hey Alex, Thanks for your reply. I can understand why you'd be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of "more pain". I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of "more pain", and then presenting an argument for why my sense of "more pain" is the one that really matters. I'd be interested to know what you think.
Alex_Barry
Thanks for getting back to me. I've read your reply to kbog, but I don't find your argument especially different from those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again, I see why there is a distinction one could care about, but I don't find it personally compelling. (Indeed, I think many people here would explicitly embrace the assumption in your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).) A couple of brief points in favour of the classical approach:
* It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)
* As discussed in other comments, it also has other pleasing properties, such as the veil-of-ignorance argument.
One additional thing to note is that dropping the comparability of 'non-purely experientially determined' and 'purely experientially determined' experiences (henceforth 'Comparability') does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other. For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought t
Jeffhe
Hey Alex, Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response: When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad"? I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad. Now, one basic premise that kbog and I have been working with is this: if two people suffering involves more pain than one person suffering, then two people suffering is morally worse (i.e. twice as morally bad) than one person suffering. However, on my preferred sense of "more pain", two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not. Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say)? I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy's suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that “One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would b
Alex_Barry
The argument is that if:
* The amount of 'total pain' is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing)
* There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured, if you liked, for a less extreme but clearly applicable case)
* In this case, the amount of 'total pain' is always at least that very large number, such that none of your actions can change it at all.
* Thus (and you would disagree with this implication due to your adoption of the Pareto principle), since the level of 'total pain' is the morally important thing, all of your possible actions are morally equivalent.
As I mention, I think you escape this basic formulation of the problem by your adoption of the Pareto principle, but a more complicated version causes the same issue. This is essentially just applying the non-identity problem to the example above. (Weirdly enough, I think the best explanation I've seen of the non-identity problem is the second half of the 'The future' section of Derek Parfit's Wikipedia page.) The argument goes something like:
* D1: We adopt that 'total pain' is the maximal pain experienced by any person for whom we can affect how much pain they experience (an attempt to incorporate the Pareto principle into the definition for simplicity's sake).
* A1: At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so most of the future has no one in it, that wiping out is likely to involve extreme pain for some.)
* A2: Due to the chaotic nature of the world, and the strong dependence of personal identity on birth timings (if the circumstances of one's conception change even very slightly, then your identity will almost certainly be completely different), any actions in the world now will within a few generation
Jeffhe
Thanks for the exposition. I see the argument now. You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it. I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to every possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals. Your argument would continue: any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too. My response: JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then each possible action more or less inevitably results in a different person suffering maximal pain. Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot). I think there is no more absurdity in assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don't find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either.
Alex_Barry
I was trying to keep the discussion of 'which kind of pain is morally relevant' and that of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further. Given that you were initially arguing (with kbog etc.) for this definition of total pain, independent of any other identity considerations, this seems very relevant to that discussion. But this seems extremely far removed from any day-to-day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them. The issue is this also applies to the case of deciding whether to set the island on fire at all.
Jeffhe
I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn't take any action, and that's just absurd. Therefore, my way of determining total pain is problematic. Here "a resulting state of affairs" is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time. Well, if who suffered didn't matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:
Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move on to Step 2.
Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move on to Step 3.
And so forth...
According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morall
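To make the procedure concrete, here is a small Python sketch (my own illustration, not from the comment) of the leximin comparison just described, with welfare numbers made up for the example:

```python
# Leximin: sort each outcome's welfare levels from worst-off upward and
# compare position by position; the first difference decides.
# Assumes both outcomes contain equally many people.

def leximin_better(outcome_a, outcome_b):
    # Welfare is higher-is-better; each argument is a list of welfare levels.
    for a, b in zip(sorted(outcome_a), sorted(outcome_b)):
        if a != b:
            return a > b  # Step 1, Step 2, ... collapse into this loop
    return False  # identical profiles: neither outcome is better

# The worst-off people tie at 1, so the second-worst-off decide: 5 beats 4.
print(leximin_better([5, 1, 9], [1, 4, 9]))  # True
```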
Alex_Barry
Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!) I think thinking in terms of 'total pain' is not normally how this is approached; instead, one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more). I bring this up since you are approaching this from a different angle than the usual, which makes people's standard lines of reasoning seem more complex. I'll discuss this in a separate comment since I think it is one of the strongest arguments against your position. I don't know much about the veil of ignorance, so I am happy to grant you that it does not support total utilitarianism. Then I am really not sure at all what you mean by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.
Jeffhe
So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:
1. Assign a moral value to each person's experiences based on their overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such headaches is about as experientially bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.
2. We then add up the moral values assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.
This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's lives or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy's and Susie's lives because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's deaths being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance. In any case, if we assign a moral value to each per
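As a toy sketch of the two-step procedure just described (my own illustration; the per-person scoring function is a placeholder assumption, not anything from the thread):

```python
# Step 1: score each person's experiences as a whole.
# Step 2: sum the per-person scores across people into a global figure.

def person_value(pain_intensities):
    # Placeholder scoring of one person's overall what-it's-like.
    # Any function could go here, e.g. one on which 5 minor headaches
    # score the same as 1 major headache.
    return -sum(pain_intensities)

def global_value(people):
    return sum(person_value(p) for p in people)

print(global_value([[2, 2, 2, 2, 2]]))  # one person with five minor headaches: -10
print(global_value([[2]] * 5))          # five people with one headache each: -10
```

On this (linear) placeholder the two cases come out equal, which is exactly where the disagreement lies: the procedure's verdict hangs entirely on how Step 1 scores a single person's bundle of experiences.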
Alex_Barry
On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most). (Again, my preferred usage of 'morally worse/better' is basically defined so as to mean one always 'should' pick the 'morally best' action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point...) How much would you be willing to trade off helping people versus the help being distributed fairly? E.g. you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most. In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possible you are not willing to trade off any amount of 'actually helping people' in favour of it, but then it seems strange that you argue for it so forcefully. As a separate point, this form of reasoning seems rather incompatible with your claims about 'total pain' being morally important, and also determined solely by whoever is experiencing the most pain. Thus, if you follow your approach and give some chance of helping people not experiencing the most pain, then in the case when you do help them, the 'total pain' does not change at all! For example:
* Suppose Alice is experiencing 10 units of suffering (by some common metric)
* 10n people (call them group B) are experiencing 1 unit of suffering each
* We can help exactly one person, and reduce their suffering to 0
In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case where we help someone from group B, the level of 'total pain' remains at 10 as
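A small sketch (my own, following the example above) of the proportional-chance rule, reproducing the 1/(n+1) and 1/(10+10n) figures:

```python
# Each person's chance of being helped is their suffering divided by the
# total suffering across everyone (Alice at 10 units, 10n people at 1 unit each).

def chances(sufferings):
    total = sum(sufferings)
    return [s / total for s in sufferings]

n = 3
probs = chances([10] + [1] * (10 * n))
print(probs[0])  # 0.25  == 1/(n+1)
print(probs[1])  # 0.025 == 1/(10+10n)
```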
Jeffhe
Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I'm back :P This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers – where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person’s chance ought to be in proportion to what he/she has to suffer because suffering also matters. Now you’re asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering, with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer? Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is. One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do – as you suggest – seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in itself, but if what the fa
Alex_Barry
Well, most EAs, probably not most people :P But yes, I think most EAs apply this 'merchandise' approach weighted by conscious experience. In regards to your discussion of moral theories and side constraints: I know there are a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right, then you should make this clear from the start (and ideally bake it into what you mean by 'morally worse'). Basically, I think sentences like the one quoted are sufficiently far from standard usage (at least in EA circles) that you should flag that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor", then which actions you say are right will depend solely on how they affect X. Hence if you say 'what is morally relevant is the maximal pain being experienced by someone', then I expect all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone. Obviously language is flexible, but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles.) I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I'll make a separate comment to discuss it.
Jeffhe
FYI, I have since reworded this as "So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:" I think it is a more precise formulation. In any case, we're on the same page. The way I phrased Objection 1 was as follows: "One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie." Notice that this objection in argument form is as follows:
P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.
P2) We ought to prevent the morally worst case.
C) Therefore, we should help Amy and Susie over Bob.
My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: if two people suffering involves more pain than one person suffering, then two people suffering is morally worse (i.e. twice as morally bad) than one person suffering. Given this premise, I've been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus that P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus that P1) is true. Of course, both of us are right on our respective preferred senses of "involves more pain than". So I recently started arguing that my sense is the sense that really matters. Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But other moral theorists would not, because not all the things that they take to matter (i.e. to be morally relevant, to have moral value, etc.) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matt
Alex_Barry
Some of your quotes are broken in your comment; you need a > for each paragraph (and two >s for double quotes etc.). I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched! I actually think most (maybe all?) moral theories can be baked into the goodness/badness of states of affairs. If you want to incorporate a side constraint, you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible. In any case, as I have given you plenty of other comment threads to think about, I am happy to leave this one here - my point was just a call for clarity.
Jeffhe
I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things. By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched? And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?
Alex_Barry
Yes, "switched" was a bit strong; I meant that by default people will assume a standard usage, so if you only reveal later that you are actually using a non-standard definition, people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were. To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things).
Jeffhe
I see the problem. I will fix this. Thanks.

But that seems counter to what reason and empathy would lead me to do.

What? It seems to be exactly what reason and empathy would lead one to do. Reason and empathy don't tell you to arbitrarily save fewer people. At best, you could argue that empathy pulls you in neither direction, while conceding that it's still more reasonable to save more rather than fewer. You've not written an argument, just a bald assertion. You're dressing it up to look like a philosophical argument, but there is none.

P1. The degree of suffering in the case of Amy and Susie wou

... (read more)
Jeffhe
1) "Reason and empathy don't tell you to arbitrarily save fewer people." I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved.
2) "This doesn't answer the objection." That premise (as indicated by "P1."), plus my support for that premise, was not meant to answer an objection. It was just the first premise of an argument that was meant to answer Objection 1.
3) "There is more suffering when it happens to two people, and more suffering is morally worse." Yes, there are more instances of suffering. But as I have tried to argue, x instances of suffering spread across x people is just as morally bad as 1 instance of the same kind of suffering had by one other person. If by 'more suffering' you meant worse suffering in an experiential sense, then please see my first response to Michael.
4) "The fact that the level of suffering in each person is the same doesn't imply that they are morally equivalent outcomes." I didn't say it was implied. If I thought it was implied, then my response to Objection 1 would have been much shorter.
5) "This is a textbook case of begging the question." I don't see how my assumption is anywhere near what I want to conclude. It seems to me like an assumption that is plausibly shared by all. That's why I assumed it in the first place: to show that my conclusion can be arrived at from shared assumptions.
6) "No one you're arguing with will grant that we should act differently for cases 2 and 3." I would hesitate to use "no one". If this were true, then I would have expected more comments along those lines. More importantly, I wonder why one wouldn't grant that we should act differently in choice situations 2 and 3. If the reason boils down to the thought that 5 minor pains are experientially worse than 1 major pain, regardless of whether the 5 minor pains are all felt by one person or spread across 5 different people, then I would point you to my conversation w
kbog
But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending. But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken. But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount. Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not. OK, well: it's not. Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur. brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out. Look, your comments towards him are very long and convoluted. I'm not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with "updates" alongside copies of your original comments, I find it almost painful to look through. I don't see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards. The time and knowledge of people who help the less fortunate is particularly valuable, so one should be wi
Jeffhe
1) "But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending." To arbitrarily save fewer people is to save them on a whim. I am not suggesting that we should save them on a whim. I am suggesting that we should give each person an equal chance of being saved. They are completely different ideas. 2) "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken." Please show me where I supposed that 5 minor headaches are MORALLY worse when they happen to one person than when they happen to multiple people. In both choice situations 2 and 3, I provided REASONS for saying A) why 5 minor headaches all had by one person is morally worse than 1 major headache had by one person, and B) why 1 major headache had by one person is morally worse than 5 minor headaches spread across 5 people. From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don't say that I supposed this. I provided reasons. You can reject those reasons, but that is a different story. If you meant that I supposed that 5 minor headaches are EXPERENTIALLY worse when they happen to one person than when they happen to multiple people, sure, it can be inferred from what I wrote that I was supposing this. But importantly, to make this assumption is not a stretch as it seems (at least to me) like an assumption plausibly shared by many. But it turns out that Michael_S disagreed, at which time I was glad to defend this assumption. More importantly, even if
kbog
You simply assert that we would rather relieve Emma's major headache than the five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the five minor headaches have more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here.

My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yes, there's an argument, but its premise is both contentious and undefended. I'm not just speaking for utilitarians; I'm speaking for anyone who doesn't buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.

The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances; it's an analysis of what we would expect of them as the rational thing to do. So the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and that it would be rational of people not to make that choice, whereas the preference utilitarian simply calibrates the utility scale to people's preferences, so that there is no dissonance between what people would select and what utilitarianism says.
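(A minimal sketch of this calibration point, with illustrative numbers that are my own assumptions rather than figures from the thread: set u(no headache) = 0 and u(minor headache) = -1, and let the stipulation that one would rather endure a single major headache than five minor ones fix u(major headache) > -5. Behind the veil of ignorance, with an equal 1/6 chance of occupying each of the six positions in case 3:)

\[
\begin{aligned}
EU(\text{cure Emma}) &= \tfrac{1}{6}\cdot 0 + \tfrac{5}{6}\cdot(-1) = -\tfrac{5}{6},\\
EU(\text{cure the five}) &= \tfrac{1}{6}\cdot u(\text{major}) + \tfrac{5}{6}\cdot 0 > -\tfrac{5}{6}.
\end{aligned}
\]

On that calibration, the rational choice behind the veil is to cure the five minor headaches, which matches what the preference utilitarian says, with no dissonance between choice and theory.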
Jeffhe
1) "You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here." I DO NOT simply assert this. In case 3, I wrote, "Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being saved because a major headache is morally worse than 5 minor headaches spread across 5 persons and it's morally worse BECAUSE a major headache hurts more (in some non-arbitrary sense) than the 5 minor headaches spread across 5 people. Here, the non-arbitrary sense is straightforward: Emma would be hurting more than any one of the 5 others who would each experience only 1 minor headache." (I capped 'because' for emphasis here) You would not buy that reason I gave (because you believe 5 minor headaches, spread across 5 people, is experientially worse than a major headache), but that is a different story. If you were more charitable and patient while reading my post, thinking about who my audience is (many of whom aren't utilitarians and don't buy into interpersonal aggregation of pains) etc, I doubt you would be leveling all the accusations you have against me. It wastes both your time and my time to have to deal with them. 2) "My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yeah there's an argument, but its premise is both contentious and undefended." I was just using your words. You said "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people." As I said, I assumed a premise that I thought the vast majority of my audience would agree
kbog
But if anyone did accept that premise, then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches as a separate counterargument, you could have just said all the same things about Amy's, Susie's and Bob's diseases in the first place, making your claim that Amy's and Susie's diseases are not experientially worse than Bob's disease, and so on.

PU says that we should assign moral value on the basis of people's preferences. So if someone thinks that being tortured is really, really, really bad, then we say that it is morally really, really, really bad. We give things the same weight that people themselves do.

If you say that someone is being risk-averse, that means (iff you're using the term correctly) that they're putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the von Neumann-Morgenstern utility theorem, which (one would argue, or assert) means that they are being irrational.

Yes to both.
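(For readers unfamiliar with the terminology, the standard fact being invoked here is that the von Neumann-Morgenstern theorem represents any preferences satisfying its axioms as expected-utility comparisons over lotteries:)

\[
L_1 \succsim L_2 \iff \mathbb{E}[u(L_1)] \ge \mathbb{E}[u(L_2)],
\]

so an agent who picks the lower-expected-utility option in order to avoid variance in u is, by definition, violating at least one axiom. Risk aversion over concrete outcomes such as money is compatible with the theorem via a concave u; the kind that conflicts with it is risk aversion over utility itself, which is how the term is being used above.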
Jeffhe
1) "But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob's diseases in the first place, making your claim that Amy and Susie's diseases are not experientially worse than Bob's disease and so on." The reason why I discussed those three cases was to answer the basic question: what makes one state of affairs morally worse than another. Indeed, given my broad audience, some who have no philosophy background, I wanted to start from the ground up. From that discussion, I gathered two principles that I used to support premise 2 of my argument against Objection 1. I say "gathered" and not "deduced" because you actually don't disagree with those two principles, even though you disagree with an assumption I made in one of the cases (i.e. case 3). What your disagreement with that assumption indicates is a disagreement with premise 1 of my argument against Objection 1. P1. read: "The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1)." You disagree because you think Amy's and Susie's pains would together be experientially worse than Bob's pain. All this is to say that I don't think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another. However, it seems like I really should
kbog
But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you had started from the ground up, then it would have contained something that carried force with utilitarians, for instance. If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) on which someone would agree with you on Amy/Susie/Bob but disagree about the headaches.

How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured? I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet. Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don't see what could possibly lead one to prefer it.

And merely having a "chance" of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don't sit on the mere fact of the chance and covet it as though it were something to value on its own.
Jeffhe
1) "But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance. If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches." Arguing for what factors are morally relevant in determining whether one case is morally worse than another is preliminary to arguing that some specific case (i.e. Amy and Susie suffering) is morally just as bad as another specific case (i.e. Bob suffering). My 3 cases were only meant to do the former. From the 3 cases, I concluded: 1. That the amount of pain is a morally relevant factor in determining whether one case is morally worse than another. 2. That the number of instances of pain is a morally relevant factor only to the extent that they affect the amount of pain at issue. (i.e. the number of instances of pain is not morally relevant in itself). I take that to be preliminary work. Where I really dropped the ball was in my lackluster argument for P1 (and, likewise, for my assumption in case 3). No utilitarian would have found it convincing, and thus I would not have succeeded in convincing them that the outcome in which Amy and Susie both suffer is morally just as bad as the outcome in which only Bob suffers, even if they agreed with 1. and 2., which they do. Anyways, to the extent that you think my argument for P1 sucked to the point where it was like I was begging the question against the utilitarian, I'm happy to concede this. I have since reworked my response to Objection 1 as a result, thanks to you. 2) "How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?
kbog
Your scenario didn't say that probabilistic strategies were a possible response, but suppose that they are. Then it's true that if I choose a 100% strategy, the other person has a 0% chance of being saved, whereas if I choose a 99% strategy, the other person has a 1% chance of being saved. But you've given no reason to think that this would be any better. It is bad that one person has a 1% greater chance of torture, but it's good that the other person has a 1% lower chance of torture. As long as agents simply have a preference to avoid torture, and are following the axioms of utility theory (completeness, transitivity, substitutability, decomposability, monotonicity, and continuity), going from 0% to 1% is exactly as good as going from 99% to 100%.

That's not true. I deny the first person any chance of being helped from torture because doing so denies the second person any chance of being tortured and saves the third person from an additional minor pain. I really don't see that as extreme, and I'm not sure that many people would.

First, I don't see how either of these claims implies that the right answer is 50%. Second, for B), you seem to be simply claiming that interpersonal aggregation of utility is meaningless, rather than making any claims about particular individuals' suffering being more or less important. The problem is that no one is claiming that anyone's suffering will disappear or stop carrying moral force; rather, we are claiming that each person's suffering counts as a reason, and two reasons in favor of a course of action are stronger than one.

Again, I cannot tell where you got these numbers from. But it does mean that they don't care. If agents don't have special preferences over the chances of the experiences that they have, then they just have preferences over the experiences. Then, unless they violate the von Neumann-Morgenstern utility theorem, their expected utility is linear in the probability of getting this or that experience…
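(The linearity claim can be written out in one line. For a binary prospect with probability p of being saved, with outcome labels that are mine rather than the thread's:)

\[
EU(p) = p\,u(\text{saved}) + (1-p)\,u(\text{not saved}),
\qquad
EU(p+\Delta p) - EU(p) = \Delta p\,\bigl[u(\text{saved}) - u(\text{not saved})\bigr],
\]

so the value of a probability increment does not depend on the starting point: going from 0% to 1% is worth exactly as much as going from 99% to 100%.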