Comment author: Jeffhe  (EA Profile) 12 April 2018 10:37:12PM *  0 points [-]

Hey Alex,

Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad"?

I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.

Now, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

However, based on my preferred sense of "more pain", two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.

Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say)?

I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy's suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that "One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter." Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied, because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn't begrudge another person an improvement that costs us nothing. Amy shouldn't begrudge Susie an improvement that costs her nothing.
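
(To make that definition concrete, here is a minimal formalization; the notation is mine rather than Otsuka's, with u_i(A) standing for person i's level of benefit under distribution A:

    A \succ_{Pareto} B \iff \big(\forall i:\ u_i(A) \ge u_i(B)\big) \wedge \big(\exists j:\ u_j(A) > u_j(B)\big)

In the example, "only Amy suffers" leaves Amy no worse off and Susie strictly better off than "Amy and Susie both suffer", so both conditions hold; swap Bob in as the single sufferer and the universal condition fails, since Bob is made worse off.)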

Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of "more pain". And since I think my preferred sense of "more pain" is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to paying any attention to people whom I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.
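
To make concrete what that commitment amounts to, here is a minimal sketch of the kind of weighted lottery I have in mind (a rough illustration only; the names, numbers, and function are hypothetical, not anything from the discussion above):

    import random

    def pick_person_to_help(suffering_by_person):
        """Give each person a chance of being helped proportional to their suffering."""
        people = list(suffering_by_person)
        weights = [suffering_by_person[p] for p in people]
        return random.choices(people, weights=weights, k=1)[0]

    # Hypothetical, purely illustrative suffering levels:
    print(pick_person_to_help({"Amy": 10, "Susie": 10, "Bob": 10}))

Someone whose suffering is twice as great gets twice the chance of being the one helped; no one I can affect is given a zero chance.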

It also has other pleasing properties, such as its fit with the veil of ignorance, as discussed in other comments.

The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone's position). However, as I argued, this stipulation is not true OF the real world because each of us didn't actually have an equal chance of being in any of our positions, and what we should do should be based on the facts, and not on a stipulation. In kbog's latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because "The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system." I have yet to respond to this latest reply because I have been too busy arguing about our senses of "more pain", but if I were to respond, I would say this: "I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world." Anyways, I don't want to say too much here, because kbog might not see it and it wouldn't be fair if you only heard my side. I'll respond to kbog's reply eventually (haha) and you can follow the discussion there if you wish.

Let me just add one thing: Based on Singer's intro to Utilitarianism, Harsanyi argued that the veil of ignorance also entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls' claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn't support classical utilitarianism, which says we ought to maximize total utility, not average utility.
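
(For reference, the usual reconstruction of Harsanyi's argument is a one-line expected-utility calculation, sketched here with n positions, u_i for the utility of occupying position i, and the stipulated equal 1/n chance of occupying each:

    \mathbb{E}[u] = \sum_{i=1}^{n} \frac{1}{n} u_i = \frac{1}{n} \sum_{i=1}^{n} u_i

so maximizing one's expected utility behind the veil comes to the same thing as maximizing average utility.)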

One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else.

Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.

To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers could matter in themselves.

Comment author: Alex_Barry 13 April 2018 10:09:42AM *  1 point [-]

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to paying any attention to people whom I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

The argument is that if:

  • The amount of 'total pain' is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing).
  • There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured, if you liked, for a less extreme but clearly applicable case).
  • In that case the amount of 'total pain' is always at least that very large number, such that none of your actions can change it at all.
  • Thus, since the level of 'total pain' is the morally important thing, all of your possible actions are morally equivalent (you would disagree with this implication due to your adoption of the Pareto principle; see the sketch below).
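
A minimal sketch of the structure being claimed here, with invented numbers and with 'total pain' computed as the maximum individual suffering (my reading of the view under discussion):

    def total_pain_max(sufferings):
        """'Total pain' on the max view: the worst individual suffering, wherever it occurs."""
        return max(sufferings)

    alien = 1000                 # a hypothetical unreachable being in extreme suffering
    earth_before = [7, 5, 3]     # suffering on Earth before we act (arbitrary units)
    earth_after = [2, 1, 0]      # suffering on Earth after we relieve a great deal of it

    print(total_pain_max(earth_before + [alien]))  # 1000
    print(total_pain_max(earth_after + [alien]))   # 1000, unchanged by anything we did

Whatever we do on Earth, the 'total pain' figure stays pinned at the alien's level.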

As I mentioned, I think you escape this basic formulation of the problem through your adoption of the Pareto principle, but a more complicated version causes the same issue:

This is essentially just applying the non-identity problem to the example above. (Weirdly enough, I think the best explanation I've seen of the non-identity problem is the second half of the 'The future' section of Derek Parfit's Wikipedia page.)

The argument goes something like:

  • D1 If we adopt the definition that 'total pain' is the maximal pain experienced by any person whose pain we can affect (an attempt to incorporate the Pareto principle into the definition for simplicity's sake).
  • A1 At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so that most of the future has no one in it, the wiping out is likely to involve extreme pain for some.)
  • A2 Due to the chaotic nature of the world, and the strong dependence of personal identity on the timing of birth (if the circumstances of one's conception change even very slightly, then your identity will almost certainly be completely different), any action in the world now will, within a few generations, result in a completely different set of people existing.
  • C1 Thus by A1 the future is going to contain someone experiencing extreme pain, but by A2 exactly who this person is will vary across different courses of action, so by D1 the 'total pain' is uniformly very high whichever action we take (see the sketch below).
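
Again a rough sketch with invented numbers, this time restricting the maximum to people whose pain we can affect, as in D1 (the 'future person' labels are hypothetical placeholders):

    def total_pain_affectable(pain_by_person, affectable):
        """D1-style 'total pain': the maximum pain among people we can affect."""
        return max(pain for person, pain in pain_by_person.items() if person in affectable)

    # By A2 the two actions lead to different future people existing;
    # by A1 each resulting future contains someone in extreme pain.
    future_if_action_A = {"future person X": 1000, "present person": 3}
    future_if_action_B = {"future person Y": 1000, "present person": 1}
    affectable = {"future person X", "future person Y", "present person"}

    print(total_pain_affectable(future_if_action_A, affectable))  # 1000
    print(total_pain_affectable(future_if_action_B, affectable))  # 1000, uniformly very high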

This is similar to the point made in JanBrauner's comment; however, I did not find that your response to their comment particularly engaged with the core point of the extreme unpredictability of the maximum pain caused by an act.

After your most recent comment I am generally unsure exactly what you are arguing for in terms of moral theories. When arguing about which form of pain is morally important, you seem to make a strong case that one should measure the 'total pain' in a situation solely by whichever pain involved is most extreme. However, when discussing moral recommendations you don't focus exclusively on this. Thus I'm not sure if this comment and its examples will miss the mark completely.

(There are also more subtle defenses, such as those relating to how much one cares about future people, etc., which have thus far been left out of the discussion.)

Comment author: Alex_Barry 13 April 2018 09:03:31AM 0 points [-]

are you using "bad" to mean "morally bad"?

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead, one converts each person's experience into 'utility' (or 'moral badness', etc.) on a personal level, and then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (in some sense it feels like it respects your reason for caring about pain more).
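
As a minimal sketch of the aggregation I mean (assuming each person's experience has already been converted into a personal badness number; the value 4 below is an arbitrary placeholder):

    def global_badness(personal_badness):
        """Classical aggregation: sum each person's badness into one global figure."""
        return sum(personal_badness)

    print(global_badness([4]))     # one person in a given pain
    print(global_badness([4, 4]))  # two people in that same pain: twice as bad

This is the sense in which 'two people experiencing the same pain is twice as bad as one person experiencing that pain' falls out of the standard approach.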

I bring this up since you are approaching this from a different angle than usual, which makes people's standard lines of reasoning seem more complex.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to paying any attention to people whom I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

I'll discuss this in a separate comment since I think it is one of the strongest arguments against your position.

I don't know much about the veil of ignorance, so I am happy to grant you that it does not support total utilitarianism.

I believe it is not always right to prevent the morally worse case.

Then I am really not sure at all what you mean by 'morally worse' (or 'right'!). In light of this, I am now completely unsure what you have been arguing this entire time.

Comment author: Jeffhe  (EA Profile) 10 April 2018 09:14:32PM 1 point [-]

Hey Alex,

Thanks for your reply. I can understand why you'd be extremely confused, because I think I was in error to deny the intelligibility of the utilitarian sense of "more pain".

I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of "more pain", and then presenting an argument for why my sense of "more pain" is the one that really matters.

I'd be interested to know what you think.

Comment author: Alex_Barry 12 April 2018 01:13:34PM *  1 point [-]

Thanks for getting back to me. I've read your reply to kbog, but I don't find your argument especially different from those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again, I see why there is a distinction one could care about, but I don't find it personally compelling.

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

A couple of brief points in favour of the classical approach:

  • It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)
  • It also has other pleasing properties, such as its fit with the veil of ignorance, as discussed in other comments.

One additional thing to note is that dropping the comparability of 'non-purely experientially determined' and 'purely experientially determined' experiences (henceforth 'Comparability') does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other.

For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else. To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

Comment author: 80000_Hours 08 April 2018 06:10:28PM 1 point [-]

Hi Alex, thanks, I fixed 30. 2, 4, 6 and 15 are working for me - can you email over a screenshot of the error you're getting?

Comment author: Alex_Barry 08 April 2018 11:33:42PM 0 points [-]

Huh, weirdly they all seem to work again now; they used to take me to the same page as any non-valid URL, e.g. https://80000hours.org/not-a-real-URL/

Comment author: Alex_Barry 08 April 2018 05:03:53PM *  0 points [-]

The links to 2, 4, 6 and 15 seem broken on the 80K end, I just get 'page not found' for each.

Link 30 also does not work, but that is just because it starts with an unnecessary "effective-altruism.com/" before the YouTube link.

I checked and everything else seems to work.

Comment author: Alex_Barry 08 April 2018 04:54:45PM 2 points [-]

Thanks for writing this! The interaction between donations and the reductions in personal allowance is interesting, and I would not have thought of it otherwise.

Comment author: Jan_Kulveit 07 April 2018 11:53:29PM *  2 points [-]

Thanks for the feedback.

Judging from this and some private feedback, I think it would actually make sense to create some kind of database of activities, containing not only descriptions but also info like how intellectually/emotionally/knowledge-demanding each activity is, what materials you need, what the prerequisites are, best practices... and ideally also data about the presentations and feedback.

My rough estimate of time costs is 20h general team meetups, 10h syncing between the team and the CZEA board, 70h individual time spent on planning and preparation, 50h activity development, 50h survey design, playing with data, writing this, etc. I guess in your case you are not counting the time cost to the people giving the talks of preparing them?

Comment author: Alex_Barry 08 April 2018 01:48:42PM *  0 points [-]

One reservation I would have about the usefulness of a database vs. lots of write-ups 'in context' like these is that I think how well activities work can depend heavily on the wider structure and atmosphere of the retreat, as well as on the events that have come before. I would probably be happier with a classification of 2 or 3 different types of retreat, and the activities that seem to work best in each. (However, we should not let the perfect be the enemy of the good here, and there are probably a number of things that work well across different retreat styles.)

Your time costs seem largely similar to mine then (on the things we both did); I had not anticipated the large amount of time you spent on survey design etc. I don't think my time cost would change much if I included the talk prep, since I would be surprised if it totaled >10 hours.

Comment author: Jorgen_Ljones 06 April 2018 08:55:51PM 2 points [-]

As of now it is quite low effort. We have a website that works like a donation portal, providing information about GW orgs in Norwegian, general arguments for why one should give effectively, and transparent information about the Effect Foundation. The main value here is that the information is provided in Norwegian and that we support Norwegian payment methods. These payment methods have no or low fees, so there are some savings in transaction costs by donating through us rather than directly.

In addition to the website, we use Facebook to promote the organizations and effective giving, and we use the new Facebook fundraising feature. Also, we have a promotional video shown on national television 1-2 days a year (http://effective-altruism.com/ea/l0/eacommersials_on_national_tv_in_norway_for_free/).

We have experimented with donor events (AMF visited Oslo last year for a talk and a get-together at a pub afterwards) and with reaching out to companies and their CSR projects (http://effective-altruism.com/ea/1js/project_report_on_the_potential_of_norwegian/).

Comment author: Alex_Barry 07 April 2018 02:33:47PM 0 points [-]

Ah great, thanks for the response!

Comment author: Alex_Barry 07 April 2018 02:18:35PM 3 points [-]

Thanks for writing this up!

For your impact review, this seems likely to have some impact on the program of future years' EA: Cambridge retreats. (In particular, it seems likely we will include a version of the 'Explaining Concepts' activity, which we would not have done otherwise; it is also an additional point in favour of CFAR stuff, and another call to think carefully about the space/mood we create.)

I am also interested in the breakdown of how you spent the 200h planning time, since I would estimate the EA: Cam retreat (which had around 45 attendees, and typically had 2 talks on at the same time) took me <100h (probably <2 weeks FTE). Part of this is likely efficiency gains since I worked on it alone, and I expect a large factor to be that I put much, much less effort into the program (<10 hours seems very likely).

Comment author: Henry_Stanley 01 April 2018 04:19:18PM *  7 points [-]

How about a shameless plug for EA Work Club? 😇

This role is also listed there – http://www.eawork.club/jobs/87

Comment author: Alex_Barry 01 April 2018 05:54:01PM *  1 point [-]

Ah, that looks great, thanks! I had not heard about that before.
