Comment author: MichaelPlant 31 March 2017 11:11:59AM 3 points

Thanks for this, Michelle. I don't think I've quite worked out how to present what I mean, which is probably why it isn't clear.

To try again, what I'm alluding to are argumentative scenarios where X and Y are disagreeing, and it's apparent to both of them that X knows what view he/she holds and what its weird implications are, and X still accepts the view as being, on balance, right.

Intuition jousting is where Y then says things like "but that's nuts!" Note that Y isn't providing an argument at this point; it's a purely rhetorical move that uses social pressure ("I don't want people to think I'm nuts") to try to win the argument. I don't think conversations are very interesting or useful at this stage. Note also that X is able to turn this around on Y and say "but your view has different weird implications of its own, and that's more nuts!" It's like a joust because the two people are just testing who's able to hold on to their view under the pressure from the other.

I suppose Y could counter-counter-attack X and say "yeah, but more people who have thought about this deeply agree with me". It's not clear what logical (rather than rhetorical) force this adds. It seems like 'deeply' would, in any case, be doing most of the work in that scenario.

I'm somewhat unsure how to think about moral truth here. However, if you do think there is one moral truth to be found, I would think you would really want to understand people who disagree with you, in case you might be wrong. As a practical matter, this speaks strongly in favour of engaging in considerate, polite and charitable disagreement ("intuition exchanging") rather than intuition jousting anyway. From my anecdata, there are both types in the EA community, and it's only the jousting variety I object to.

Comment author: Halstead 31 March 2017 12:23:47PM 1 point

Appealing to rhetoric in this way is, I agree, unjustifiable. But I thought there might be a valid point that tacked a bit closer to the spirit of your original post. There is no agreed methodology in moral philosophy, which I think explains a lot of persisting moral disagreement. People eventually start just trading which intuitions they think are the most plausible - "I'm happy to accept the repugnant conclusion, not the sadistic one", etc. But intuitions are ten a penny, so this doesn't really take us very far - smart people have summoned intuitions against the analytical truth that betterness is transitive.

What we really need is an account of which moral intuitions ought to be held on to and which ones we should get rid of. One might appeal to cognitive biases, to selective evolutionary debunking arguments, and so on. For example:

  1. One might resist prioritarianism by noting that people seamlessly shift from accepting that resources have diminishing marginal utility to accepting that utility itself has diminishing marginal utility. The latter amounts to an intuition that utility diminishes with respect to utility itself, which makes no sense (see the sketch after this list) - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.5213&rep=rep1&type=pdf.

  2. Debunk an anti-aggregative view by appealing to people's failure to grasp large numbers.

  3. Debunk an anti-incest norm by noting that it is explained by evolutionary selective pressure rather than apprehension of independent normative truth.
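To spell out the slide in (1), here is a minimal sketch in my own notation (illustrative assumptions, not drawn from the linked paper): diminishing marginal utility of resources is a claim about how resources translate into welfare, whereas prioritarianism claims that welfare itself carries diminishing marginal moral weight.

  \[
    \text{Diminishing marginal utility of resources:}\quad u_i(r) = \sqrt{r}\quad\text{(utility concave in resources } r\text{)}
  \]
  \[
    \text{Prioritarian social value:}\quad W = \sum_i g(u_i), \qquad g \text{ strictly concave, e.g. } g(u) = \sqrt{u}
  \]

Accepting that each \(u_i\) is concave in \(r\) says nothing, by itself, about whether \(g\) should be concave; running the two claims together is the conflation item (1) points to.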

You might want to look at Huemer's stuff on intuitionism: https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/revisionary-intuitionism/EE5C8F3B9F457168029C7169BA1D62AD

Comment author: weeatquince (EA Profile) 30 March 2017 09:19:03AM 1 point

This is a good paper and well done to the authors.

I think section 3 is very weak. I am not flagging this as a flaw in the argument, just as the area where I see the most room for improvement in the paper and/or the most need for follow-up research. The authors do say that more research is needed, which is good.

Some examples of what I mean when I say the argument is weak:

- The paper says it is "reasonable to believe that AMF does very well on prioritarian, egalitarian, and sufficientarian criteria". "Reasonable to believe" is not a strong claim. No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about, and to evaluate charities on those metrics. This could be done but is not happening.
- The paper says Iason "fail[s] to show that effective altruist recommendations actually do rely on utilitarianism", but the paper also fails to show that effective altruist recommendations do not rely on utilitarianism.
- Etc.

Why I think more research is useful here:

- Because when the strongest case you can make for EA to people with equality as a core moral intuition begins with "it is reasonable to believe . . .", it is very hard to make EA useful to such people. For example, when I meet people new to EA who care a lot about equality, making the case that 'if you care about minimising suffering, this "AMF" thing comes out top, and it is reasonable to assume that if you care about equality it could also be at the top, because it is effective and helps the poorest' carries a lot less weight than saying: 'hey, we funded a bunch of people who, like you, care foremost about equality to map out their values and rank charities, and this came out top.'

Note: this is a cross-post of a summarised comment on this paper from a discussion on Facebook: https://www.facebook.com/groups/798404410293244/permalink/1021820764618273/?comment_id=1022125664587783

Comment author: Halstead 30 March 2017 10:47:21AM * 1 point

Hi,

  1. We reference a number of lines of evidence suggesting that donating to AMF does well on sufficientarian, prioritarian, and egalitarian criteria. See footnotes 23 and 24. Thus, we provide evidence for our conclusion that 'it is reasonable to believe that AMF does well on these criteria'. This, of course, is epistemically weaker than claims such as 'it is certain that AMF ought to be recommended by prioritarians, egalitarians and sufficientarians'. You seem to suggest that concluding with a weak epistemic claim is inherently problematic, but that can't be right. Surely, if the evidence provided only justifies a weak epistemic claim, then making a weak epistemic claim is entirely appropriate.

  2. You seem to criticise us for the movement not yet having provided a comprehensive algorithm mapping values onto actions. But arguing that the movement is failing is very different from arguing that the paper fails on its own terms. It is not as though we frame the paper as: "here is a comprehensive account of where you ought to give if you are an egalitarian or a prioritarian". As you say, more research is needed, but we already say this in the paper.

  3. Showing that 'Gabriel fails to show that EA recommendations rely on utilitarianism' is a different task to showing that 'EA recommendations do not rely on utilitarianism'. Showing that an argument for a proposition P fails is different to showing that not-P.

Comment author: Cornelius (EA Profile) 26 March 2017 07:02:49PM * 0 points

Put this way, I change my mind and agree it is unclear. However, to make the paper stronger, I would have included something akin to what you just wrote, to make it clear why you think Gabriel's use of "iteration effects" is unclear and not the same as his usage in the 'priority' section.

I'm not sure how important clarifying something like this is for philosophical argumentation, but for me, this was the one nagging kink in what is otherwise fast becoming one of my favourite "EA-defense" papers.

Comment author: Halstead 26 March 2017 07:22:24PM 0 points

Thanks for the feedback. From memory, I think at the time we reasoned that, since it didn't do any work in his argument, it couldn't be what he meant by it.

Comment author: Cornelius (EA Profile) 26 March 2017 06:14:25AM * 0 points

I notice this in your paper:

He also mentions that cost-effectiveness analysis ignores the significance of ‘iteration effects’ (page 12)

Gabriel uses 'iterate' in his Ultra-poverty example, so I'm fairly certain that usage is what he was trying to refer to here:

Therefore, they would choose the program that supports literate men. When this pattern of reasoning is iterated many times, it leads to the systematic neglect of those at the very bottom, a trend exemplified by how EAs systematically neglect focusing on the very bottom in the first world. This is unjust (with my edits)

So it's the same with using the DALY to assess cost-effectiveness. He is concerned that if you scale up or replicate a program that is cost-effective according to DALY calculations, you would ignore iteration effects whereby a subset of those receiving the treatment might be systematically neglected - and that this goes against principles of justice and equality. Therefore, using cost-effectiveness as a means of deciding what is good or which charity to fund is on morally shaky ground (according to Gabriel). This is how I understood him.

Comment author: Halstead 26 March 2017 11:18:12AM * 2 points

Thanks for this. I have two comments. Firstly, I'm not sure he's making a point about justice and equality in the 'quantification bias' section. If his criticism of DALYs works, then it works on straightforward consequentialist grounds - DALYs are the wrong metric of welfare. (On this, see our footnote 41.)

Secondly, the claim about iteration effects is neither necessary nor sufficient to get to his conclusion. If the DALY metric inappropriately ignores hope, then it doesn't really matter whether a decision about healthcare resource distribution on the basis of DALYs is made once or is iterated. Either way, DALYs would ignore an important component of welfare.

Comment author: Evan_Gaensbauer 24 March 2017 09:22:31PM 4 points

Upvoted. Maybe this is just what's typical for academic style, but when you address Gabriel it seems you're attributing the points of view presented to him, whereas when I read the original paper I got the impression he was collating and clarifying some criticisms of effective altruism so they'd be coherent enough to effect change. One thing about mainstream journalism is that it's usually written for a specific type of audience the editor has in mind, even if the source claims to be seeking a general audience, and so they spin things a certain way. While criticisms of EA definitely aren't what I'd call sensationalistic, they're written in the style of a rhetorical list of likes and dislikes about the EA movement. It's taken for granted that the implied position of the author is somehow a better alternative to what EA is currently doing, as if no explanation is needed.

Gabriel fixes this by writing up the criticisms of EA in a way that lets us understand what about the movement would need to change to satisfy critics, if we were indeed to agree with them. Really, except for the pieces published in the Boston Review, I feel like other criticisms of EA were written not for EA at all, but rather as reviews of EA for other do-gooders, warning them to stay away from the movement. It's not the job of critics to solve all our problems for us, but for a movement that is at least willing to try to change in the face of criticism, it's frustrating that nobody takes us up on the opportunity, given whatever blindspots we may have, and tries to be constructive.

Comment author: Halstead 25 March 2017 09:05:10AM 1 point

Thanks for the comment! We do go to some length to make clear that we're unsure whether Gabriel himself endorses the objections. We're pretty sure he endorses some (systemic change, counterfactuals), but less sure about the others.

Comment author: Halstead 20 March 2017 04:41:47PM * 5 points

Very insightful post. One note - I think reversibility considerations count against some but not all political causes. Most obviously, they count against taking sides on hot, controversial political issues, such as EU membership or support for some specific political party. However, they don't count against low-heat political issues. Some important ones include more evidence-based policy, improving forecasting in government, and changing metrics in health resource prioritisation.

Comment author: MichaelDello 14 March 2017 11:15:39AM 2 points

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

Comment author: Halstead 16 March 2017 02:57:38PM 1 point

Thanks for this. It'd be interesting if there were survey evidence on this. Some anecdotal stuff the other way... On the EA Funds page, Beckstead mentions person-affecting views as one of the reasons that one might not go into far future causes (https://app.effectivealtruism.org/funds/far-future). Some GiveWell staffers apparently endorse person-affecting views and avoid the far future stuff on that basis - http://blog.givewell.org/2016/03/10/march-2016-open-thread/#comment-939058.

Comment author: MichaelDello 14 March 2017 11:18:38AM 0 points

Thanks for this, John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Comment author: Halstead 16 March 2017 02:50:40PM 0 points

Thanks for your comment. I agree with Michael Plant's response below. I am not saying that there will be a preponderance of suffering over pleasure in the future. I am saying that if you ignore all future pleasure and only take account of future suffering, then the future is astronomically bad.

Comment author: Daniel_Eth 10 March 2017 10:33:51AM * 6 points

"Adding future possible people with positive welfare does not make the world better."

I find that claim ridiculous. How could giving the gift of a joyful life have zero value?

Comment author: Halstead 10 March 2017 11:17:23AM 8 points

Yes, I agree, but many people apparently do not.

The asymmetry and the far future

TL;DR: One way to justify support for causes which mainly promise near-term but not far future benefits, such as global development and animal welfare, is the 'intuition of neutrality': adding possible future people with positive welfare does not add value to the world. Most people who endorse claims like this...