Comment author: Cornelius  (EA Profile) 26 March 2017 07:02:49PM *  0 points [-]

Put this way, I change my mind and agree it is unclear. However, to make your paper stronger, I would have included something akin to what you just wrote, to make it clear why you think Gabriel's use of "iteration effects" is unclear and not the same as his usage in the 'priority' section.

I'm not sure how important clarifying something like this is for philosophical argumentation, but for me, this was the one nagging kink in what is otherwise fast becoming one of my favourite "EA-defense" papers.

Comment author: Halstead 26 March 2017 07:22:24PM 0 points [-]

Thanks for the feedback. From memory, I think at the time we reasoned that since it didn't do any work in his argument, it couldn't be what he meant.

Comment author: Cornelius  (EA Profile) 26 March 2017 06:14:25AM *  0 points [-]

I notice this in your paper:

He also mentions that cost-effectiveness analysis ignores the significance of ‘iteration effects’ (page 12)

Gabriel uses 'iterate' in his Ultra-poverty example, so I'm fairly certain that usage is what he was referring to here:

Therefore, they would choose the program that supports literate men. When this pattern of reasoning is iterated many times, it leads to the systematic neglect of those at the very bottom, a trend exemplified by how EAs systematically neglect focusing on the very bottom in the first world. This is unjust (with my edits)

So it's the same with using the DALY to assess cost-effectiveness. He is concerned that if you scale up or replicate a program that is cost-effective according to DALY calculations, you would ignore iteration effects whereby a subset of those receiving the treatment might be systematically neglected - and that this goes against principles of justice and equality. Therefore, using cost-effectiveness as a means of deciding what is good or which charity to fund is on morally shaky ground (according to Gabriel). This is how I understood him.

Comment author: Halstead 26 March 2017 11:18:12AM *  2 points [-]

Thanks for this. I have two comments. Firstly, I'm not sure he's making a point about justice and equality in the 'quantification bias' section. If his criticism of DALYs works, then it works on straightforward consequentialist grounds - DALYs are the wrong metric of welfare. (On this, see our footnote 41.)

Secondly, the claim about iteration effects is neither necessary nor sufficient to get to his conclusion. If the DALY metric inappropriately ignores hope, then it doesn't really matter whether a decision about healthcare resource distribution on the basis of DALYs is made once or is iterated. Either way, DALYs would ignore an important component of welfare.

Comment author: Evan_Gaensbauer 24 March 2017 09:22:31PM 3 points [-]

Upvoted. Maybe this is just typical academic style, but when you address Gabriel it seems you're attributing the points of view presented to him, whereas when I read the original paper I got the impression he was collating and clarifying some criticisms of effective altruism so they'd be coherent enough to effect change. One thing about mainstream journalism is that it's usually written for a specific type of audience the editor has in mind, even if the outlet claims to be seeking a general audience, and so it spins things a certain way. While criticisms of EA definitely aren't what I'd call sensationalistic, they're written in the style of a rhetorical list of likes and dislikes about the EA movement. It's taken for granted that the author's implied position is somehow a better alternative to what EA is currently doing, as if no explanation is needed.

Gabriel fixes this by writing up the criticisms of EA in a way that lets us understand what about the movement would need to change to satisfy critics, if we were indeed to agree with them. Really, except for the pieces published in the Boston Review, I feel like other criticisms of EA were written not for EA at all, but rather as reviews of EA for other do-gooders, warning them to stay away from the movement. It's not the job of critics to solve all our problems for us, but for a movement that is at least willing to try to change in the face of criticism, it's frustrating that nobody takes us up on the opportunity, given the blindspots we may have, and tries to be constructive.

Comment author: Halstead 25 March 2017 09:05:10AM 1 point [-]

Thanks for the comment! We do go to some length to make clear that we're unsure whether Gabriel himself endorses the objections. We're pretty sure he endorses some (systemic change, counterfactuals), but less sure about the others.

Comment author: Halstead 20 March 2017 04:41:47PM *  4 points [-]

Very insightful post. One note: reversibility considerations seem to count against some but not all political causes. Most obviously, they count against taking sides on hot, controversial political issues, such as EU membership or support for a specific political party. However, they don't count against low-heat political issues. Some important ones include more evidence-based policy, improving forecasting in government, and changing metrics in health resource prioritisation.

Comment author: MichaelDello 14 March 2017 11:15:39AM 2 points [-]

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

Comment author: Halstead 16 March 2017 02:57:38PM 1 point [-]

Thanks for this. It'd be interesting if there were survey evidence on this. Some anecdotal stuff the other way... On the EA Funds page, Beckstead mentions person-affecting views as one of the reasons one might not go into far-future causes (https://app.effectivealtruism.org/funds/far-future). Some GiveWell staffers apparently endorse person-affecting views and avoid far-future causes on that basis - http://blog.givewell.org/2016/03/10/march-2016-open-thread/#comment-939058.

Comment author: MichaelDello 14 March 2017 11:18:38AM 0 points [-]

Thanks for this, John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co. don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Comment author: Halstead 16 March 2017 02:50:40PM 0 points [-]

Thanks for your comment. I agree with Michael Plant's response below. I am not saying that there will be a preponderance of suffering over pleasure in the future. I am saying that if you ignore all future pleasure and take account only of future suffering, then the future is astronomically bad.

Comment author: Daniel_Eth 10 March 2017 10:33:51AM *  6 points [-]

"Adding future possible people with positive welfare does not make the world better."

I find that claim ridiculous. How could giving the gift of a joyful life have zero value?

Comment author: Halstead 10 March 2017 11:17:23AM 7 points [-]

Yes, I agree, but many people apparently do not.


The asymmetry and the far future

TL;DR: One way to justify support for causes which mainly promise near-term but not far future benefits, such as global development and animal welfare, is the 'intuition of neutrality': adding possible future people with positive welfare does not add value to the world. Most people who endorse claims like this...
Comment author: Halstead 19 January 2017 04:40:55PM *  11 points [-]

I agree with the general point made here and probably many of the examples. However, I think we need to be aware of a tendency, which I have noticed in the utilitarian decision-procedure literature, to over-emphasise the extent of the overlap between utilitarianism and common-sense morality. The obvious practical motivation for this is to make utilitarianism seem more intuitively palatable, both to others and to ourselves. This incentive exists even if it is unjustified by an ultimate utilitarian rationale. Which quick-and-dirty heuristics or other decision procedures we ought to use seems a really difficult empirical question, and we shouldn't settle too quickly on what aligns with common-sense morality, given the obvious bias just mentioned.

E.g. I've seen it said a number of times (including, I think, by Parfit) that utilitarianism as applied to decision procedures justifies significant parental partiality, without much in the way of argument. I find this definitely non-obvious, and I can see persuasive arguments in the other direction, even at the level of decision procedures.

Something to be aware of, but great post!