Comment author: MichaelPlant 17 August 2017 01:53:57PM 0 points [-]

This is sort of a meta-comment, but there's loads of important stuff here, each of which could have its own thread. Could I suggest someone (else) organises a (small) conference to discuss some of these things?

I've got quite a few things to add on the ITN framework but nothing I can say in a few words. Relatedly, I've also been working on a method for 'cause search' - a way of finding all the big causes in a given domain - which is the step before cause prioritisation, but that's not something I can write out succinctly either (yet, anyway).

Comment author: Halstead 17 August 2017 04:07:01PM 0 points [-]

I think splitting off these questions would balkanise things too much, making it harder for people interested in this general question to get relevant information.

How should we assess very uncertain and non-testable stuff?

There is a good and widely accepted approach to assessing testable projects - roughly what GiveWell does. It is much less clear how EA research organisations should assess projects, interventions and organisations with very uncertain non-testable impact, such as policy work or academic research. There are some disparate materials on...
Comment author: MichaelPlant 15 August 2017 10:48:28AM 0 points [-]

Sorry, I don't see what your point is. Could you expand?

Comment author: Halstead 15 August 2017 01:58:48PM 1 point [-]

He's saying that the value of the global cereal market alone is $2tr, which exceeds the value of the wholesale drugs market, contra what you say in your piece.

Comment author: Carl_Shulman 30 July 2017 01:37:08AM 2 points [-]

Separately, in the linked Holden blog post, the comparison is made between 100 large impacts and 10,000 small impacts that are each well under 1% as large. That is, the hypothetical compares larger total and per-beneficiary impacts against a smaller total benefit distributed over more beneficiaries.

That's not a good illustration for anti-aggregationism.

(2) Provide consistent, full nutrition and health care to 100 people, such that instead of growing up malnourished (leading to lower height, lower weight, lower intelligence, and other symptoms) they spend their lives relatively healthy. (For simplicity, though not accuracy, assume this doesn’t affect their actual lifespan – they still live about 40 years.)

This sounds like improving health significantly, e.g. by 10% or more, over 14,600 days each (40 years × 365 days), or 1.46 million days across the 100 people. Call it 146,000 disability-adjusted life-days.

(3) Prevent one case of relatively mild non-fatal malaria (say, a fever that lasts a few days) for each of 10,000 people, without having a significant impact on the rest of their lives.

Let's say mild non-fatal malaria costs half a life-day per day, and 'a few days' is 6 days. Then the stakes for these 10,000 people are 30,000 disability-adjusted life-days.

146,000 disability-adjusted life-days is a lot more than 30,000 disability-adjusted life-days.
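For what it's worth, here is a minimal sketch of that arithmetic (the 10% health improvement and the half-a-life-day cost per sick day are just the illustrative assumptions above, not measured values):

```python
# Rough check of the disability-adjusted life-day arithmetic above.
# The 10% improvement and half-a-life-day figures are illustrative assumptions.

DAYS_PER_YEAR = 365

# (2) Full nutrition and health care for 100 people over ~40-year lives
people_nutrition = 100
days_each = 40 * DAYS_PER_YEAR             # ~14,600 days per person
improvement = 0.10                         # assume a 10% health improvement
benefit_nutrition = people_nutrition * days_each * improvement
print(f"(2): {benefit_nutrition:,.0f} disability-adjusted life-days")  # ~146,000

# (3) Prevent one mild malaria episode for each of 10,000 people
people_malaria = 10_000
days_sick = 6                              # 'a few days' taken as 6
loss_per_sick_day = 0.5                    # half a life-day lost per sick day
benefit_malaria = people_malaria * days_sick * loss_per_sick_day
print(f"(3): {benefit_malaria:,.0f} disability-adjusted life-days")    # 30,000
```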

Comment author: Halstead 30 July 2017 10:50:54AM 2 points [-]

This is true. Still, for many people, intuitions against aggregation seem to stand up even if the number of people with mild ailments increases without limit (millions, billions, and beyond). For some empirical evidence, see http://eprints.lse.ac.uk/55883/1/__lse.ac.uk_storage_LIBRARY_Secondary_libfile_shared_repository_Content_Voorhoeve,%20A_How%20should%20we%20aggregate_Voorhoeve_How%20should%20we%20aggregate_2014.pdf

Comment author: Halstead 28 July 2017 02:12:42PM *  4 points [-]

I don't agree with the response suggested (recognising that it cites an article I co-authored). The DALY and QALY metrics imply the ARC. It seems reasonable to say that these metrics, or ones like them, are in some sense definitive of EA work in global poverty and health.

Then the question is whether it is correct to aggregate small benefits. It's fair to say there is philosophical disagreement about this, but nevertheless (in my view) there is a strong case to be made that the fully aggregative view is correct. One way to approach this, probably the dominant way in moral philosophy, is to figure out the implications of the various philosophical views and then to choose between the counterintuitive implications each of them has. E.g. you could say that the badness of minor ailments does not aggregate, and then choose between the counterintuitive implications of this view and those of the aggregative view. This seems to be a bad way to go about it because it starts at the wrong level.

What we should do is assess at the level of rationales. The aggregative view has a rationale, viz. (crudely) that more of a good thing is better. Clearly, it's better to cure lots of mild ailments than it is to cure one. The goodness of doing so does not diminish: curing one additional person is always as valuable no matter how many other people you have already cured. If so, it follows that curing enough mild ailments must eventually be better than curing one really bad ailment. A response to this needs to criticise this rationale, not merely point out that it has a weird-seeming implication. Lots of things have weird-seeming implications, including e.g. quantum physics and evolution. Pointing out that quantum physics has counterintuitive implications should not be the primary level at which we debate the truth of quantum physics.

See this - http://spot.colorado.edu/~norcross/Comparingharms.pdf

Comment author: MichaelPlant 02 June 2017 12:22:43PM 4 points [-]

I'm not sure I agree. There's an argument that gossip is potentially useful. Here's a quote from this paper:

Gossip also has implications for the overall functioning of the group in which individuals are embedded. For example, despite its harmful consequences for individuals, negative gossip might have beneficial consequences for group outcomes. Empirical studies have shown that negative gossip is used to socially control and sanction uncooperative behavior within groups (De Pinninck et al., 2008; Elias and Scotson, 1965; Merry, 1984). Individuals often cooperate and comply with group norms simply because they fear reputation-damaging gossip and subsequent ostracism.

Comment author: Halstead 02 June 2017 12:37:46PM 2 points [-]

I can't access the linked-to studies. Even if true, this only justifies talking behind people's backs as a sanction for uncooperative behaviour. And I suspect that there are much better ways to sanction uncooperative behaviour.

Comment author: Halstead 01 June 2017 05:46:54PM 3 points [-]

Brief note: one important norm of considerateness which it is easy to neglect is not talking about people behind their backs. I think there are strong consequentialist reasons not to do this: it makes you feel bad, it's hard to remain authentic when you next see that person, and it makes others think a lot less of you.

Comment author: MichaelPlant 15 May 2017 12:24:50AM 0 points [-]

Peter, do you have any figures for GiveDirectly? Also, what is the measure of cost-effectiveness you're thinking of? Here's GiveWell's spreadsheet which, AFAICT, is in terms of "cost per life saved equivalent", which I'm not sure how to compare to DALYs or anything else (in fact, even after some searching, I'm still not sure what "cost per life saved equivalent" even is).

Comment author: Halstead 16 May 2017 02:05:30PM *  3 points [-]

Michael, the definition is here - https://docs.google.com/spreadsheets/d/1KiWfiAGX_QZhRbC9xkzf3I8IqsXC5kkr-nwY_feVlcM/edit#gid=1034883018

On the results tab, if you hover over the "cost per life saved equivalent" box, it says "A life saved equivalent is based on the 'DALYs per death of a young child averted' input each individual uses. What a life saved equivalent represents will therefore vary from person to person."
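To illustrate how I read that definition, here is a minimal sketch with made-up numbers (not GiveWell's), assuming a donor who sets the "DALYs per death of a young child averted" input to 35:

```python
# Hypothetical illustration of "cost per life saved equivalent", as I read the
# definition quoted above. All numbers below are invented for the example.

dalys_per_young_child_death = 35      # the donor-specific moral-weight input
total_cost_usd = 700_000              # hypothetical spending
total_dalys_averted = 7_000           # hypothetical estimated benefit, in DALYs

# Convert the DALY-denominated benefit into "life saved equivalents"
life_saved_equivalents = total_dalys_averted / dalys_per_young_child_death

cost_per_lse = total_cost_usd / life_saved_equivalents
print(f"{life_saved_equivalents:.0f} life saved equivalents")       # 200
print(f"${cost_per_lse:,.0f} per life saved equivalent")            # $3,500
```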

I agree this is too hard to find and it would be good if this were fixed. I'd also like to see the assumptions made about this figure more clearly spelled out.

Comment author: George_H 05 May 2017 08:43:30PM 2 points [-]

[comment 2/2 on GD's uncaptured effectiveness value via systemic influence - separating comments as they raise related but essentially distinct issues]

UBI: GD’s universal basic income experiment is currently the world’s largest. While nobody can really know what effects a nationally implemented UBI would have, it could be an incredibly effective tool for reducing inequality, unlocking human flourishing, etc. It’s easy to discount the value of the messy and unmeasurable, but if GD’s work hastens the route to nations considering the idea seriously, then this could comfortably trump all its other effectiveness benefits (and speaking of Trump - perhaps UBI adoption would reduce the economic fears that nationalist demagogues can exploit, leading to huge positive impacts in everything from aid and trade policy to X-risk concerns). Systemic change is the only way to achieve real global progress, and promoting UBI is a plausibly good bet in such a highly unpredictable sphere. Donating to GD may be the best buy for those who wish to see the idea tested properly.

I can summarise all of this (including my previous comment) by saying that GD may well be a lot more effective than the quantifiables suggest. In other words, I think GD’s potential for systemic influence could well far outweigh the deficit in provable cost-effectiveness that they have vs some of the other top charities. That being said, this is a very uncertain area and I also donate elsewhere.

So I’m in general agreement with your points on discounting anti-paternalism, and am also aware that I may have picked up a slight pro-GD bias as a result of doing a load of CEA corporate outreach work with them recently. But you did mention that some of your conclusions may not hold if we were to relax the assumption that GiveWell’s cost-effectiveness estimates are accurate. While the points I’ve raised around uncaptured value are (quite rightly) not GiveWell’s territory, do they persuade you to relax this assumption somewhat? And would this influence where you might donate?

Should also add that it’s great to see you highlight that our other top charities are also not paternalistic - more noise should be made about this as a lot of people care. More broadly then I’d also love to hear what uncaptured effectiveness impacts our other top charities might be having, as I’m not really comparing like with like in a post such as this. In fact a discussion of uncaptured value probably deserves a full post of its own, led by someone with more evaluation expertise than me!

Comment author: Halstead 06 May 2017 10:29:28AM 1 point [-]

Hey thanks for this. I think your case for GD is really compelling and people need to bear it in mind.

I wouldn't say that we should discount anti-paternalism. My point is really to figure out what follows from anti-paternalism, conceived as an intrinsically desirable goal.

For the reasons you give and for some of those discussed with Ben Hoffman, there might well be instrumental reasons to have a perhaps weak presumption against more paternalistic interventions. This is a difficult question, and one I don't have a particularly firm view on.

Comment author: BenHoffman 05 May 2017 08:33:59PM *  1 point [-]

It sounds like we might be coming close to agreement. The main thing I think is important here is taking seriously the notion that paternalism is evidence about the other things we care about, and thus an important instrumental proxy goal, not just something we have intrinsic preferences about. More generally, the thing I'm pushing back against is treating every moral consideration as though it were purely an intrinsic value to be weighed against other intrinsic values.

I see people with a broadly utilitarian outlook doing this a lot, perhaps because people from other moral perspectives don't have a lot of practice grounding their moral intuitions in a way that is persuasive to utilitarians. Autonomy in particular is something where we need to distinguish purely intrinsic considerations (e.g. factory farmed animals are unhappy because they have little physical autonomy) from instrumental pragmatic considerations (e.g. interventions that give poor people more autonomy preserve information by letting them use local knowledge that we do not have, while paternalistic interventions overwrite local information).

Thus, we should think about requiring higher impact for paternalistic interventions as building in a margin for error, not just as outweighing the anti-paternalism intuition. If a paternalistic intervention has strong evidence of a large benefit, it makes sense to describe it as overcoming the paternalism objection, but not rebutting it - we should still be skeptical of it relative to a non-paternalistic intervention with the same evidence; it's just that sometimes we should intervene anyway.

Comment author: Halstead 06 May 2017 10:21:55AM *  0 points [-]

Yes, I'm not sure I disagree with much of what you have said.

I don't want my argument to be taken to show that we should ignore paternalism as a potentially important instrumental factor. Showing the implications of paternalism as a non-instrumentally important goal does not show anything about its instrumental importance. Paternalism might not count in favour of GD as a non-instrumental goal, but count in favour of it as an instrumental goal.

It's important to separate these two types of concern. I do think some people would have the non-instrumental justification in mind, so it's important to get clear on that.
