Comment author: MichaelPlant 02 June 2017 12:22:43PM 3 points [-]

I'm not sure I agree. There's an argument that gossip is potentially useful. Here's a quote from this paper:

Gossip also has implications for the overall functioning of the group in which individuals are embedded. For example, despite its harmful consequences for individuals, negative gossip might have beneficial consequences for group outcomes. Empirical studies have shown that negative gossip is used to socially control and sanction uncooperative behavior within groups (De Pinninck et al., 2008; Elias and Scotson, 1965; Merry, 1984). Individuals often cooperate and comply with group norms simply because they fear reputation-damaging gossip and subsequent ostracism.

Comment author: Halstead 02 June 2017 12:37:46PM 2 points [-]

I can't access the linked-to studies. Even if true, this only justifies talking behind people's backs as a sanction for uncooperative behaviour. And I suspect that there are much better ways to sanction uncooperative behaviour.

Comment author: Halstead 01 June 2017 05:46:54PM 3 points [-]

Brief note: one important norm of considerateness which it is easy to neglect is not talking about people behind their back. I think there are strong consequentialist reasons not to do this: it makes you feel bad, it's hard to remain authentic when you next see that person, and it makes others think a lot less of you.

Comment author: MichaelPlant 15 May 2017 12:24:50AM 0 points [-]

Peter, do you have any figures for GiveDirectly? Also, what is the measure of cost-effectiveness you're thinking of? Here's GiveWell's spreadsheet which, AFAICT, is in terms of "cost per life saved equivalent", which I'm not sure how to compare to DALYs or anything else (in fact, even after some searching, I'm still not sure what "cost per life saved equivalent" even is).

Comment author: Halstead 16 May 2017 02:05:30PM *  3 points [-]

Michael, the definition is here - https://docs.google.com/spreadsheets/d/1KiWfiAGX_QZhRbC9xkzf3I8IqsXC5kkr-nwY_feVlcM/edit#gid=1034883018

On the results tab, if you hover over the "cost per life saved equivalent" box, it says "A life saved equivalent is based on the "DALYs per death of a young child averted" input each individual uses. What a life saved equivalent represents will therefore vary from person to person."

I agree this is too hard to find and it would be good if this were fixed. I'd also like to see the assumptions made about this figure more clearly spelled out.

Comment author: George_H 05 May 2017 08:43:30PM 2 points [-]

[comment 2/2 on GD's uncaptured effectiveness value via systemic influence - separating comments as they raise related but essentially distinct issues]

UBI: GD’s universal basic income experiment is currently the world’s largest. While nobody can really know what effects a nationally implemented UBI would have, it could be an incredibly effective tool for reducing inequality, unlocking human flourishing, etc. It’s easy to discount the value of the messy and unmeasurable, but if GD’s work hastens the route to nations considering the idea seriously then this could comfortably trump all its other effectiveness benefits (and speaking of Trump - perhaps UBI adoption would reduce the economic fears that nationalist demagogues can exploit, leading to huge positive impacts in everything from aid and trade policy to X-risk concerns). Systemic change is the only way to achieve real global progress, and promoting UBI is a plausibly good bet in such a highly unpredictable sphere. Donating to GD may be the best buy for those who wish to see the idea tested properly.

I can summarise all of this (including my previous comment) into saying that GD may well be a lot more effective than the quantifiables suggest. In other words, I think GD’s potential for systemic influence could well far outweigh the deficit in provable cost-effectiveness that they have vs some of the other top charities. That being said, this is a very uncertain area and I also donate elsewhere.

So I’m in general agreement with your points on discounting anti-paternalism, and am also aware that I may have picked up a slight pro-GD bias as a result of doing a load of CEA corporate outreach work with them recently. But you did mention that some of your conclusions may not hold if we were to relax the assumption that GiveWell’s cost-effectiveness estimates are accurate. While the points I’ve raised around uncaptured value are (quite rightly) not GiveWell’s territory, do they persuade you to relax this assumption somewhat? And would this influence where you might donate?

Should also add that it’s great to see you highlight that our other top charities are also not paternalistic - more noise should be made about this as a lot of people care. More broadly then I’d also love to hear what uncaptured effectiveness impacts our other top charities might be having, as I’m not really comparing like with like in a post such as this. In fact a discussion of uncaptured value probably deserves a full post of its own, led by someone with more evaluation expertise than me!

Comment author: Halstead 06 May 2017 10:29:28AM 1 point [-]

Hey thanks for this. I think your case for GD is really compelling and people need to bear it in mind.

I wouldn't say that we should discount anti-paternalism. My point is really to figure out what follows from anti-paternalism, conceived as an intrinsically desirable goal.

For the reasons you give and for some of those discussed with Ben Hoffman, there might well be instrumental reasons to have a perhaps weak presumption against more paternalistic interventions. This is a difficult question, and one I don't have a particularly firm view on.

Comment author: BenHoffman 05 May 2017 08:33:59PM *  1 point [-]

It sounds like we might be coming close to agreement. The main thing I think is important here, is taking seriously the notion that paternalism is evidence about the other things we care about, and thus an important instrumental proxy goal, not just something we have intrinsic preferences about. More generally the thing I'm pushing back against is treating every moral consideration as though it were purely an intrinsic value to be weighed against other intrinsic values.

I see people with a broadly utilitarian outlook doing this a lot, perhaps because people from other moral perspectives don't have a lot of practice grounding their moral intuitions in a way that is persuasive to utilitarians. Autonomy in particular is something where we need to distinguish purely intrinsic considerations (e.g. factory farmed animals are unhappy because they have little physical autonomy) from instrumental pragmatic considerations (e.g. interventions that give poor people more autonomy preserve information by letting them use local knowledge that we do not have, while paternalistic interventions overwrite local information).

Thus, we should think about requiring higher impact for paternalistic interventions as building in a margin for error, not just outweighing the anti-paternalism intuition. If a paternalistic intervention has strong evidence of a large benefit, it makes sense to describe it as overcoming the paternalism objection, but not rebutting it - we should still be skeptical relative to a nonpaternalistic intervention with the same evidence, it's just that sometimes we should intervene anyway.

Comment author: Halstead 06 May 2017 10:21:55AM *  0 points [-]

Yes, I'm not sure I disagree with much of what you have said.

I don't want my argument to be taken to show that we should ignore paternalism as a potentially important instrumental factor. Showing the implications of paternalism as a non-instrumentally important goal does not show anything about the instrumental importance of paternalism. Paternalism might not count in favour of GD as a non-instrumental goal, but count in favour of it as an instrumental goal.

It's important to separate these two types of concern. I do think some people would have the non-instrumental justification in mind, so it's important to get clear on that.

Comment author: BenHoffman 05 May 2017 03:42:53PM 1 point [-]

You're assuming the premise here a bit - that the data collected don't leave out important negative outcomes. In the particular cases you mentioned (tobacco taxes, mandatory seatbelt legislation, smallpox eradication, ORT, micronutrient fortification) my sense is that in most cases the benefits have been very strong, strong enough to outweigh a skeptical prior on paternalist interventions. But that doesn't show that we shouldn't have the skeptical prior in the first place. Seeing Like A State shows some failures; we should think of those too.

Comment author: Halstead 05 May 2017 05:08:02PM 1 point [-]

I think I agree with maybe having a sceptical prior for paternalistic interventions, but I'm unsure about how strong such a prior would be. The information on what has worked in the past would determine the prior I should have when assessing a new intervention. If I looked at all past public health interventions and paternalism was not correlated at all with quality of outcome, even correcting for reasonable unknown side-effects, then it seems like I should give paternalism very little weight when assessing a new intervention. My examples were a bit cherry-picked, but they do show that if you look at the tail of the distribution of interventions in terms of impact, they tend to be paternalistic.

However, I suspect there is something of a correlation between paternalism and outcomes: I suspect nearly all or all of the ineffectual/harmful interventions have been paternalistic - PlayPump etc. This is borne out by the fact that GD is better than most other anti-poverty interventions. Then you have to take in the risk of hidden costs/harms, as you say.

But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.

Comment author: BenHoffman 05 May 2017 09:08:45AM 5 points [-]

Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than things they do for themselves.

Deworming is an especially stark example. The mass drug administration program is to go to schools and force all the children, whether sick or healthy, to swallow giant poisonous pills that give them bellyaches, because we hope killing the worms in this way buys big life outcome improvements. GiveWell estimates the effect at about 1.5% of what studies say, but EV is still high. This could involve a lot of unnecessary harm too via unnecessary treatments.

By contrast, the less paternalistic Living Goods (a recent GiveWell "standout charity") sells deworming pills (at or near cost) so we should expect better targeting to kids sick with worms, and repeat business is more likely if the pills seem helpful.

I wrote a bit about this here: http://benjaminrosshoffman.com/effective-altruism-not-no-brainer/

Comment author: Halstead 05 May 2017 12:05:26PM *  2 points [-]

I agree that there might be instrumental concerns about paternalistic interventions, especially where we have limited information about how recipients act. However, these concerns do not always seem to be decisive about the effectiveness of interventions in terms of producing welfare. e.g. mandatory childhood vaccination is highly cost-effective notwithstanding its paternalism; same goes for tobacco taxes, mandatory seatbelt legislation, etc. When you look back at the most successful public health interventions, they have been at least as paternalistic as bednets and deworming - smallpox eradication, ORT, micronutrient fortification etc.

This shows that paternalism isn't that reliable a marker of lack of effectiveness. Wrt deworming, the issue seems to stem from features particular to deworming, rather than the fact that it is paternalistic.

Comment author: kbog  (EA Profile) 04 May 2017 01:36:12PM *  3 points [-]

Hmm, I don't see how donating goods to individuals even counts as paternalism in the first place, since you're not preventing them from making any choices they would have otherwise made. It's not like we are forcing them to buy bed nets with their own money, for instance, or even forcing them to use the bed nets. At most you could say that by not giving cash you are failing to maximize autonomy, but that's different from paternalism, and that's not even something that people who value autonomy usually think is an obligation, as far as I can tell. The only reference you gave of anyone who has brought up this idea comes from a couple of Facebook founders (also the link is not working so I can't see it).

Comment author: Halstead 04 May 2017 06:24:20PM *  2 points [-]

Hi, thanks for the comments.

  1. As you say, and as I make a point of saying in the article, there is an important difference between deworming/bednets and other things like tobacco taxes wrt paternalism. Still, it is plausible that deworming/bednets are relatively more paternalistic than cash, for the reasons I explain in the piece. Cash theoretically gives people more options. Provided goods are available in the market, it lets them choose what good they want to consume. Offering them bednets only gives them one option. There seems a clear sense in which this is paternalistic. Indeed, doing so is often justified by appealing to the irrationality of the recipients.

  2. For those who value autonomy, the fact that an option A produces extra autonomy always counts as a reason to do A, even if that reason does not entail an obligation. In the piece, I only talk about reasons, not obligations.

  3. I actually gave two references - one of them is to a political philosopher in a video that has been viewed 11,000 times. The other is to Dustin Moskovitz, who helped set up Good Ventures, which accounts for the vast majority of money within the EA community. His value assumptions are therefore disproportionately important. A quick Google of "paternalism givedirectly" yields lots of results, and in a recent Facebook thread in the EA group, paternalism was frequently raised as the reason to donate to GD. Also, the GiveWell piece I linked to - http://blog.givewell.org/2012/05/30/giving-cash-versus-giving-bednets/ - explicitly and up front discusses paternalism as a possible justification of GD. Numerous commenters on this piece extensively discuss the paternalism angle. Moreover, paternalism is independently one of the most obvious justifications for donating to GD.

(The links work for me.)

Comment author: MichaelPlant 31 March 2017 01:20:10PM 0 points [-]

That's helpful, thanks.

Incorporating your suggestion then, when people start to intuition joust, perhaps a better idea than the two I mentioned would be to try to debunk each other's intuitions.

Do people think this debunking approach can go all the way? If it doesn't, it looks like a more refined version of the problem still recurs.

Particularly interesting stuff about prioritarianism.

Comment author: Halstead 31 March 2017 02:06:10PM 1 point [-]

It's a difficult question when we can stop debunking and what counts as successful debunking. But this is just to say that moral epistemology is difficult. I have my own views about what can and can't be debunked. e.g. I don't see how you could debunk the intuition that searing pain is bad. But this is a massive issue.

Comment author: MichaelPlant 31 March 2017 11:11:59AM 3 points [-]

Thanks for this, Michelle. I don't think I've quite worked out how to present what I mean, which is probably why it isn't clear.

To try again, what I'm alluding to are argumentative scenarios where X and Y are disagreeing, and it's apparent to both of them that X knows what view he/she holds, what its weird implications are, and X still accepts the view as being, on balance, right.

Intuition jousting is where Y then says things like "but that's nuts!" Note Y isn't providing an argument now. It's a purely rhetorical move that uses social pressure ("I don't want people to think I'm nuts") to try to win the argument. I don't think conversations are very interesting or useful at this stage. Note also that X is able to turn this around on Y to say "but your view has different weird implications of its own, and that's more nuts!" It's like a joust because the two people are just testing who's able to hold on to their view under the pressure from the other.

I suppose Y could counter-counter attack X and say "yeah, but more people who have thought about this deeply agree with me". It's not clear what logical (rather than rhetorical) force this adds. It seems like 'deeply' would, in any case, be doing most of the work in that scenario.

I'm somewhat unsure how to think about moral truth here. However, if you do think there is one moral truth to be found, I would think you would really want to understand people who disagree with you in case you might be wrong. As a practical matter, this speaks strongly in favour of engaging in considerate, polite and charitable disagreement ("intuition exchanging") rather than intuition jousting anyway. From my anecdata, there are both types in the EA community, and it's only the jousting variety I object to.

Comment author: Halstead 31 March 2017 12:23:47PM 1 point [-]

Appealing to rhetoric in this way is, I agree, unjustifiable. But I thought there might be a valid point that tacked a bit closer to the spirit of your original post. There is no agreed methodology in moral philosophy, which I think explains a lot of persisting moral disagreement. People eventually start just trading which intuitions they think are the most plausible - "I'm happy to accept the repugnant conclusion, not the sadistic one" etc. But intuitions are ten a penny so this doesn't really take us very far - smart people have summoned intuitions against the analytical truth that betterness is transitive.

What we really need is an account of which moral intuitions ought to be held on to and which ones we should get rid of. One might appeal to cognitive biases, to selective evolutionary debunking arguments, and so on. e.g...

  1. One might resist prioritarianism by noting that people seamlessly shift from accepting that resources have diminishing marginal utility to accepting that utility has diminishing marginal utility. People have intuitions about diminishing utility with respect to that same utility, which makes no sense - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.5213&rep=rep1&type=pdf.

  2. Debunk an anti-aggregative view by appealing to people's failure to grasp large numbers.

  3. Debunk an anti-incest norm by noting that it is explained by evolutionary selective pressure rather than apprehension of independent normative truth.

You might want to look at Huemer's stuff on intuitionism. - https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/revisionary-intuitionism/EE5C8F3B9F457168029C7169BA1D62AD
