Comment author: PeterSinger 12 May 2017 11:31:18PM 6 points [-]

I don't understand the objection that it is "ableist" to say funding should go towards preventing people from becoming blind rather than training guide dogs.

If "ableism" is really supposed to be like racism or sexism, then we should not regard it as better to be able to see than to have the disability of not being able to see. But if people who cannot see are no worse off than people who can see, why should we even provide guide dogs for them? On the other hand, if -- more sensibly -- disability activists think that people who are unable to see are at a disadvantage and need our help, wouldn't they agree that it is better to prevent many people -- say, 400 -- experiencing this disadvantage than to help one person cope a little better with the disadvantage? Especially if the 400 are living in a developing country and have far less social support than the one person who lives in a developed country?

Can someone explain to me what is wrong with this argument? If not, I plan to keep using the example.

Comment author: BenHoffman 13 May 2017 02:18:48AM 1 point [-]

If I try to steelman the argument, it comes out something like:

Some people, when they hear about the guide dog vs. trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than "fixing" them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.

Comment author: Kerry_Vaughan 08 May 2017 05:33:28PM 3 points [-]

Hey, Ben. Just wanted to note that I found this very helpful. Thank you.

Comment author: BenHoffman 08 May 2017 06:33:38PM 2 points [-]

I imagine this has been stressful for all sides, and I do very much appreciate you continuing to engage anyway! I'm looking forward to seeing what happens in the future.

Comment author: BenHoffman 08 May 2017 04:13:54PM *  1 point [-]

Thanks for writing this! It's really helpful to have the basics of what the medical community knows.

I've been trying to figure out how to help in ways that respect neurodiversity. Psychosis and mania, like other mental conditions, aren't just the result of some exogenous force - they're the brain doing too little or too much of some particular things it was already doing.

So someone going through a psychotic episode might at times have delusions that seem to their friends to be genuinely poetic, insightful, and important, and this impression might be right. And yet they may still have trouble tracking what's real and what's just a thought they had, be worse at caring for themselves, and really need to eat, get a good night's sleep, and have friends help them remember to do this.

Comment author: Kerry_Vaughan 27 April 2017 08:35:57PM *  6 points [-]

But the right thing to do, if you want to persuade people to delegate their giving decisions to Nick Beckstead, is to make a principled case for delegating giving decisions to Nick Beckstead.

I just want to note that we have tried to make this case.

The fund page for the Long-Term Future and EA Community funds includes an extensive list of organizations Nick has funded in the past and of his online writings.

In addition, our original launch post contained the following section:

Strong track record for finding high-leverage giving opportunities: the EA Giving Group DAF

The initial Long-Term Future and Effective Altruism Community funds will be managed by Nick Beckstead, a Program Officer at the Open Philanthropy Project who has helped advise a large private donor on donation opportunities for several years. The donor-advised fund (DAF) Nick manages was an early funder of CSER, FLI, Charity Entrepreneurship and Founders Pledge. A list of Nick’s past funding is available in his biography on this website.

We think this represents a strong track record, although the Open Philanthropy Project’s recent involvement in these areas may make it harder for the fund to find promising opportunities in the future.

Donors can give to the DAF directly by filling out this form and waiting for Nick to contact you. If you give directly, the minimum contribution is $5,000. If you give via the EA Funds, there is no minimum contribution, and you can give directly online via credit/debit card, ACH, or PayPal. Nick's preference is that donors use the EA Funds to contribute.

Disclaimer: Nick Beckstead is a trustee of CEA. CEA has been a large recipient of the EA Giving Group DAF's funding in the past and is a potential future recipient of money allocated to the Movement Building fund.

My guess is that you feel that we haven't made the case for delegating to Nick as strongly or as prominently as we ought to. If so, I'd love some more specific feedback on how we can improve.

Comment author: BenHoffman 08 May 2017 03:59:55PM *  5 points [-]

Kerry,

I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what's going on. The launch post here on the Forum was also very clear.

My worry is that this isn't at all what someone attracted by EA's public image would be expecting, since so much of the material is about experimental validation and audit.

I think that there's an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There's a potential pitch centered around: "Future people are morally relevant, neglected, and extremely numerous. Saving the world isn't just a high-minded phrase - here are some specific ways you could steer the course of the future a lot." A lot of Nick Bostrom's early public writing is like this, and a lot of people were persuaded by this sort of thing to try to do something about x-risk. I think there's a lot of potential value in figuring out how to bring more of those sorts of people together, and - when there are promising things in that domain to fund - help them coordinate to fund those things.

In the meantime, it does make sense to offer a fund oriented around the far future, since many EAs do share those preferences. I'm one of them, and think that Nick's first grant was a promising one. It just seems off to me to aggressively market it as an obvious, natural thing for someone who's just been through the GWWC or CEA intro material to put money into. I suspect that many of them would have valid objections that are being rhetorically steamrollered, and a strategy of explicit persuasion has a better chance of actually encountering those objections, and maybe learning from them.

I recognize that I'm recommending a substantial strategy change, and it would be entirely appropriate for CEA to take a while to think about it.

Comment author: remmelt  (EA Profile) 27 April 2017 10:07:51AM *  4 points [-]

I thought this was a really useful framework to look at the system-level. Thank you for posting this!

Quick points after just reading through it:

1) Your phrasing seems to convey too much certainty to me / flows too readily into a coherent story. I'm not sure if you did this to bring your points across more strongly or because that's the confidence level you have in your arguments.

2)

If you want to acquire control over something, that implies that you think you can manage it more sensibly than whoever is in control already.

To me, it appears that you view Holden's position of influence at OpenAI as something like a zero-sum alpha investment decision (where his amount of control replaces someone else's commensurate control). I don't see why Holden couldn't also have a supportive role where his feedback and different perspectives can help OpenAI correct for aspects they've overlooked.

3) Overall principle I got from this: correct for model error through external data and outside views.

Comment author: BenHoffman 08 May 2017 03:45:42PM 1 point [-]

I don't see why Holden couldn't also have a supportive role where his feedback and different perspectives can help OpenAI correct for aspects they've overlooked.

I agree this can be the case, and that in the optimistic scenario this is a large part of OpenAI's motivation.

Comment author: AGB 07 May 2017 11:42:00AM 2 points [-]

I found the post; I was struggling before because it's actually part of their career guide rather than a blog post.

Comment author: BenHoffman 08 May 2017 03:41:15PM 0 points [-]

Thanks! On a first read, this seems pretty clear and much more like the sort of thing I'd hope to see in introductory material.

Comment author: AGB 06 May 2017 01:55:28PM *  1 point [-]

Thanks for digging up those examples.

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The PlayPump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

I think 'many methods of doing good fail' has wide applications outside of Global Poverty, but I acknowledge the wider point you're making.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

That dynamic is real, but to be specific about one of the examples I had in mind: a little before your post was written, I was helping someone craft a general 'intro to EA' talk that they would give at a local event, and we both agreed to make the heterogeneous nature of the movement central to the mini speech without even discussing it. The discussion we had was more about 'which causes and which methods of doing good should we list given limited time', rather than 'which cause/method would provide the most generically effective pitch'.

We didn't want to do the latter for the reason I already gave: coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the 'core' EAs in the room, that was a very real risk.

Comment author: BenHoffman 06 May 2017 08:28:31PM *  1 point [-]

There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.

Comment author: Halstead 05 May 2017 05:08:02PM 1 point [-]

I think I agree with maybe having a sceptical prior for paternalistic interventions, but I'm unsure about how strong such a prior would be. The information on what has worked in the past would determine the prior I should have when assessing a new intervention. If I looked at all past public health interventions and paternalism was not correlated at all with quality of outcome, even correcting for reasonable unknown side-effects, then it seems like I should give paternalism very little weight when assessing a new intervention. My examples were a bit cherry-picked, but they do show that if you look at the tail of the distribution of interventions in terms of impact, they tend to be paternalistic.

However, I suspect there is something of a correlation between paternalism and outcomes: I suspect nearly all or all of the ineffectual/harmful interventions have been paternalistic - the PlayPump etc. This is borne out by the fact that GiveDirectly is better than most other anti-poverty interventions. Then you have to take in the risk of hidden costs/harms, as you say.

But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.

Comment author: BenHoffman 05 May 2017 08:45:53PM *  1 point [-]

But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.

It's not obvious to me that the "near" bias about one's own health is generically worse than our "far" bias about what to do about the health of people far away. For instance, we might have a bias towards action that's not shared by, e.g., the children who feel sick after their worm chemo, or who get bitten by mosquitoes through their supposedly mosquito-proof bednets. (I'm not sure how bad either of these problems is relative to the benefits, and that's the problem - we really don't know. I'll note that Living Goods does sell some deworming pills, so at least some people in poor countries think it's in their interest to take them.)

It's also not obvious that positive externalities are generically more likely with paternalistic interventions. For instance, in a recent Reddit AMA, GiveDirectly basic income recipients reported that there was much less social conflict in their community once people started receiving basic income - they started imposing fewer costs on each other once they were more secure in meeting their basic needs.

It does seem to me like each of these considerations - if it points in the right direction for any given comparison - could contribute to overcoming the paternalism objection.

Comment author: BenHoffman 05 May 2017 08:33:59PM *  1 point [-]

It sounds like we might be coming close to agreement. The main thing I think is important here is taking seriously the notion that paternalism is evidence about the other things we care about, and thus an important instrumental proxy goal, not just something we have intrinsic preferences about. More generally, the thing I'm pushing back against is treating every moral consideration as though it were purely an intrinsic value to be weighed against other intrinsic values.

I see people with a broadly utilitarian outlook doing this a lot, perhaps because people from other moral perspectives don't have a lot of practice grounding their moral intuitions in a way that is persuasive to utilitarians. Autonomy in particular is something where we need to distinguish purely intrinsic considerations (e.g. factory farmed animals are unhappy because they have little physical autonomy) from instrumental pragmatic considerations (e.g. interventions that give poor people more autonomy preserve information by letting them use local knowledge that we do not have, while paternalistic interventions overwrite local information).

Thus, we should think about requiring higher impact for paternalistic interventions as building in a margin for error, not just outweighing the anti-paternalism intuition. If a paternalistic intervention has strong evidence of a large benefit, it makes sense to describe it as overcoming the paternalism objection, but not rebutting it - we should still be skeptical relative to a nonpaternalistic intervention with the same evidence; it's just that sometimes we should intervene anyway.

Comment author: Halstead 05 May 2017 12:05:26PM *  2 points [-]

I agree that there might be instrumental concerns about paternalistic interventions, especially where we have limited information about how recipients act. However, these concerns do not always seem to be decisive about the effectiveness of interventions in terms of producing welfare: e.g. mandatory childhood vaccination is highly cost-effective notwithstanding its paternalism; the same goes for tobacco taxes, mandatory seatbelt legislation, etc. When you look back at the most successful public health interventions, they have been at least as paternalistic as bednets and deworming - smallpox eradication, ORT, micronutrient fortification, etc.

This shows that paternalism isn't that reliable a marker of lack of effectiveness. Wrt deworming, the issue seems to stem from features particular to deworming, rather than the fact that it is paternalistic.

Comment author: BenHoffman 05 May 2017 03:42:53PM 1 point [-]

You're assuming the premise here a bit - that the data collected don't leave out important negative outcomes. In the particular cases you mentioned (tobacco taxes, mandatory seatbelt legislation, smallpox eradication, ORT, micronutrient fortification), my sense is that in most cases the benefits have been very strong - strong enough to outweigh a skeptical prior on paternalist interventions. But that doesn't show that we shouldn't have the skeptical prior in the first place. Seeing Like a State documents some failures; we should think of those too.
