Comment author: BenHoffman 21 May 2017 11:29:59PM 4 points [-]

Our prior strongly punishes MIRI. While the mean of its evidence distribution is 2,053,690,000 HEWALYs/$10,000, the posterior mean is only 180.8 HEWALYs/$10,000. If we set the prior scale parameter to larger than about 1.09, the posterior estimate for MIRI is greater than 1,038 HEWALYs/$10,000, thus beating 80,000 Hours.

This suggests that it might be good in the long run to have a process that learns what prior is appropriate, e.g. by going back and seeing what prior would have best predicted previous years' impact.
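For concreteness, here is a minimal sketch of the kind of prior-scale sensitivity described above, assuming a simple log-normal prior combined with a log-normal evidence distribution on the log scale. This is not the Oxford Prioritisation Project's actual model: the function name and all parameter values below are made up for illustration, and the printed numbers will not reproduce the figures quoted above.

```python
import numpy as np

def posterior_mean_lognormal(prior_mu, prior_sigma, evidence_mu, evidence_sigma):
    """Combine a log-normal prior with a log-normal evidence distribution.

    All parameters are on the natural-log scale. Returns the posterior mean
    back on the original (HEWALYs/$10,000) scale.
    """
    prior_prec = 1.0 / prior_sigma ** 2          # precision of the prior
    ev_prec = 1.0 / evidence_sigma ** 2          # precision of the evidence
    # Standard precision-weighted combination of two normals on the log scale.
    post_mu = (prior_prec * prior_mu + ev_prec * evidence_mu) / (prior_prec + ev_prec)
    post_var = 1.0 / (prior_prec + ev_prec)
    # Mean of a log-normal with parameters (post_mu, post_var).
    return np.exp(post_mu + post_var / 2.0)

# Illustrative only: how the posterior estimate grows as the prior scale widens,
# given an extreme evidence mean (~2 billion HEWALYs/$10,000) with huge uncertainty.
for prior_sigma in [0.75, 1.0, 1.09, 1.5]:
    est = posterior_mean_lognormal(prior_mu=0.0, prior_sigma=prior_sigma,
                                   evidence_mu=np.log(2.05e9), evidence_sigma=3.0)
    print(f"prior scale {prior_sigma:.2f}: posterior mean ~ {est:,.0f} HEWALYs/$10,000")
```

The direction of the effect is the point: the wider the prior scale, the less the extreme evidence mean is shrunk toward the prior, which is why a threshold on the scale parameter can flip the ranking against 80,000 Hours.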

Comment author: BenHoffman 21 May 2017 11:26:35PM *  2 points [-]

Regrettably, we were not able to choose shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those which had highest expected impact given our evidence by the time of the shortlist deadline. This proved too difficult, however, so we resorted to choosing the shortlist based on a mixture of our hunches about expected impact and the intellectual value of finding out more about an organisation and comparing it to the others.

[...]

Later, we realised that understanding the impact of the Good Food Institute was too difficult, so we replaced it with Animal Charity Evaluators on our shortlist. Animal Charity Evaluators finds and advocates for highly effective opportunities to improve the lives of animals.

If quantitative models were used for these decisions I'd be interested in seeing them.

Comment author: vollmer 11 May 2017 07:30:20PM *  5 points [-]

I agree with those concerns.

In addition, some people might perceive the "guide dogs vs. trachoma surgeries" example as ableist, or might think that EAs are suggesting that governments spend less on handicapped people and more on foreign aid. (This is a particularly significant issue in Germany, where there have been lots of protests by disability rights advocates against Singer, also more recently when he gave talks about EA.)

In fact, one of the top Google hits for "guide dog vs trachoma surgery" is this:

The philosopher says funding should go toward prevention instead of guide-dog training. Activists for the blind, of course, disagree.

For these reasons, I suggest not using the guide dog example at all anymore.

The above article also makes the following, interesting point:

Many people are able to function in society at a much higher level than ever before because of service dogs and therapy dogs. You would think that’s a level of utility that would appeal to Singer, but he seems to have a blind spot of his own in that respect.

This suggests that both guide dogs and trachoma surgeries cause significant flow-through effects. All of these points combined might decrease the effectiveness difference from 1000x to something around 5x-50x (see also Why Charities Don't Differ Astronomically in Cost-Effectiveness).

Comment author: BenHoffman 13 May 2017 02:22:40AM 1 point [-]

On the ableism point, my best guess is that the right response is to figure out the substance of the criticism. If we disagree, we should admit that openly, and forgo the support of people who do not in fact agree with us. If we agree, then we should account for the criticism and adjust both our beliefs and statements. Directly optimizing on avoiding adverse perceptions seems like it would lead to a distorted picture of what we are about.

Comment author: PeterSinger 12 May 2017 11:31:18PM 6 points [-]

I don't understand the objection about it being "ableist" to say funding should go towards preventing people becoming blind rather than training guide dogs.

If "ableism" is really supposed to be like racism or sexism, then we should not regard it as better to be able to see than to have the disability of not being able to see. But if people who cannot see are no worse off than people who can see, why should we even provide guide dogs for them? On the other hand, if -- more sensibly -- disability activists think that people who are unable to see are at a disadvantage and need our help, wouldn't they agree that it is better to prevent many people -- say, 400 -- experiencing this disadvantage than to help one person cope a little better with the disadvantage? Especially if the 400 are living in a developing country and have far less social support than the one person who lives in a developed country?

Can someone explain to me what is wrong with this argument? If not, I plan to keep using the example.

Comment author: BenHoffman 13 May 2017 02:18:48AM 1 point [-]

If I try to steelman the argument, it comes out something like:

Some people, when they hear about the guide dog vs. trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than "fixing" them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.

Comment author: Kerry_Vaughan 08 May 2017 05:33:28PM 3 points [-]

Hey, Ben. Just wanted to note that I found this very helpful. Thank you.

Comment author: BenHoffman 08 May 2017 06:33:38PM 2 points [-]

I imagine this has been stressful for all sides, and I do very much appreciate you continuing to engage anyway! I'm looking forward to seeing what happens in the future.

Comment author: BenHoffman 08 May 2017 04:13:54PM *  1 point [-]

Thanks for writing this! It's really helpful to have the basics of what the medical community knows.

I've been trying to figure out how to help in ways that respect neurodiversity. Psychosis and mania, like other mental conditions, aren't just the result of some exogenous force - they're the brain doing too little or too much of some particular things it was already doing.

So someone going through a psychotic episode might at times have delusions that seem to their friends to be genuinely poetic, insightful, and important, and this impression might be right. And yet, they're still having trouble tracking what's real and what's just a thought they had, worse at caring for themselves, and really need to eat and get a good night's sleep and friends to help them remember to do this.

Comment author: Kerry_Vaughan 27 April 2017 08:35:57PM *  6 points [-]

But the right thing to do, if you want to persuade people to delegate their giving decisions to Nick Beckstead, is to make a principled case for delegating giving decisions to Nick Beckstead.

I just want to note that we have tried to make this case.

The fund page for the Long-Term Future and EA Community funds includes an extensive list of organizations Nick has funded in the past and of his online writings.

In addition, our original launch post contained the following section:

Strong track record for finding high-leverage giving opportunities: the EA Giving Group DAF

The initial Long-Term Future and Effective Altruism Community funds will be managed by Nick Beckstead, a Program Officer at the Open Philanthropy Project who has helped advise a large private donor on donation opportunities for several years. The donor-advised fund (DAF) Nick manages was an early funder of CSER, FLI, Charity Entrepreneurship and Founders Pledge. A list of Nick’s past funding is available in his biography on this website.

We think this represents a strong track record, although the Open Philanthropy Project’s recent involvement in these areas may make it harder for the fund to find promising opportunities in the future.

Donors can give to the DAF directly by filling out this form and waiting for Nick to contact you. If you give directly the minimum contribution is $5,000. If you give via the EA Funds there is no minimum contribution and you can give directly online via credit/debit card, ACH, or PayPal. Nick's preference is that donors use the EA Funds to contribute.

Disclaimer: Nick Beckstead is a trustee of CEA. CEA has been a large recipient of the EA Giving Group DAF's funding in the past and is a potential future recipient of money allocated to the Movement Building fund.

My guess is that you feel that we haven't made the case for delegating to Nick as strongly or as prominently as we ought to. If so, I'd love some more specific feedback on how we can improve.

Comment author: BenHoffman 08 May 2017 03:59:55PM *  4 points [-]

Kerry,

I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what's going on. The launch post here on the Forum was also very clear.

My worry is that this isn't at all what someone attracted by EA's public image would be expecting, since so much of the material is about experimental validation and audit.

I think that there's an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There's a potential pitch centered around: "Future people are morally relevant, neglected, and extremely numerous. Saving the world isn't just a high-minded phrase - here are some specific ways you could steer the course of the future a lot." A lot of Nick Bostrom's early public writing is like this, and a lot of people were persuaded by this sort of thing to try to do something about x-risk. I think there's a lot of potential value in figuring out how to bring more of those sorts of people together, and - when there are promising things in that domain to fund - help them coordinate to fund those things.

In the meantime, it does make sense to offer a fund oriented around the far future, since many EAs do share those preferences. I'm one of them, and think that Nick's first grant was a promising one. It just seems off to me to aggressively market it as an obvious, natural thing for someone who's just been through the GWWC or CEA intro material to put money into. I suspect that many of them would have valid objections that are being rhetorically steamrollered, and a strategy of explicit persuasion has a better chance of actually encountering those objections, and maybe learning from them.

I recognize that I'm recommending a substantial strategy change, and it would be entirely appropriate for CEA to take a while to think about it.

Comment author: remmelt  (EA Profile) 27 April 2017 10:07:51AM *  4 points [-]

I thought this was a really useful framework for looking at the system level. Thank you for posting this!

Quick points after just reading through it:

1) Your phrasing seems to convey too much certainty to me / flows too neatly into a coherent story. I'm not sure whether you did this to bring your points across more strongly or because that's the confidence level you have in your arguments.

2)

If you want to acquire control over something, that implies that you think you can manage it more sensibly than whoever is in control already.

To me, it appears that you view Holden's position of influence at OpenAI as something like a zero-sum alpha investment decision (where his amount of control replaces someone else's commensurate control). I don't see why Holden also couldn't have a supportive role where his feedback and different perspectives can help OpenAI correct for aspects they've overlooked.

3) Overall principle I got from this: correct for model error through external data and outside views.

Comment author: BenHoffman 08 May 2017 03:45:42PM 1 point [-]

I don't see why Holden also couldn't have a supportive role where his feedback and different perspectives can help OpenAI correct for aspects they've overlooked.

I agree this can be the case, and that in the optimistic scenario this is a large part of OpenAI's motivation.

Comment author: AGB 07 May 2017 11:42:00AM 2 points [-]

I found the post; I was struggling before because it's actually part of their career guide rather than a blog post.

Comment author: BenHoffman 08 May 2017 03:41:15PM 0 points [-]

Thanks! On a first read, this seems pretty clear and much more like the sort of thing I'd hope to see in introductory material.

Comment author: AGB 06 May 2017 01:55:28PM *  1 point [-]

Thanks for digging up those examples.

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

I think 'many methods of doing good fail' has wide applications outside of Global Poverty, but I acknowledge the wider point you're making.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

This is a true dynamic, but to be specific about one of the examples I had in mind: A little before your post was written I was helping someone craft a general 'intro to EA' that they would give at a local event, and we both agreed to make the heterogeneous nature of the movement central to the mini speech without even discussing it. The discussion we had was more about 'which causes and which methods of doing good should we list given limited time', rather than 'which cause/method would provide the most generically effective pitch'.

We didn't want to do the latter for the reason I already gave; coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the 'core' EAs in the room that was a very real risk.

Comment author: BenHoffman 06 May 2017 08:28:31PM *  1 point [-]

There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.
