Comment author: Cornelius (EA Profile) 11 May 2017 12:43:46AM 9 points

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and organized a workshop where we covered all the philosophical objections to Effective Altruism. All were fairly straightforward to address except one, which, in being addressed, seemed to upend how many participants viewed EA, given the image of it they had held until then. That objection is: Effective Altruism is not that effective.

There is a lot to be said for this objection, and I very strongly recommend anyone who calls themselves an EA to read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the wormwars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming-isn't-that-effective" controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less certain charities like ACE's recommendations. So I certainly don't mean to dissuade anyone from donating in this comment. Reasoning under uncertainty is a thing and you can see these two recent posts if you desire insight into how an EA might try to go about it effectively.

The take-home of this, though, is the same as the three main points raised by the OP. If it had been made clear to us from the get-go what mechanisms determine how much impact an individual has with their donation to an EA-recommended charity, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after we addressed this concern and set straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as the guide-dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") amounted to false advertising. The words "slight disillusionment" come to mind, given these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by OP:

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

Btw, the "Scope insensitive" link does not seem to work, I'm afraid. (Update: thanks for fixing!)

Comment author: DonyChristie 11 May 2017 02:44:34PM 1 point

Overstatement seems to be selected for when (1) evaluators like GiveWell are deferred to rather than questioned, and (2) you want to market that faithful deference to others.

Comment author: DonyChristie 11 May 2017 12:02:49AM 2 points

One last thing that was emphasised multiple times was the value of asking. Most of us still don't do it enough. If one doesn't understand something, if we don't know where to start, if we want people to support us, there's one simple trick: just ask. We tend to feel like there is some social cost to asking, but simply asking can provide extremely valuable support and only has downsides if we do it too often. So far, most of us don't do it enough.

Great point! Relatedly, for next EAG I want to have a small set of questions to ask people (last time it was "What do you think of insect suffering?"). I would highly recommend others come in with thought-provoking questions they want answered. Brainstorming:

  1. What is one trait/idea/action you wish the EA community had/understood/took? (Or substitute "people" for "EA community".)
  2. What is one book you would recommend and why?
  3. (Takes out $5.) Where should I donate this?
  4. What do you need help with in your life?
  5. What are you most uncertain about?
  6. What would make you change your cause prioritization?
  7. Who do you most look up to?

Warning: At EAG Berkeley at least, not everyone you meet will be able to answer all of these questions because they're newer to the movement - adapt accordingly.

Perhaps print off some business cards (even crude, informal ones), with whatever information you wish to impart to people - contact or otherwise.

Try to do a summary of the day before you go to sleep (or just of the trip). I made a post on Facebook and tagged some of the people I met (you will have to friend them first to tag them).

Any other ideas? :-)

Comment author: BenHoffman 05 May 2017 09:08:45AM 4 points

Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than the things they do for themselves.

Deworming is an especially stark example. The mass drug administration program is to go to schools and have all the children, whether sick or healthy, swallow giant poisonous pills that give them bellyaches, because we hope that killing the worms this way buys big life-outcome improvements. GiveWell estimates the effect at about 1.5% of what the studies say, but the expected value is still high. This could also involve a lot of unnecessary harm, via unnecessary treatments.

By contrast, the less paternalistic Living Goods (a recent GiveWell "standout charity") sells deworming pills (at or near cost) so we should expect better targeting to kids sick with worms, and repeat business is more likely if the pills seem helpful.

I wrote a bit about this here:

Comment author: DonyChristie 07 May 2017 01:30:31AM 0 points

'Do No Harm - on Problem Solving and Design' talks about fixer solutions vs. solver solutions. Its key points:

  • When solving a problem, are you looking for a fix or are you looking for the cause?
  • For a complex system to be resilient, it must (in any practical sense) be comprised of a collection of simpler, resilient parts.
  • The aim of a resilient part is to normalize (make consistent) the things that matter, and to minimize (dampen or hide) the things that don't.
  • If you can't solve an existing problem, how do you know you aren't causing more?
  • Resilience invariably relies upon feedback loops, and so the variables involved must be free to move through some critical range or the feedback is broken and the resilience is lost.

Its concluding paragraph:

The bottom line is: there is no escaping the need to make best effort to understand the whys and wherefores. To skip this and go straight for the fix is not humbly admitting the system is too complex to understand, it is arrogantly assuming we understand it well enough to fix it without breaking it. The humble thing to do is less, not more--to respect the difficulty in keeping any complex system stable, and the degree to which doing so relies upon a best effort by, and freedom of, each component to self-regulate.

Highly recommended read! :D

Comment author: DonyChristie 02 May 2017 07:00:19AM 3 points

The Life You Can Save should become cause-neutral and recommend effective interventions that are from cause areas other than global poverty (e.g. animal welfare charities like The Humane League, or global catastrophic risk charities like The Ploughshares Fund).


Comment author: Peter_Hurford (EA Profile) 23 April 2017 07:16:06PM 1 point

I don't understand the question.

Comment author: DonyChristie 24 April 2017 09:38:50PM 0 points

Allocating grants according to a ranked preference vote of an arbitrary number of people (and having them write up their arguments): what is the optimal number here? Where is the inflection point at which adding more people decreases the quality of the grants?

On a third reading, I realize I somewhat misconstrued "three fund managers" as "three fund managers per fund" rather than "the three fund managers we have right now (Nick, Elie, Lewis)", but the possibility is still interesting in any variation.
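Neither comment pins down an aggregation rule, so here is a minimal, purely illustrative sketch of how N rankers' ballots could be pooled into a grant allocation. Everything specific here is an assumption: Borda-count scoring and a proportional payout are chosen for simplicity, and the charity names and budget are made up.

```python
def borda_allocate(ballots, budget):
    """Pool ranked-preference ballots into a proportional grant allocation.

    Each ballot ranks candidates from most to least preferred. A candidate
    ranked i-th (0-indexed) among n candidates scores n - 1 - i points;
    the budget is then split in proportion to total scores.
    """
    scores = {}
    for ballot in ballots:
        n = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - rank)
    total = sum(scores.values())
    return {c: budget * s / total for c, s in scores.items()}

# Three hypothetical fund managers each submit a ranking.
ballots = [
    ["AMF", "SCI", "GiveDirectly"],
    ["SCI", "AMF", "GiveDirectly"],
    ["AMF", "GiveDirectly", "SCI"],
]
allocation = borda_allocate(ballots, 90_000)
# AMF scores 5, SCI 3, GiveDirectly 1, so the $90k splits 50k / 30k / 10k.
```

Under a scheme like this, adding more rankers mainly smooths out idiosyncratic preferences; where the quality of grants starts to decline with larger N is exactly the empirical question the comment raises, and the sketch deliberately leaves it open.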

Comment author: RyanCarey 23 April 2017 11:08:18PM 18 points

Some feedback on your feedback (I've only quickly read your post once, so take it with a grain of salt):

  • I think that this is more discursive than it needs to be. AFAICT, you're basically arguing that decision-making and trust in the EA movement are over-concentrated in OpenPhil.
  • If it was a bit shorter, then it would also be easier to run it by someone involved with OpenPhil, which prima facie would be at least worth trying, in order to correct any factual errors.
  • It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?
  • So maybe this could have been split into two posts?
  • Maybe there are more upsides to having somewhat concentrated decision-making than you let on? Perhaps cause prioritization will be better? Since EA Funds is a movement-wide scheme, perhaps reputational trust is extra important here, and the diversification would come from elsewhere? Perhaps the best decision-makers will naturally come to work on this full-time.

You may still be right, though I would want some more balanced analysis.

Comment author: DonyChristie 24 April 2017 06:43:58AM 8 points

I enjoyed the SSC-style length and thought it helpful in fully explicating his arguments. :) It may be the case that an artificially-shortened version of the post would not be as listened-to. But perhaps a TL;DR could go at the top.

Comment author: Peter_Hurford (EA Profile) 22 April 2017 02:08:29AM 5 points

Or maybe allocate grants according to a ranked preference vote of the three fund managers, plus have them all individually and publicly write up their reasoning and disagreements? I'd like that a lot.

Comment author: DonyChristie 23 April 2017 07:04:02PM 0 points

Or maybe allocate grants according to a ranked preference vote of the three fund managers, plus have them all individually and publicly write up their reasoning and disagreements?

Serious question: What do you think of N fund managers in your scenario?

Comment author: DonyChristie 03 April 2017 03:54:09PM 0 points

This font is really hard to read.

Comment author: Raemon 25 March 2017 10:32:07PM 7 points

Thanks for doing this!

My sense is that what people are missing is a set of social incentives to get started. Looking at any one of these, they feel overwhelming; they feel like they require skills that I don't have. It feels like if I start working on it, then EITHER I'm blocking someone who's better qualified from working on it, OR someone who's better qualified will do it anyway and my efforts will be futile.

Or, in the case of research, my bad quality research will make it harder for people to find good quality research.

Or, in the case of something like "start one of the charities Givewell wants people to start", it feels like... just, a LOT of work.

And... this is all true. Kind of. But it's also true that the way people get good at things is by doing them. And I think it's sort of necessary for people to throw themselves into projects they aren't prepared for, as long as they can get tight feedback loops that enable them to improve.

I have half-formed opinions about what's needed to resolve that, which can be summarized as "better triaged mentorship." I'll try to write up more detailed thoughts soon.

Comment author: DonyChristie 27 March 2017 06:48:24PM 0 points

Please do! Have you gotten started yet? :-) #humancommitmentdevice

In response to Open Thread #36
Comment author: DonyChristie 15 March 2017 11:00:20PM 2 points

I've noticed a sizeable minority of posts in this forum have a font that is difficult for me to read. It's the second most-used font after the font in the OP. Does anyone know what it is?

I would recommend not using that font, personally.
