In response to 2017 LEAN Statement
Comment author: DonyChristie 28 June 2017 10:31:36PM 2 points [-]

Local group seeding and activation.

Thanks for this link. I may have raised this in a private channel, but I want to point out that, based on anecdotal experience, I think LEAN & CEA shouldn't seed groups without taking the time to make sure the groups are mentored, managed by motivated individuals, and grown to sustainability. I found my local group last year essentially dilapidated. I felt a responsibility to run it but was mostly unsuccessful in establishing contact to get help managing it until some time in 2017. I predict these kinds of problems will diminish at least somewhat now that LEAN & Rethink have full-time staff, the Mentoring program, more group calls, etc. :)

Comment author: SteveGreidinger 22 June 2017 05:07:20AM 0 points [-]

This is a good start about some of the issues, but there is a need to bulk it up with information directly from neuroscientists.

For instance, some very senior people in the Stanford neuroscience community think that an essential difference between animals and people may be that the astrocytes, "helper cells," are so very different. Among many other things, astrocytes help to create and destroy synapses.

Neuroscientists also routinely run experiments on mice, and a few have very sophisticated answers to ethical questions about what they do.

There are a lot of topics in EA ethics that benefit from a grounding in neuroscience and neuroethics. Both of these fields also contain many EA opportunities themselves. If money is being put down, then it's time to add some expert scientific opinion.

Comment author: DonyChristie 22 June 2017 04:36:57PM 0 points [-]

Be the change you want to see in this world. You are clearly motivated and knowledgeable enough about the matter to try emailing some neuroscientists yourself. :)

Comment author: DonyChristie 17 June 2017 01:24:45AM 2 points [-]

How can we further coordinate our action on this? Reward people who tweet?

Comment author: DonyChristie 02 June 2017 07:51:46PM 0 points [-]

or we could add a long-term future fund that focused on areas other than AI-Safety.

+1 for differentiation. A Fund specifically for AI Safety would probably have demand - I'd donate. Other Funds for other specific GCRs could be created if there's enough demand too.

A mild consideration against: there may be funding opportunities in the Long Term Future area that would benefit both AI Safety and the other GCRs, such as the cross-disciplinary Global Catastrophic Risk Institute, and splitting the fund could make these harder to fund.

Comment author: Cornelius  (EA Profile) 11 May 2017 12:43:46AM *  9 points [-]

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and held a workshop where we covered the main philosophical objections to Effective Altruism. All of the objections were fairly straightforward to address except one, which, in addressing it, seemed to upend how many participants viewed EA, given the image of EA they had held until then. That objection: Effective Altruism is not that effective.

There is a lot to be said for this objection, and I highly recommend anyone who calls themselves an EA to read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of wormwars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming isn't that effective" controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less certain charities like ACE's recommendations. So I certainly don't mean to dissuade anyone from donating with this comment. Reasoning under uncertainty is a thing, and you can see these two recent posts if you want insight into how an EA might try to go about it effectively.

The take-home here, though, is the same as the three main points raised by the OP. If it had been made clear to us from the get-go what mechanisms determine how much impact an individual's donation to an EA-recommended charity has, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after I addressed this concern and set straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as through the guide dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The words "slight disillusionment" come to mind, given that these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by OP:

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

Btw, the scope insensitivity link does not seem to work, I'm afraid. (Update: Thanks for fixing!)

Comment author: DonyChristie 11 May 2017 02:44:34PM 1 point [-]

Overstatement seems to be selected for when 1) evaluators like GiveWell are deferred to rather than questioned, and 2) you want to market that faithful deference to others.

Comment author: DonyChristie 11 May 2017 12:02:49AM 2 points [-]

One last thing that was emphasised multiple times was the value of asking. Most of us still don't do it enough. If one doesn't understand something, if we don't know where to start, if we want people to support us - there's one simple trick: just ask. We tend to feel like there is some social cost to asking, but simply asking can provide extremely valuable support and only has downsides if we do it too often. So far, most of us don't do it enough.

Great point! Relatedly, for next EAG I want to have a small set of questions to ask people (last time it was "What do you think of insect suffering?"). I would highly recommend others come in with thought-provoking questions they want answered. Brainstorming:

  1. What is one trait/idea/action you wish the EA community had/understood/took? (or substitute "people" for "EA community")
  2. What is one book you would recommend and why?
  3. (Takes out $5) Where should I donate this?
  4. What do you need help with in your life?
  5. What are you most uncertain about?
  6. What would make you change your cause prioritization?
  7. Who do you most look up to?

Warning: At EAG Berkeley at least, not everyone you meet will be able to answer all of these questions, because some are newer to the movement - adapt accordingly.

Perhaps print off some business cards (even crude, informal ones), with whatever information you wish to impart to people - contact or otherwise.

Try to write a summary of each day before you go to sleep (or a summary of the whole trip). I made a post on Facebook and tagged some of the people I met (you will have to friend them first in order to tag them).

Any other ideas? :-)

Comment author: BenHoffman 05 May 2017 09:08:45AM 5 points [-]

Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than the things they do for themselves.

Deworming is an especially stark example. The mass drug administration program goes into schools and forces all the children, whether sick or healthy, to swallow giant poisonous pills that give them bellyaches, in the hope that killing the worms this way buys big improvements in life outcomes. GiveWell estimates the effect at about 1.5% of what the studies say, but the expected value is still high. This could also involve a lot of unnecessary harm via unnecessary treatments.

By contrast, the less paternalistic Living Goods (a recent GiveWell "standout charity") sells deworming pills at or near cost, so we should expect better targeting toward kids actually sick with worms, and repeat business is more likely if the pills seem helpful.

I wrote a bit about this here: http://benjaminrosshoffman.com/effective-altruism-not-no-brainer/

Comment author: DonyChristie 07 May 2017 01:30:31AM *  0 points [-]

'Do No Harm - on Problem Solving and Design' talks about fixer solutions vs. solver solutions. Its key points:

  • When solving a problem, are you looking for a fix or are you looking for the cause?
  • For a complex system to be resilient, it must (in any practical sense) be composed of a collection of simpler, resilient parts.
  • The aim of a resilient part is to normalize (make consistent) the things that matter, and to minimize (dampen or hide) the things that don't.
  • If you can't solve an existing problem, how do you know you aren't causing more?
  • Resilience invariably relies upon feedback loops, and so the variables involved must be free to move through some critical range or the feedback is broken and the resilience is lost.

Its concluding paragraph:

The bottom line is: there is no escaping the need to make best effort to understand the whys and wherefores. To skip this and go straight for the fix is not humbly admitting the system is too complex to understand, it is arrogantly assuming we understand it well enough to fix it without breaking it. The humble thing to do is less, not more--to respect the difficulty in keeping any complex system stable, and the degree to which doing so relies upon a best effort by, and freedom of, each component to self-regulate.

Highly recommended read! :D

Comment author: DonyChristie 02 May 2017 07:00:19AM 3 points [-]

The Life You Can Save should become cause-neutral and recommend effective interventions from cause areas other than global poverty (e.g. animal welfare charities like The Humane League, or global catastrophic risk charities like the Ploughshares Fund).


Comment author: Peter_Hurford  (EA Profile) 23 April 2017 07:16:06PM 1 point [-]

I don't understand the question.

Comment author: DonyChristie 24 April 2017 09:38:50PM 0 points [-]

Allocating grants according to a ranked-preference vote of an arbitrary number of people (who also write up their arguments): what is the optimal number here? Where is the inflection point at which adding more people decreases the quality of the grants?

On a third reading I realize I somewhat misconstrued "three fund managers" as "three fund managers per fund" rather than "the three fund managers we have right now (Nick, Elie, Lewis)", but the possibility is still interesting in any variation.
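To make the mechanism I have in mind concrete, here is a minimal sketch (purely my own illustration, not anything EA Funds actually does) of aggregating ranked grant preferences from a panel of voters with a simple Borda count. The grant names, budget figure, and helper function are all hypothetical:

    from collections import defaultdict

    def borda_allocate(ballots, total_budget):
        """Toy Borda-count aggregation of ranked grant preferences.

        ballots: list of rankings, each an ordered list of grant names
                 from most to least preferred by one voter.
        total_budget: amount split across grants in proportion to score.
        Returns a dict mapping grant name -> allocated amount.
        """
        scores = defaultdict(float)
        for ranking in ballots:
            n = len(ranking)
            for position, grant in enumerate(ranking):
                # Top choice earns n-1 points, the next n-2, ..., the last 0.
                scores[grant] += (n - 1) - position
        total_score = sum(scores.values()) or 1.0
        return {grant: total_budget * s / total_score
                for grant, s in scores.items()}

    # Example: three voters ranking three hypothetical grants.
    ballots = [
        ["Grant A", "Grant B", "Grant C"],
        ["Grant B", "Grant A", "Grant C"],
        ["Grant A", "Grant C", "Grant B"],
    ]
    print(borda_allocate(ballots, total_budget=100000))

This doesn't answer the quality question, but it makes it easy to simulate how the allocation shifts as voters are added, which is one way to probe where the inflection point might be.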

Comment author: RyanCarey 23 April 2017 11:08:18PM *  19 points [-]

Some feedback on your feedback (I've only quickly read your post once, so take it with a grain of salt):

  • I think this is more discursive than it needs to be. AFAICT, you're basically arguing that decision-making and trust in the EA movement are over-concentrated in OpenPhil.
  • If it was a bit shorter, then it would also be easier to run it by someone involved with OpenPhil, which prima facie would be at least worth trying, in order to correct any factual errors.
  • It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?
  • So maybe this could have been split into two posts?
  • Maybe there are more upsides to having somewhat concentrated decision-making than you let on? Perhaps cause prioritization will be better? Since EA Funds is a movement-wide scheme, perhaps reputational trust is extra important here, and the diversification would come from elsewhere? Perhaps the best decision-makers will naturally come to work on this full-time.

You may still be right, though I would want some more balanced analysis.

Comment author: DonyChristie 24 April 2017 06:43:58AM 8 points [-]

I enjoyed the SSC-style length and thought it helpful in fully explicating his arguments. :) It may be that an artificially shortened version of the post would not be listened to as much. But perhaps a TL;DR could go at the top.
