Comment author: Ben_Todd 10 February 2017 01:05:28PM 1 point [-]

Also see this paper on the topic: http://escholarship.org/uc/item/1hw1k2ps

Comment author: Telofy  (EA Profile) 08 February 2017 09:37:48PM 0 points [-]

It's fascinating how diverse the movement is in this regard. I've only found a single moral realist EA who had thought about metaethics and could argue for realism. Most EAs around me are antirealists or haven't thought about it.

(I'm an antirealist because I don't know of any convincing arguments to the contrary.)

In response to comment by Telofy  (EA Profile) on Anonymous EA comments
Comment author: Ben_Todd 09 February 2017 10:42:18AM 6 points [-]

My impression is that many of the founders of the movement are moral realists and professional moral philosophers; for example, Peter Singer published a book arguing for moral realism in 2014 ("The Point of View of the Universe").

Comment author: SoerenMind  (EA Profile) 06 February 2017 11:41:46PM *  4 points [-]

An approximate solution is to exploit your best opportunity 90% of the time, then randomly select another opportunity to explore 10% of the time.

This is the epsilon-greedy strategy with epsilon = 0.1, which is probably a good rule of thumb when one's prior for each cause has a thin-tailed distribution (e.g. Gaussian). The optimal value of epsilon increases with the variance of our prior for each cause. So if our confidence interval for a cause's cost-effectiveness spans more than an order of magnitude (high variance), a higher value of epsilon could be better. The point is that the rule of thumb doesn't really apply when you think some causes are much better than others and you have plenty of uncertainty.

That said, if you have realistic priors for the effectiveness of each cause, you can calculate an optimal solution using Gittins indices.
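For anyone who wants to play with this: below is a minimal epsilon-greedy simulation in Python. The arm means, Gaussian reward noise, and time horizon are illustrative assumptions on my part, not figures from the discussion above.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=10_000):
    """Simulate epsilon-greedy allocation across hypothetical causes."""
    n_arms = len(true_means)
    counts = [0] * n_arms              # pulls per arm
    estimates = [0.0] * n_arms         # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if random.random() < epsilon:  # explore: random arm 10% of the time
            arm = random.randrange(n_arms)
        else:                          # exploit: arm with best current estimate
            arm = max(range(n_arms), key=lambda i: estimates[i])
        reward = random.gauss(true_means[arm], 1.0)  # noisy observed payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return total_reward / steps

# Hypothetical per-unit effectiveness of three causes (assumed values)
print(epsilon_greedy([1.0, 1.2, 0.8]))
```

With wide, fat-tailed priors the fixed 10% exploration rate underexplores, which is the point made above about raising epsilon.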

Comment author: Ben_Todd 07 February 2017 01:44:19PM 0 points [-]

Interesting!

Comment author: RyanCarey 05 February 2017 06:25:36PM 7 points [-]

Yes! Probably when we think of Importance, Neglectedness, and Tractability, we should also consider informativeness!

Comment author: Ben_Todd 06 February 2017 03:27:59PM 3 points [-]

We've considered wrapping it into the problem framework in the past, but it can easily get confusing. Informativeness is also more a feature of how you go about working on the cause than of which cause you're focused on.

The current way we show that we think VOI is important is by listing Global Priorities Research as a top area (though I agree that doesn't quite capture it). I also talk about it often when discussing how to coordinate with the EA community (VOI is a bigger factor from the community perspective than from the individual perspective).

Comment author: Ben_Todd 05 February 2017 05:31:16PM 7 points [-]

Thanks for the post. I broadly agree.

There are some more remarks on "gaps" in EA here: https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/

Two quick additions:

1) I'm not sure spending on RCTs is especially promising. Well-run RCTs with enough statistical power to actually update you can easily cost tens of millions of dollars, so you'd need to be considering spending hundreds of millions for it to be worth it (a rough power calculation follows point 2). We're only just getting to this scale. GiveWell has considered funding RCTs in the past and decided against it, I think for this reason (though I'm not sure).

2) It might be interesting for someone to think more about multi-arm bandit problems, since it seems like it could be a good analogy for cause selection. An approximate solution is to exploit your best opportunity 90% of the time, then randomly select another opportunity to explore 10% of the time. https://en.wikipedia.org/wiki/Multi-armed_bandit
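On point 1, here is a rough back-of-the-envelope sketch of why well-powered RCTs get expensive, using the standard two-sample normal-approximation sample-size formula; the effect size and per-participant cost are illustrative assumptions, not GiveWell's figures.

```python
from scipy.stats import norm

def n_per_arm(effect_size_d, alpha=0.05, power=0.8):
    """Approximate sample size per arm to detect a standardized effect d."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # power requirement
    return 2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2

d = 0.05                      # assumed small standardized effect, common in field trials
cost_per_participant = 1_000  # assumed all-in cost in dollars
n = n_per_arm(d)
print(f"~{n:,.0f} participants per arm; "
      f"rough cost ~${2 * n * cost_per_participant:,.0f} before trial overheads")
```

Under these assumptions you need ~6,300 participants per arm, i.e. roughly $13m before overheads, and smaller effects push the cost well into the tens of millions.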

Comment author: Ben_Todd 05 February 2017 05:13:40PM 3 points [-]

Just a quick note to anyone considering doing this: the relationship between country economic growth and equity returns is really weak.

So I doubt doing something like buying Chinese equities to hedge against increased meat consumption would really work. You'd need to find more exposed bets, like Chinese meat companies, though that will be more costly in terms of lost diversification. The top US AI tech company example seems better.

http://www.economist.com/blogs/buttonwood/2014/02/growth-and-markets

Comment author: Larks 29 January 2017 04:13:11PM 7 points [-]

If you want to get these charities taken off of our article during next year's giving season, then you'd need to speak with Chloe.

In general, the EA movement has an admirable history of public cost-benefit analysis of different groups, which 80k has supported and should continue to support. But in this instance 80k is instead deferring to the opinion of a single expert who has provided only the most cursory of justifications. It's true that 80k isn't responsible for what Chloe says, but 80k is responsible for the choice to defer to her on the subject. And the responsibility is even greater if you present her work as representing the views of the effective altruism movement.

Comment author: Ben_Todd 30 January 2017 10:32:57PM 9 points [-]

Our post is just a summary of where trustworthy EAs recommend donating in the December giving season, which seems like a useful exercise that no-one else had done. It's clearly flagged that that's all it is: we list all the sources we drew on, and note that some recommendations had more support than others. Chloe is an Open Phil grant officer who does full-time research into where to give and is in charge of tens of millions of dollars of funding per year, so she clearly earns a place as a trustworthy EA, and probably has a better claim than many of the other people we included.

Comment author: Ben_Todd 13 January 2017 04:43:01PM 3 points [-]

Hey Peter,

Quick comments on the value of a vote stuff.

First, the figures in our post should not be taken as "estimates of the value of a vote". Rather, we point to various ways you could make such an estimate, and show that with plausible assumptions, you get very high figures. We're not saying these are the figures we believe.

Second, the figures were in terms of "US social value", which can be understood as something like "the value of making a random American $1 wealthier".

You seem to be measuring the value of your time in "GiveWell dollars" i.e. the value of donations to top recommended GiveWell charities.

To convert between the two is tricky, but it's something like:

  • How much better is it to make the global poor wealthier vs. Americans? (suppose 30x)
  • How much better is SCI than cash transfers? (suppose 5x)

In total that gives you a 150x difference.

So $1m of US social value ~ $6700 GiveWell dollars.
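Spelling out that arithmetic in a quick sketch (the multipliers are the suppositions above; the exact quotient is ~$6,667, i.e. roughly $6,700):

```python
poor_vs_americans = 30   # supposed: making the global poor wealthier vs. Americans
sci_vs_cash = 5          # supposed: SCI vs. cash transfers
multiplier = poor_vs_americans * sci_vs_cash  # 150x

us_social_value = 1_000_000
print(f"${us_social_value:,} US social value ≈ "
      f"${us_social_value / multiplier:,.0f} GiveWell dollars")
```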

Comment author: jsteinhardt 12 January 2017 07:19:44PM 19 points [-]

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or would require so much time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high bar for high-quality criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

Comment author: Ben_Todd 12 January 2017 09:17:08PM 8 points [-]

though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better

Interesting. Which groups could we learn the most from?

Comment author: capybaralet 07 January 2017 02:05:59AM 1 point [-]

Do you have any info on how reliable self-reports are with respect to counterfactuals about career changes and GWWC pledging?

I can imagine that people would not be very good at predicting that accurately.

Comment author: Ben_Todd 11 January 2017 09:25:30PM 0 points [-]

Hi there,

It's definitely hard for people to estimate.

When we "impact rate" the plan changes, we also try to make an initial assessment of how much is counterfactually due to us (as well as how much extra impact results non-counterfactually adjusted).

We then do more in-depth analysis of the counterfactuals in crucial cases. Because we think the impact of plan changes is fat-tailed, if we can understand the top 5% of them, we get a reasonable overall picture. We do this analysis in documents like this one: https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/

Each individual case is debatable, but I think there's a large enough volume of cases now to conclude that we're having a substantial impact.
