Comment author: Michael_PJ 26 March 2017 06:35:05PM 1 point [-]

One important thing to remember is that important projects may not look very credible initially. Any early-stage EA funding body needs to ask itself "would we fund an early-stage 80k?".

Comment author: Ben_Todd 26 March 2017 09:12:25PM 3 points [-]

Bear in mind that before we spent any money we had: been involved in 10-20 important plan changes that would already justify significant funding; built a website with thousands of views per month; and received top level press coverage.

Comment author: khaozavr 23 March 2017 01:57:05PM 1 point [-]

An interesting exchange, although I feel like the rebuttal somewhat misrepresents Gabriel's argument regarding systemic change. A steelman version of his argument would factor in quantification bias, pointing out that because expected-value estimates for some systemic-change interventions carry extreme uncertainty, something like AMF would usually easily come out on top.

I read him as saying that the EA community would not have supported e.g. the abolitionist movement had it been around at the time, precisely because of the difficulties in EV calculations, and I agree with him on that.

(I also think that OpenPhil does very important work in that direction)

Comment author: Ben_Todd 25 March 2017 04:44:40AM 7 points [-]

I read him as saying that the EA community would not have supported e.g. the abolitionist movement had it been around at the time, precisely because of the difficulties in EV calculations, and I agree with him on that.

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of his day.

Turning to current issues, ending factory farming is also a cause that likely requires large scale social change through advocacy, and lots of EAs work on that.

Comment author: Ben_Todd 23 March 2017 12:45:32AM *  2 points [-]

Update: We made our target!

More details here: https://groups.google.com/forum/#!topic/80k_updates/Ix8AOdphML8

Comment author: Ben_Todd 28 February 2017 12:54:45AM 6 points [-]

It might also be useful to link to this: https://80000hours.org/problem-profiles/artificial-intelligence-risk/

And we're currently working on a significant update.

Comment author: vipulnaik 25 February 2017 05:38:56AM *  6 points [-]

One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.

From my list, the posts I identified as clearly vague:

http://www.openphilanthropy.org/blog/radical-empathy got 1 comment (a question that hasn't been answered)

http://www.openphilanthropy.org/blog/worldview-diversification got 1 comment (a single sentence praising the post)

http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing got 6 comments

http://blog.givewell.org/2016/12/22/front-loading-personal-giving-year/ got 8 comments

In contrast, the posts I identified as sufficiently specific (even though they tended to be fairly technical):

http://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/ got 17 comments

http://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/ got 14 comments

http://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms got 27 comments

http://blog.givewell.org/2016/12/12/amf-population-ethics/ got 7 comments

If engagement is any indication, then people really thirst for specific, concrete content. But that's not necessarily in contradiction with Holden's point, since his goal isn't to generate engagement. In fact, comment engagement can even be viewed negatively in his framework, because more comments mean more effort to respond to and keep up with.

Comment author: Ben_Todd 28 February 2017 12:30:31AM *  10 points [-]

Just my rough impression, but I find that controversial or flawed posts get comments, whereas posts that make a solid, concrete, well-argued point tend not to generate much discussion. So I don't think comment counts are a good measure of a post's value to the community.

Comment author: Ben_Todd 10 February 2017 01:05:28PM 1 point [-]

Also see this paper on the topic: http://escholarship.org/uc/item/1hw1k2ps

Comment author: Telofy  (EA Profile) 08 February 2017 09:37:48PM 1 point [-]

It's fascinating how diverse the movement is in this regard. I've only found a single moral realist EA who has thought about metaethics and could argue for it. Most EAs around me are antirealists or haven't thought about it.

(I'm an antirealist because I don't know of any convincing arguments to the contrary.)

In response to comment by Telofy  (EA Profile) on Anonymous EA comments
Comment author: Ben_Todd 09 February 2017 10:42:18AM 6 points [-]

My impression is that many of the founders of the movement are moral realists and professional moral philosophers e.g. Peter Singer published a book arguing for moral realism in 2014 ("The Point of View of the Universe").

Comment author: SoerenMind  (EA Profile) 06 February 2017 11:41:46PM *  4 points [-]

An approximate solution is to exploit your best opportunity 90% of the time, then randomly select another opportunity to explore 10% of the time.

This is the epsilon-greedy strategy with epsilon = 0.1, which is probably a good rule of thumb when one's prior for each of the causes has a thin-tailed distribution (e.g. Gaussian). The optimal value of epsilon increases with the variance in our prior for each of the causes. So if our confidence interval for a cause's cost-effectiveness spans more than an order of magnitude (high variance), a higher value of epsilon could be better. The point is: the rule of thumb doesn't really apply when you think some causes are much better than others and you have plenty of uncertainty.
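To make that concrete, here is a minimal sketch of the epsilon-greedy rule in Python (the effectiveness estimates and the epsilon value are hypothetical, purely for illustration):

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Explore a uniformly random option with probability epsilon;
    otherwise exploit the option with the best current estimate."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

# Hypothetical running estimates of cost-effectiveness for three causes.
estimates = [1.0, 3.5, 2.2]
pick = epsilon_greedy(estimates, epsilon=0.1)  # index of the cause to fund this round
```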

That said, if you had realistic priors for the effectiveness of each cause, you could calculate an optimal solution using Gittins indices.
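Exact Gittins indices are nontrivial to compute. As a hedged illustration only, Thompson sampling is a standard heuristic that uses such priors directly and is known to perform well on bandit problems; here is a minimal sketch, assuming hypothetical Beta priors over each cause's effectiveness (the cause names and prior parameters are made up):

```python
import random

# Hypothetical Beta(alpha, beta) priors over each cause's effectiveness.
priors = {"cause_a": (2.0, 8.0), "cause_b": (1.0, 1.0), "cause_c": (5.0, 15.0)}

def thompson_pick(priors):
    """Draw one plausible effectiveness value from each cause's prior
    and pick the cause whose draw is highest. Updating the chosen
    cause's prior with each observed outcome balances exploration
    and exploitation automatically."""
    samples = {c: random.betavariate(a, b) for c, (a, b) in priors.items()}
    return max(samples, key=samples.get)

pick = thompson_pick(priors)  # the cause to fund this round
```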

Comment author: Ben_Todd 07 February 2017 01:44:19PM 0 points [-]

Interesting!

Comment author: RyanCarey 05 February 2017 06:25:36PM 7 points [-]

Yes! Probably when we think of Importance, Neglectedness, and Tractability, we should also consider informativeness!

Comment author: Ben_Todd 06 February 2017 03:27:59PM 3 points [-]

We've considered wrapping it into the problem framework in the past, but it can easily get confusing. Informativeness is also more a feature of how you go about working on a cause than of which cause you're focused on.

The current way we show that we think VOI is important is by listing Global Priorities Research as a top area (though I agree that doesn't quite capture it). I also talk about it often when discussing how to coordinate with the EA community (VOI is a bigger factor from the community perspective than from an individual perspective).

Comment author: Ben_Todd 05 February 2017 05:31:16PM 7 points [-]

Thanks for the post. I broadly agree.

There are some more remarks on "gaps" in EA here: https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/

Two quick additions:

1) I'm not sure spending on RCTs is especially promising. Well-run RCTs with enough statistical power to actually update you can easily cost tens of millions of dollars, so you'd need to be considering spending hundreds of millions for funding them to be worth it. We're only just getting to that scale. GiveWell has considered funding RCTs in the past and rejected it, I think for this reason (though I'm not sure).

2) It might be interesting for someone to think more about multi-armed bandit problems, since they seem like a good analogy for cause selection. An approximate solution is to exploit your best opportunity 90% of the time and randomly select another opportunity to explore 10% of the time. https://en.wikipedia.org/wiki/Multi-armed_bandit
