Gregory_Lewis comments on Four Organizations EAs Should Fully Fund for 2018 - Effective Altruism Forum

Comments (27)

Comment author: Gregory_Lewis 19 December 2017 12:38:18AM 2 points

[Note: I work on existential risk reduction]

Although I laud posts like the OP, I'm not sure I understand this approach to uncertainty.

I think a lot turns on what you mean by the AI cause area being "plausibly better" than global poverty or animal welfare on EV. The Gretchenfrage (the crux question) seems to be this conditional forecast: "If I spent (let's say) 6 months looking at the AI cause area, would I expect to identify better uses of marginal funding in this cause area than those I find in animal welfare and global poverty?"

If the answer is "plausibly so, but probably not" (either due to a lower 'prima facie' central estimate, or after pricing in regression to the mean etc.), then I understand the work uncertainty is doing here (modulo the usual points about VoI): one can't carefully look at everything, and one has to make some judgments on what cause areas look most promising to investigate on current margins.

Yet if the answer is "Probably, yes", then offering these recommendations simpliciter (i.e. "EA should fully fund this") seems premature to me. The evaluation is valuable, but should be presented with caveats like, "Conditional on thinking global poverty is the best cause area, fund X; conditional on thinking animal welfare is the best cause area, fund Y (but, FWIW, I believe AI is the best cause area, though I don't know what to fund within it)." It would also lean against making one's own donations to X, Y, etc., rather than spending time thinking about it or following the recommendations of someone one trusts to make good picks in the AI cause area.

Comment author: Peter_Hurford  (EA Profile) 19 December 2017 02:10:49AM 3 points

If the answer is "plausibly so, but probably not" (either due to a lower 'prima facie' central estimate, or after pricing in regression to the mean etc.)

This is what captures my views best right now.

Comment author: VinceB 31 December 2017 08:34:21PM 0 points

To attempt to complement what Peter already said,

one can't carefully look at everything, and one has to make some judgments on what cause areas look most promising to investigate on current margins.

This is why EA rarely falls into what can accurately be described as a "streetlight effect". We aren't looking for one set of keys; we're looking for a bunch of keys (threats to human welfare), and there's a bunch of us drunkards, all with differing abilities and expertise. So I'd argue that where it's dark, those with the expertise need to start building streetlights, but if the light is getting brighter in certain areas (RCTs in health), then we need people there too.