Buck comments on A Complete Quantitative Model for Cause Selection - Effective Altruism Forum

Comments (38)

Comment author: Buck 18 May 2016 07:21:38AM 3 points

I disagree with several of the numbers you use.

For example, Globals$B22 is the utility of a developing-world human. I feel like that number is quite tenuous: I think you're using happiness research pretty differently from how it was intended, so I would not call it a very robust estimate. (Also, "developing-world human" is pretty ambiguous. My guess is that you mean the level of poverty experienced by the people affected by GiveDirectly and AMF; is that correct?)

I think the probability of death from AI that you give is too low. You've done the easy, defensible, and epistemically-virtuous-looking thing of using a survey of expert opinion. But I don't think that actually yields a very good best guess, any more than measuring the proportion of Americans who are vegetarian is a good way to estimate the probability that chickens have moral value.

What do you mean by "size of FAI community"? If you mean "number of full-time AI safety people", I think your estimate is way too high. There are, like, maybe 50 AI safety people? So you're estimating that we at least quadruple that? I also don't quite understand the relevance of the link to https://intelligence.org/2014/01/28/how-big-is-ai/.

I also have some more general concerns about how you treat uncertainty: I think it plausibly makes some sense that variance in your estimate of chicken sentience should decrease your estimate of effectiveness of cage-free campaigns. I'll argue for this properly at some point in the future.
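To sketch why variance alone can lower the estimate: if effectiveness is a concave function of chicken sentience, Jensen's inequality implies that a mean-preserving spread in the sentience estimate reduces expected effectiveness. The square-root mapping and all numbers below are my own illustrative assumptions, not values from the model:

```python
def effectiveness(sentience):
    """Hypothetical concave mapping from sentience to campaign value."""
    return sentience ** 0.5

# Two equally likely sentience estimates with the same mean (0.3)
low, high = 0.1, 0.5
mean = (low + high) / 2

point_estimate = effectiveness(mean)                        # ignores uncertainty
expected_value = (effectiveness(low) + effectiveness(high)) / 2

print(round(point_estimate, 3))  # 0.548
print(round(expected_value, 3))  # 0.512 -- lower: variance hurts under concavity
```

The gap between the two numbers is exactly the penalty that uncertainty imposes when the payoff function is concave; with a linear payoff the two would coincide.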

Great work overall; I'm super excited to see this! This kind of work has a lot of influence on my donations.

Comment author: MichaelDickens 18 May 2016 01:37:38PM 2 points

Thanks for the feedback, Buck.

  1. I updated the description for "developing world human" to be less ambiguous.

  2. I agree that the survey respondents are too optimistic. I thought I had changed the numbers to be more pessimistic but apparently I didn't.

  3. Considering how quickly the AI safety field is growing, I would be surprised if there were fewer than 200 AI safety researchers within 20 years (and it's pretty unlikely that AGI is developed before then). I use "How Big Is AI" here.
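As a back-of-the-envelope check (the 50 and 200 figures are from this thread; the arithmetic is mine), growing from 50 to 200 researchers over 20 years only requires a modest compound growth rate:

```python
current, target, years = 50, 200, 20

# Compound annual growth rate needed to quadruple over 20 years:
# solve current * (1 + r)**years == target for r
annual_growth = (target / current) ** (1 / years) - 1

print(f"{annual_growth:.1%} per year")  # 7.2% per year
```

Roughly 7% annual growth, well below what a rapidly growing field typically sustains, which is consistent with treating 200 as a conservative lower bound.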