kbog comments on In defence of epistemic modesty - Effective Altruism Forum



Comment author: kbog, 30 October 2017 07:39:53AM, 2 points

On many object-level claims, such as the probability that there will be an intelligence explosion, EAs and AI researchers differ very little. This survey demonstrates it: https://arxiv.org/abs/1705.08807

AI researchers are just more likely to hold attitudes such as "anything less than ~10% likely to occur should be ignored" or "existential risks are not orders of magnitude more important than other causes", and similar kinds of judgement calls.

The one major technical issue on which EAs might systematically differ from AI researchers is the validity of current research in addressing the problem.

Comment author: WillPearson, 30 October 2017 09:23:34AM, 1 point

Is there any data on how likely EAs think explosive progress after HLMI is? I would have thought it was more than 10%.

I would also have expected more debate about explosive progress, beyond just the recent Hanson-Yudkowsky flare-up, if there was as much doubt in the community as that survey suggests.