Michael_S comments on An intervention to shape policy dialogue, communication, and AI research norms for AI safety - Effective Altruism Forum




Comment author: Michael_S 01 October 2017 09:36:50PM 6 points

It might make a lot of sense to test the risk vs. accidents framing on the next survey of AI researchers.

Comment author: kbog 05 October 2017 08:10:20PM 0 points

You will have to be sure that the researchers actually know what you mean, though. AI researchers are already concerned about accidents in the narrow sense, so they could respond positively to the idea of preventing AI accidents merely because they have something else in mind (like keeping self-driving cars safe).

If we accept this switch to language that is appealing at the expense of precision, then eventually we will reach a motte-and-bailey situation, where the motte is the broad idea of 'preventing accidents' and the bailey is the specific long-term AGI scheme outlined by Bostrom and MIRI. You'll get fewer funny looks, but only by conflating and muddling the issues.