Lee_Sharkey comments on An intervention to shape policy dialogue, communication, and AI research norms for AI safety - Effective Altruism Forum



Comment author: Lee_Sharkey 03 October 2017 12:58:42PM 0 points

What do you have in mind? If they can't be fixed with better programming, how will they be fixed?

Comment author: kbog (EA Profile) 05 October 2017 01:59:03AM 0 points

Better decision theory, which is much of what MIRI does, and better guiding philosophy.

Comment author: Lee_Sharkey 06 October 2017 12:32:52PM 0 points

I agree that more of both is needed. Both need to be instantiated in actual code, though. And both are useless if researchers don't care to implement them.

I admit I would benefit from some clarification on your point - are you arguing that the article assumes a bug-free AI won't cause AI accidents? Is it the case that this arose from Amodei et al.'s definition: "unintended and harmful behavior that may emerge from poor design of real-world AI systems"? Poor design of real-world AI systems isn't limited to bugs, but I can see why this might have caused confusion.

Comment author: kbog (EA Profile) 06 October 2017 06:39:31PM 0 points

are you arguing that the article assumes a bug-free AI won't cause AI accidents?

I'm not - I'm saying that when you phrase it as accidents, it creates flawed perceptions about the nature and scope of the problem. An accident sounds like a one-time event that a system causes in the course of its performance; AI risk is about systems whose performance itself is fundamentally destructive. Accidents are aberrations from normal system behavior; the core idea of AI risk is that any known specification of system behavior, when followed comprehensively by advanced AI, is not going to work.