
kbog comments on An intervention to shape policy dialogue, communication, and AI research norms for AI safety - Effective Altruism Forum

Comment author: kbog 02 October 2017 08:27:26PM -1 points

Meh, that makes it sound too narrowly technical - there are a lot of ways that advanced AI can cause problems, and they don't all fit into the narrow paradigm of a system running into bugs/accidents that can be fixed with better programming.

Comment author: Nick_Robinson 03 October 2017 07:15:09PM 2 points

This seems unnecessarily rude to me, and doesn't engage with the post. For example, I don't see the post anywhere characterising accidents as only coming from bugs in code, and it seems like this dismissal of the phrase 'AI accidents' would apply equally to 'AI risk'.

Comment author: SoerenMind 03 October 2017 03:09:41PM 1 point

OpenPhil's notion of 'accident risk' is more general than yours - they use it to describe the scenarios that aren't misuse risk, and their term makes perfect sense to me: https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity

Comment author: kbog 05 October 2017 01:58:28AM 0 points

Yeah, well I don't think we should only be talking about accident risk.

Comment author: Lee_Sharkey 03 October 2017 12:58:42PM 0 points

What do you have in mind? If these problems can't be fixed with better programming, how will they be fixed?

Comment author: kbog 05 October 2017 01:59:03AM 0 points

Better decision theory, which is much of what MIRI does, and better guiding philosophy.

Comment author: Lee_Sharkey 06 October 2017 12:32:52PM 0 points

I agree that more of both is needed. Both need to be instantiated in actual code, though. And both are useless if researchers don't care to implement them.

I admit I would benefit from some clarification on your point - are you arguing that the article assumes a bug-free AI won't cause AI accidents? Did this impression arise from Amodei et al.'s definition: “unintended and harmful behavior that may emerge from poor design of real-world AI systems”? Poor design of real-world AI systems isn't limited to bugs in code, but I can see why this might have caused confusion.

Comment author: kbog 06 October 2017 06:39:31PM 0 points

"are you arguing that the article assumes a bug-free AI won't cause AI accidents?"

I'm not - I'm saying that when you phrase it as 'accidents', it creates flawed perceptions about the nature and scope of the problem. An accident sounds like a one-time event that a system causes in the course of its performance; AI risk is about systems whose performance itself is fundamentally destructive. Accidents are aberrations from normal system behavior; the core idea of AI risk is that any known specification of system behavior, when followed comprehensively by an advanced AI, is not going to work.