Paul_Christiano comments on What Should the Average EA Do About AI Alignment? - Effective Altruism Forum
Posted by Paul_Christiano on 25 February 2017 08:07PM
The difficulty of the policy problem depends on the quality of our technical solutions: how large an advantage can you get by behaving unsafely? If the answer is "you get big advantages for sacrificing safety, and a small group behaving unsafely could cause a big problem" then we have put ourselves in a sticky situation and will need to conjure up some unusually effective international coordination.
A perfect technical solution would make the policy problem relatively easy---if we had a scalable+competitive+secure solution to AI control, then there would be minimal risk from reckless actors. On the flip side, a perfect policy solution would make the technical problem relatively easy since we could just collectively decide not to build any kind of AI that could cause trouble. In reality we are probably going to need both.
(I wrote about this here.)
You could hold the position that the advantages from building uncontrolled AI will predictably be very low even without any further work. I disagree strongly with that and think that it contradicts the balance of public argument, though I don't know if I'd call it "easily corrected."