FeepingCreature comments on Strategic implications of AI scenarios - Effective Altruism Forum

Comment author: FeepingCreature 30 June 2017 11:42:36AM 3 points [-]

But of course, I cannot justify high confidence in these views given that many experts disagree. Following the analysis of this post, this is

Dangling sentence.

In my personal belief, the "hard AI takeoff" scenarios are driven mostly by the belief that current AI progress flows largely from a single skill, namely "mathematics/programming". So while AI will continue to develop at disparate rates, achieving superhuman performance in different areas at different times, an ASI takeoff will be driven almost entirely by AI performance in software development, and once AI becomes superhuman in this skill it will rapidly become superhuman in all skills. This seems obvious to me, and I think disagreements with it have to rest largely on hidden difficulties in "software development", such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems like it's almost circularly "AGI-complete").

Comment author: Brian_Tomasik 02 July 2017 07:36:08AM 2 points [-]

disagreements with it have to rest largely on hidden difficulties in "software development", such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems like it's almost circularly "AGI-complete").

What do you make of that objection? (I agree with it. I think programming efficiently and flexibly across problem domains is probably AGI-complete.)

Comment author: turchin 01 July 2017 05:56:33PM 1 point [-]

Does this mean we could try to control AI by preventing it from knowing anything about programming?

And on the other side, should any AI that is able to write code be regarded as extremely dangerous, no matter how low its abilities in other domains?

Comment author: Daniel_Eth 19 July 2017 04:10:31AM *  0 points [-]

My 2 cents: math/programming is only half the battle. Here's an analogy: you could be the best programmer in the world, but if you don't understand chess, you can't program a computer to beat a human at chess, and if you don't understand quantum physics, you can't program a computer to simulate matter at the atomic scale (well, not using ab initio methods, anyway).

In order to get an intelligence explosion, a computer would have to not only have great programming skills, but also really understand intelligence. And intelligence isn't just one thing - it's a bunch of things (creativity, memory, planning, social skills, emotional skills, etc.), and these can be subdivided further into different fields (physics, design, social understanding, social manipulation, etc.). I find it hard to believe that the same computer would go from not superhuman to superhuman in almost all of these all at once. Obviously computers already outcompete humans in many of these areas, but I think even on the more "human" traits, and in areas where computers act more like agents than just tools, progress is still more likely to happen in several waves than in a single takeoff.