WillPearson comments on My current thoughts on MIRI's "highly reliable agent design" work - Effective Altruism Forum

Comments (57)
Comment author: WillPearson 09 July 2017 10:07:31AM 0 points [-]

My own 2 cents: it depends a bit on what form of general intelligence is made first. There are at least two possible models.

  1. Super intelligent agent with a specified goal
  2. External brain lobe

With the first, you need to be able to specify human preferences in the form of a goal, which enables it to pick the right actions.

The external brain lobe would start out not very powerful and would not come with any explicit goals; instead it would be hooked into the human motivational system and develop goals shaped by human preferences.

HRAD is explicitly about the first model. I would like both to be explored.

Comment author: JesseClifton 09 July 2017 05:17:07PM *  0 points [-]

Right, I'm asking how useful or dangerous your (1) could be if it didn't have very good models of human psychology, and therefore didn't understand things like "humans don't want to be killed".