oge comments on Prioritization Consequences of "Formally Stating the AI Alignment Problem" - Effective Altruism Forum




Comment author: oge 26 February 2018 07:01:26PM 0 points

Thanks, Gordon.

"Make nice AI people we can believe are nice" makes sense to me; I hadn't been aware of the "...we can believe are nice" requirement.