Brian_Tomasik comments on Some considerations for different ways to reduce x-risk - Effective Altruism Forum

Comment author: Brian_Tomasik, 05 February 2016 01:27:51PM

The point is that, absent some mechanism for fixing societal changes in place, their (expected) impact decays over time.

AGI is plausibly such a mechanism. MIRI and Bostrom seem to place reasonable probability on a goal-preserving superintelligence (since goal preservation is a basic AI drive). AGI could preserve values more robustly than any human institution can: worker robots can be programmed to resist goal drift, and in a singleton scenario without competition, evolutionary pressures won't select for new values.

So the values that people hold over the next few centuries could matter a great deal for the quality of the future from now until the stars die out, at least in scenarios where human values are loaded to a nontrivial degree into the dominant AGI(s).