purplepeople comments on Ideological engineering and social control: A neglected topic in AI safety research? - Effective Altruism Forum

Comment author: purplepeople 01 September 2017 08:24:30PM 5 points [-]

The last chapter of Global Catastrophic Risks (Bostrom and Ćirković) covers global totalitarianism. Among other things, the authors mention how improved lie-detection technology, anti-aging research (to mitigate the risks of regime change), and drugs that increase docility in the population could plausibly make a totalitarian system permanent and stable. Obviously, an unfriendly AGI could easily do this as well.

Comment author: WillPearson 01 September 2017 09:47:02PM *  0 points [-]

Increasing docility could be a stealth existential-risk increaser: people would be less willing to challenge other people's ideas, which could slow, or stop entirely, the technological progress we need to save ourselves from supervolcanoes and other environmental threats.