Daniel_Eth comments on Ideological engineering and social control: A neglected topic in AI safety research? - Effective Altruism Forum

Comment author: Daniel_Eth 03 September 2017 05:56:41AM

I'd imagine there are several reasons this question hasn't received as much attention as AGI safety, but the main ones are that it's both much lower impact and (arguably) much less tractable. It's lower impact because, as you said, it's not an existential risk. It's less tractable because even if we could find a technical solution, strong vested interests would oppose applying it (in contrast to AGI safety, where every vested interest wants the AI to be aligned).

I'd imagine this sort of tech would actually decrease the risk from bioweapons and the like, for the same reason I'd imagine it would decrease terrorism generally, but I could be wrong.

Regarding the US in particular, I'm personally much less worried about corporations pushing their preferred ideologies than about them using the tech to manipulate us into buying stuff and watching their media; companies tend to be much more focused on profits than on pushing ideologies.