All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, creating and leading large formations, then they are probably going to have some kind of general intelligence like humans do.
Couldn't it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans without any one of them having general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.
In that case, the intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.
The same can be said for messages that come from non-government sources. Governments have always had an advantage in resources and laws, so they've always held the high ground in information warfare and propaganda, yet dissenting ideas spread frequently all the same. I don't see why that balance would shift.
Likewise, the same reasoning goes for small and independent media and activist groups.
Yeah, it is a problem, though I don't think I would classify it as AI safety. The real issue is one of control and competition. YouTube is effectively a monopoly and Facebook/Twitter are something of a duopoly, and all of them sit in the same Silicon Valley sphere with the same values and goals. Alternatives have little chance of success because of a combination of network effects and the 'Voat Phenomenon': any alternative to the default platform first attracts the extreme types who were the first to be ostracized from the main platform, so the alternative is stuck forever with a repulsive core community and a tarnished reputation. I'm sure AI can be used as a weapon to either support or dismantle the strength of these institutions; it seems better to approach the problem from that general perspective than as a narrowly AI one.