
Hi all! First I want to say that I have really enjoyed this forum over the past few months, and I eventually decided to create an account to post this question. I am still in the process of writing the short version of this question, so thank you for bearing with me in the long version.

As some of you may know, last year saw unprecedented uprisings against totalitarian regimes. As a Chinese national active in the diaspora dissent community, I have never been more encouraged by the courage and creativity of my people; as an ML practitioner, I am increasingly worried that AGI will become the most powerful governing tool humanity has ever seen.

China's Zero-COVID policy gave us a first taste of what this future would feel like: personal location tracking limited your freedom of movement; a remote "system" decided what you could or could not do, informing you via an app on your phone; and when you tried to push back, it was like hitting a bureaucratic wall.

Most importantly, Zero-COVID gave rise to a whole value system that sees society as a simplified trolley problem: the government, an all-knowing entity, holds the lever and decides what is best for the whole. Collectivism is equated with altruism, individualism with selfishness, and the most honorable thing for an individual to do is to obey. This value system is pretty compelling, and it has been pushed onto every grade-school kid. The US's failure and massive death toll also serve as a convenient gotcha.

Needless to say, many people in China do not subscribe to this value system, but many do, and more often than not it is the latter group who are the agents of your day-to-day suppression. The policy eventually collapsed partly due to the uprising, but even at the height of the uprising there was still significant momentum on the pro-Zero-COVID side to keep the policy going. My suspicion is that what eventually brought down Zero-COVID was the unbearable price tag, especially for local governments. However, I can easily see that if COVID had happened in 2030 instead of 2020 (10 years are nothing in earth years), the price tag would have been much more sustainable.

It is no news that 1) AI tends to converge toward monopoly, and 2) totalitarian regimes will want to use AI to extend their power. We also know that 3) AI alignment seeks to give us the ability to embed our values into AI. I deeply worry about the gentle seduction of AI technology in China: seducing us into yielding more and more of our agency to an AGI aligned with a value system that represents the interests of the ruling entity, with less and less room for pushing back.

6 Answers

Yes, I think this is very scary. I think this kind of risk is at least 10% as important as the AI takeover risks that I work on as an alignment researcher.

I am almost inclined to believe this will bring more imminent ruin to humanity. Xi and Putin are both aging, but they will likely both live long enough to use AI to significantly extend their power and create a more obedient populace. Dictators get more paranoid as they age; I am afraid the combination of this paranoia and their perceived increase in power will encourage them to wage global war.

May I join this diaspora dissent community lol

Off topic but worth it: citizensdailycn on Instagram is a great place to start.

Ryze: Thanks, I'll check it out!

Short answer: yes, I do worry.

Longer answer: Baum 2020 is my favorite reading recommendation for framing the problem, though it doesn't address totalitarianism in particular, nor the threats from specific players on the game board.

Related topic: see Bostrom's vulnerable world hypothesis (VWH), which frames a debate about x-risk as a possible moral case for restricting freedoms. One criticism that floats around is that VWH was irresponsible to publish, because a nasty government could point to it as a rationalization for actions it wanted to take anyway.

Thank you for pointing me to the Baum article! Will take a read.

I do worry about it. Some additional worries I have: 1) if AI is transformative and confers strong first-mover advantages, then a private company leading the AGI race could quickly become comparably powerful to a totalitarian government; and 2) if the owners of AI depend far less on popular support for their power than today's powerful organisations do, they might be generally less benevolent than today's powerful organisations.

Definitely. The day humanity figures out the alignment problem is also the day the CCP gains unlimited intelligence and power. Humanity may avoid extinction, but the same innovation simultaneously drives the world toward stable global totalitarianism. In fact, the CCP, with its sharp focus on power, may even be the entity to seriously invest in and figure out this problem in the first place.

Not really? Given my personal impression of the difficulty of the alignment problem, I would consider humanity very lucky if AGI managed to follow any set of human-defined values at all.

Also, it seems that most downsides of totalitarian regimes ultimately boil down to a lower quality of life among citizens. (For instance, a government that suppresses dissent is bad. But dissent is only valuable in that it may lead to reforms of the government, which may lead to improved lives for citizens.) Strong AI, if truly aligned with a government's aims, would probably increase the average person's quality of life to the point where this wouldn't be an issue. (Even totalitarian governments presumably prefer a better quality of life for their citizens, all else equal.)

That assumes the totalitarian leaders' interests are mostly aligned with the people's interests, which is likely in the short term but a big IF in the long run. I worry that AGI will create more obedient citizens because its decisions are almost unchallengeable, and that the legitimacy borrowed from AGI will encourage rulers to take bolder actions. For instance, China and Russia both have aging leaders who might serve until their deaths, and we already know from history that old dictators tend to become more paranoid.

Comments

I'm worried about this a non-zero amount.

But in the longer run I'm relatively optimistic about most futures where humans survive and continue making decisions. The future will last a very long time, and it's not uncommon for totalitarian governments to liberalize as decades or centuries wear on. Where there is life, there is hope.
