Comment author: kbog  (EA Profile) 04 September 2017 08:18:06PM 0 points [-]

inverse reinforcement learning could allow AI systems to learn to model the current preferences and likely media reactions of populations, allowing new AI propaganda systems to pre-test ideological messaging with much more accuracy, shaping gov't 'talking points', policy rationales, and ads to be much more persuasive.

The same can be said for messages which come from non-government sources. Governments have always had an advantage in resources and laws, so they've always had the high ground in information warfare/propaganda, but at the same time dissenting ideas are frequently spread. I don't see why the balance would be shifted.

Likewise, the big US, UK, EU media conglomerates could weaponize AI ideological engineering systems to shape more effective messaging in their TV, movies, news, books, magazines, music, and web sites -- insofar as they have any ideologies to promote.

Likewise, the same reasoning goes for small and independent media and activist groups.

Compared to other AI applications, suppressing 'wrong-think' and promoting 'right-think' seems relatively easy. It requires nowhere near AGI. Data mining companies such as Youtube, Facebook, and Twitter are already using semi-automatic methods to suppress, censor, and demonetize dissident political opinions. And governments have strong incentives to implement such programs quickly and secretly, without any public oversight (which would undermine their utility by empowering dissidents to develop counter-strategies). Near-term AI ideological control systems don't even have to be as safe as autonomous vehicles, since their accidents, false positives, and value misalignments would be invisible to the public, hidden deep within the national security state.

Yeah, it is a problem, though I don't think I would classify it as AI safety. The real issue is one of control and competition. Youtube is effectively a monopoly and Facebook/Twitter are sort of a duopoly, and all of them are in the same Silicon Valley sphere with the same values and goals. Alternatives have little chance of success because of a combination of network effects and the 'Voat Phenomenon' (any alternative platform to the default platform will first attract the extreme types who were the first people to be ostracized by the main platform, so that the alternative platform will forever have a repulsive core community and a tarnished reputation). I'm sure AI can be used as a weapon to either support or dismantle the strength of these institutions; it seems better to approach it from a general perspective than just as an AI one.

Comment author: zdgroff 29 August 2017 09:26:00PM 0 points [-]

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do.

Couldn't it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans without having any one of them have general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.

In that case, the intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.

Comment author: kbog  (EA Profile) 30 August 2017 04:27:30AM 0 points [-]

If they are narrow in focus, then it might be easier to provide ethical guidance over their scope of operations.

Comment author: kbog  (EA Profile) 30 August 2017 04:18:46AM *  3 points [-]

I don't think there is a difference between a moral duty and an obligation.

In 2015, there were more than 2000 respondents, right? Does this mean EA is getting smaller??

Comment author: turchin 28 August 2017 10:47:15PM 1 point [-]

I think the title may be technically correct but sounds nasty.

On the nitpicking side, I would argue that the danger of AI weapons mostly depends on their level of intelligence. If it is just narrow AI, that's OK. However, the greater their intelligence, the greater the danger, and it may reach catastrophic levels before superintelligence is created.

I would also add that a superintelligence created by the military may be perfectly aligned, but still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence. And the first step toward not creating military superintelligence is not creating AI weapons.

Comment author: kbog  (EA Profile) 29 August 2017 10:36:33PM *  0 points [-]

I would also add that a superintelligence created by the military may be perfectly aligned, but still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence.

A superintelligence would have the ability and (probably) interest to shape the entire world. Whether it comes from the military, a corporation, or a government, it will have a compelling instrumental motivation to neutralize other superintelligences.

Comment author: Paul_Christiano 29 August 2017 04:50:34AM *  3 points [-]

The two arguments I most often hear are:

  • Cheap autonomous weapons could greatly decrease the cost of ending life: within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest form of mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

  • The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Comment author: kbog  (EA Profile) 29 August 2017 09:11:25AM *  2 points [-]

Cheap autonomous weapons could greatly decrease the cost of ending life: within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest form of mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

That sounds a lot more expensive than bullets. You can already kill someone for a quarter.

If weaponized insect drones become cheap enough for terrorists to build and use, then counterterrorist organizations will be able to acquire large numbers of potent surveillance tools to find and eliminate their centers of fabrication.

You should have a low prior on new military technologies altering the balance of power between different types of groups in any particular way. It hasn't happened much in history, because competing groups can take advantage of these technologies too.

The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Of course the military AIs will be kept secret. But the rest of AI work won't be like that. Cruise liners aren't less safe because militaries are secretive about warship design.

Plus, in Bostrom and Armstrong's Race to the Precipice paper, it is shown that uncertainty about other nations' AI capabilities actually makes things safer.
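
To make that intuition concrete, here is a toy Monte Carlo sketch in the spirit of the result (the model and all parameters are my own illustrative assumptions, not the paper's actual formalism): two teams draw random capabilities, and a team that can see exactly how far behind its rival is keeps only just enough safety margin to stay ahead, while an uninformed team defaults to a prudent fixed level.

```python
import random

random.seed(0)

def disaster_rate(informed, trials=50_000, prudent_safety=0.8):
    """Toy sketch: two teams with capabilities drawn uniformly at random.
    Behavioral assumption (mine, not the paper's): an informed leader
    cuts safety in proportion to how close it sees its rival to be,
    while an uninformed team keeps a prudent fixed safety level."""
    disasters = 0
    for _ in range(trials):
        c1, c2 = random.random(), random.random()
        lead, lag = max(c1, c2), min(c1, c2)
        if informed:
            # Leader keeps just enough safety to stay ahead; the
            # closer the race, the less safety it retains.
            winner_safety = 1 - lag / lead
        else:
            winner_safety = prudent_safety
        if random.random() > winner_safety:  # skimped safety -> disaster
            disasters += 1
    return disasters / trials

informed_risk = disaster_rate(informed=True)
blind_risk = disaster_rate(informed=False)
print(informed_risk, blind_risk)  # informed racing comes out riskier here
```

The numbers are arbitrary; the point is only that under these assumptions, visibility into a rival's capability is exactly what licenses cutting safety, which is the qualitative mechanism the paper formalizes.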

Comment author: Robert_Wiblin 29 August 2017 04:33:31AM 1 point [-]

As Geoffrey suggests below, the 'political economy' (to use the term loosely) of robot armies seems quite bad. See for example the argument here: https://noahpinionblog.blogspot.com/2014/03/the-robot-lords-and-end-of-people-power.html .

Comment author: kbog  (EA Profile) 29 August 2017 09:07:04AM *  1 point [-]

If robots are cheap and effective compared to human soldiers, then the common people can get robots to fight as well.

Remember that F-15s and MRAPs are already far more powerful than anything owned by private citizens, and 600 years ago a man-at-arms was impervious to most peasant weapons. Revolution and civil stability are not about sheer military force.

Comment author: Daniel_Eth 29 August 2017 06:59:02AM 1 point [-]

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where they are acting at that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensue. And I could imagine catastrophes occurring due to errors in machines simply in a "peaceful posture," not to mention that this could be very hard to enforce internationally or even to determine which countries were breaking the rules. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

Comment author: kbog  (EA Profile) 29 August 2017 08:59:30AM *  2 points [-]

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations.

But humans commonly make mistakes on the battlefield. They have training which doesn't perfectly match real-world situations. And they can behave in very unexpected ways. There is no shortage of cases where human soldiers break the laws of war, engage in fratricide (intentional or not), commit war crimes and do other bad things.

Regarding your second point, how would you avoid an arms race to a situation where they are acting at that level?

Well, it's little different from avoiding a race to a situation where AIs are acting on the level of state governors and corporate CEOs; the same general set of cognitive competencies will enable machines to fulfill all of these roles. Several things are possible: AI never outpaces humans in this kind of role, nations agree to maintain human oversight over AI systems at operational or theater levels, or AIs replace humans in all kinds of military leadership roles. I don't think any of these scenarios is necessarily bad. If AI systems are intelligent enough to run theater commands better than humans can, then they will be intelligent enough to know the difference between a border scare and a real war. If they can make a plan to outfight an opposing military force, then they can make a plan to guard against unnecessary escalations.

The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensue.

Why? Automation is not a game changer. Matchlock revolvers were invented in Germany in the 16th century, but it was not until the 19th century that armies widely adopted repeating firearms. Light automatic infantry weapons were developed in WWI but did not become standardized as individual weapons until the early Cold War. The first guided attack missile, the Kettering Bug, was developed in WWI, but iron bombs and unguided rockets were still used widely in Vietnam and more recent wars. Automatic tank loaders have been around for more than half a century and have still not been adopted by most industrialized nations. In the 70 years since the end of WW2, tank crews have shrunk from 6 to 4 and destroyer crews from 350 to 250. Not very fast.

not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules.

We already don't have international rules preventing countries from keeping their forces on high alert on the border. Countries just don't do it because they know that military mobilization and tensions are not to be taken lightly. Having AIs instead of humans wouldn't change this matter.

Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

No such hard cutoff is possible. What does it mean to have human approval to kill - does each potentially lethal shot need to be approved? Or can the human give ordinary fire orders in a tactical situation the way a commander does with his subordinates?

Comment author: geoffreymiller  (EA Profile) 28 August 2017 11:22:19PM 8 points [-]

Fascinating post. I agree that we shouldn't compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they're actually trained to become.

A key issue is the LAWs' chain of commands' legitimacy, and how it's secured.

Mencius Moldbug had some interesting suggestions in Patchwork about how a 'cryptographic chain of command' over LAWs could actually increase the legitimacy and flexibility of governance over lethal force. https://www.amazon.com/dp/B06XG2WNF1

Suppose a state has an armada/horde/flock of formidable LAWS that can potentially destroy or pacify the civilian populace -- an 'invincible robot army'. Who is permitted to issue orders? If the current political leader is voted out of office, but they don't want to leave, and they still have the LAWS 'launch codes', what keeps them from using LAWS to subvert democracy? In a standard human-soldier/secret service agent scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending the would-be dictator. They would literally escort him/her out of the White House. In the LAWs scenario, the soldiers/agents would be helpless against local LAWs under the head of state. The robot army would escort the secret service agents out of the White House until they accept the new dictator.

In other words, I'm not as worried about interstate war or intrastate protests; I'm worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

I guess this is just another example of an alignment problem - in this case between the LAWs and the citizens, with the citizens somehow able to collectively over-rule a dictator's 'launch codes'. Maybe every citizen has their own crypto key, and they do some kind of blockchain vote system about what the LAWs do and whom they obey. This then opens the way to majoritarian mob rule, with LAWs forcibly displacing/genociding targeted minorities -- unless the LAWs embody some 'human/constitutional rights interrupts' that prevent such bullying.

Any suggestions on how to solve this 'chain of command' problem?

Comment author: kbog  (EA Profile) 29 August 2017 02:39:49AM *  2 points [-]

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do. That means we can expect and demand that they have a decent moral compass.

But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

We don't have civilian tanks or civilian fighter jets or lots of other things. Revolutions are almost always asymmetric.

Comment author: Daniel_Eth 29 August 2017 12:51:42AM 3 points [-]

"I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors."

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them that don't always hold, or the training data doesn't quite match the test data. It's probably the case that automated weapons would greatly decrease minor errors, but they could greatly increase the chance of a major error (though this rate might still be small). Consider the 2010 flash crash - the stock market dropped around 10% within minutes, then less than an hour later it bounced back. Why? Because a bunch of algorithms did stuff that we don't really understand while operating under slightly different assumptions than what held in real life. What's the military equivalent of the flash crash? Something like a bunch of autonomous weapons in the US and China starting all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead. The way to avoid this sort of problem is to maintain human oversight, and the best place to draw the line is probably at the decision to kill. Partially autonomous weapons (where someone remotely has to make the decision to kill, or at least approve it) could provide almost all the benefit of fully autonomous weapons - including greatly reduced collateral damage - yet would not have the same risk of possibly leading to a military flash crash.
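
For what it's worth, the feedback-loop worry can be sketched with a toy simulation (the thresholds, noise level, and update rule are all made-up illustrative assumptions): two automated agents each ratchet their posture to just above the rival's observed posture once it crosses a threshold, and noisy sensing alone can tip them from a fully peaceful start into mutual full escalation.

```python
import random

random.seed(1)

def runaway_rate(noise, threshold=0.3, trials=2000, steps=50):
    """Fraction of runs ending in mutual full escalation (posture 1.0)."""
    runaways = 0
    for _ in range(trials):
        a = b = 0.0                              # both sides start peaceful
        for _ in range(steps):
            seen_b = b + random.gauss(0, noise)  # noisy view of the rival
            seen_a = a + random.gauss(0, noise)
            # Ratchet rule: once the rival appears past the threshold,
            # match its observed posture plus a margin; never de-escalate.
            if seen_b > threshold:
                a = min(1.0, max(a, seen_b + 0.1))
            if seen_a > threshold:
                b = min(1.0, max(b, seen_a + 0.1))
        if a >= 1.0 and b >= 1.0:
            runaways += 1
    return runaways / trials

perfect = runaway_rate(noise=0.0)   # perfect sensing: no runaway from peace
noisy = runaway_rate(noise=0.15)    # noisy sensing: runaways with no real provocation
print(perfect, noisy)
```

The numbers are arbitrary; the point is that the failure mode arises from coupled automated decision rules reacting to each other, not from any single system's inaccuracy, which is why drawing a hard line of human approval is proposed as the damper.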

Comment author: kbog  (EA Profile) 29 August 2017 02:36:12AM *  2 points [-]

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them that don't always hold, or the training data doesn't quite match the test data.

The same can be said for humans. And remember that we are looking at AI systems conditional upon them being effective enough to replace people on the battlefield. If they make serious errors much more frequently than people do, then it's unlikely that the military will want to use them.

Something like a bunch of autonomous weapons in US and China starting all out war over some mistake, then stopping just as soon as it started, yet with 100M people dead.

That requires automation not just at the tactical level, but all the way up to the theatre level. I don't think we should have AIs in charge of major military commands, but that's kind of a different issue, and it's not going to happen anytime soon. Plus, it's easy enough to control whether machines are in a defensive posture, offensive posture, peaceful posture, etc. We already have to do this with manned military units.

Comment author: Lee_Sharkey 28 August 2017 06:18:19PM *  4 points [-]

Hey kbog, Thanks for this. I think this is well argued. If I may, I'd like to pick some holes. I'm not sure if they are sufficient to swing the argument the other way, but I don't think they're trivial either.

I'm going to use 'autonomy in weapons systems' in favour of 'LAWs' for reasons argued here (see Takeaway 1).

As far as I can tell, almost all considerations you give are to inter-state conflict. The intra-state consequences are not explored and I think they deserve to be. Fully autonomous weapons systems potentially obviate the need for a mutually beneficial social contract between the regimes in control of the weapons and the populations over which they rule. All dissent becomes easy to crush. This is patently bad in itself, but it also has consequences for interstate conflict; with less approval needed to go to war, inter-state conflict may increase.

The introduction of weapons systems with high degrees of autonomy poses an arguably serious risk of geopolitical turbulence: it is not clear that all states will develop the capability to produce highly autonomous weapons systems. Those that do not will have to purchase them from technologically more advanced allies willing to sell them. States that find themselves outside of such alliances will be highly vulnerable to attack. This may motivate a nontrivial reshuffling of global military alliances, the outcomes of which are hard to predict. For those without access to these new powerful weapons, one risk mitigation strategy is to develop nuclear weapons, potentially motivating nuclear proliferation.

On your point:

The logic here is a little bit gross, since it's saying that we should make sure that ordinary soldiers like me die for the sake of the greater good of manipulating the political system and it also implies that things like body armor and medics should be banned from the battlefield, but I won't worry about that here because this is a forum full of consequentialists and I honestly think that consequentialist arguments are valid anyway.

My argument here isn't hugely important, but I take some issue with the analogies. I prefer thinking in terms of both actors agreeing on an acceptable level of vulnerability in order to reduce the risk of conflict. In this case, a better analogy is to the Cold War agreement not to build comprehensive ICBM defenses, an analogy which would come out in favour of limiting autonomy in weapons systems. But neither of us is placing much importance on this point overall.

I'd like to unpack this point a little bit:

Third, you might say that LAWs will prompt an arms race in AI, reducing safety. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity's progress and expansion towards a future with exponentially growing value. Moreover, there is already substantial AI development in civilian sectors as well as non-battlefield military use, and all of these things have competitive dynamics. AGI would have such broad applications that restricting its use in one or two domains is unlikely to make a large difference; after all, economic power is the source of all military power, and international public opinion has nontrivial importance in international relations, and AI can help nations beat their competitors at both.

I believe discourse on AI risks often conflates 'AI arms race' with 'race to the finish'. While these races are certainly linked, and therefore the conflation justified in some senses, I think it trips up the argument in this case. In an AI arms race, we should be concerned about the safety of non-AGI systems, which may be neglected in an arms race scenario. This weakens the argument that highly autonomous weapons systems might lead to fewer civilian casualties, as this is likely the sort of safety measure that might be neglected when racing to develop weapons systems capable of out-doing the ever more capable weapons of one's rival.

The second sentence only holds if the safety issue is solved, so I don't accept the argument that it will help humanity reach a future exponentially growing in value (at least insofar as we're talking about the long run future, as there may be some exponential progress in the near-term).

It could simply be my reading, but I'm not entirely clear on the point made across the third and fourth sentences, and I don't think they give a compelling case that we shouldn't try to avoid military application or avoid exacerbating race dynamics.

Lastly, while I think you've given a strong case to soften opposition to advancing autonomy in weapons systems, the argument against any regulation of these weapons hasn't been made. Not all actors seek outright bans, and I think it'd be worth acknowledging that (contrary to the title) there are some undesirable things with highly autonomous weapons systems and that we should like to impose some regulations on them such as, for example, some minimum safety requirements that help reduce civilian casualties.

Overall, I think the first point I made should cause serious pause, and it's the largest single reason I don't agree with your overall argument, as many good points as you make here.

(And to avoid any suspicions: despite arguing on his side, coming from the same city, and having the same rare surname, I am of no known relation to Noel Sharkey of the Stop Killer Robots Campaign, though I confess a pet goal to meet him for a pint one day.)

Comment author: kbog  (EA Profile) 28 August 2017 10:09:41PM *  2 points [-]

Hmm, everything that I mentioned applies to interstate conflict, but they don't all only apply to interstate conflict. Intrastate conflicts might be murkier and harder to analyze, and I think they are something to be looked at, but I'm not sure how much it would modify the main points. The assumptions of the expected utility theory of conflict do get invalidated.

Fully autonomous weapons systems potentially obviate the need for a mutually beneficial social contract between the regimes in control of the weapons and the populations over which they rule. All dissent becomes easy to crush.

Well, firstly, I am of the opinion that most instances of violent resistance against governments in history were unjustified, and that a general reduction in revolutionary violence would do more good than harm. Peaceful resistance is more effective at political change than violent resistance anyway (https://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201404/violent-versus-nonviolent-revolutions-which-way-wins). You could argue that governments will become more oppressive and less responsive to peaceful resistance if they have better security against hypothetical revolutions, though I don't have a large expectation for this to happen, at least in the first world.

Second, this doesn't have much to do with autonomous weapons in particular. It applies to all methods by which the government can suppress dissent, all military and police equipment.

Third, lethal force is a small and rare part of suppressing protests and dissent as long as full-fledged rebellion doesn't break out. Modern riot police are equipped with nonlethal weapons; we can expect that any country with the ability to deploy robots would have professional capabilities for riot control and the deployment of nonlethal weapons. And crowd control is based more on psychology and appearances than application of kinetic force.

Finally, even when violent rebellion does break out, nonstate actors such as terrorists and rebels are outgunned anyway. Governments trying to pacify rebellions need to work with the local population, gather intelligence, and assert their legitimacy in the eyes of the populace. Lethal autonomous weapons are terrible for all of these things. They would be very good for the application of quick precise firepower at low risk to friendly forces, but that is far from the greatest problem faced by governments seeking to suppress dissent.

The one thing that implies that rebellion would become less frequent in a country with LAWs is that an army of AGI robots could allow leadership to stop a rebellion without worrying about the loyalty of police and soldiers. By that time, probably we should just make sure that machines have ethical guidelines not to kill their own people, support evil governments and similar things. I can see this being a problem, but it's a little too far out and speculative to make plans around it.

This is patently bad in itself, but it also has consequences for interstate conflict; with less approval needed to go to war, inter-state conflict may increase.

The opposite is at least as likely. Nations often go to war in order to maintain legitimacy in the eyes of the population. Argentina's Falklands venture was a good example of this 'diversionary foreign policy' (https://en.wikipedia.org/wiki/Diversionary_foreign_policy).

The introduction of weapons systems with high degrees of autonomy poses an arguably serious risk of geopolitical turbulence: it is not clear that all states will develop the capability to produce highly autonomous weapons systems. Those that do not will have to purchase them from technologically more advanced allies willing to sell them. States that find themselves outside of such alliances will be highly vulnerable to attack. This may motivate a nontrivial reshuffling of global military alliances, the outcomes of which are hard to predict.

How would AI be any different here from other kinds of technological progress? And I don't think that the advent of new military technology has major impacts on geopolitical alliances. I actually cannot think of a case where alliances shifted because of new military technology. Military exports and license production are common among non-allies, and few alliances lack advanced industrial powers; right now there are very few countries in the world which are not on good enough terms with at least one highly developed military power to buy weapons from them.

In an AI arms race, we should be concerned about the safety of non-AGI systems, which may be neglected in an arms race scenario. This weakens the argument that highly autonomous weapons systems might lead to fewer civilian casualties, as this is likely the sort of safety measure that might be neglected when racing to develop weapons systems capable of out-doing the ever more capable weapons of one's rival.

But the same dynamic is present when nations compete with non-AI weapons. The demand for potent firepower implies that systems will cause collateral damage and that soldiers will not be as trained or disciplined on ROE as they could be.

The second sentence only holds if the safety issue is solved, so I don't accept the argument that it will help humanity reach a future exponentially growing in value (at least insofar as we're talking about the long run future, as there may be some exponential progress in the near-term).

Well, of course nothing matters if there is an existential catastrophe. But you can't go into this with the assumption that AI will cause an existential catastrophe. It likely won't, and in all those scenarios, quicker AI development is likely better. Does this mean that AI should be developed quicker, all-things-considered? I don't know, I'm just saying that overall it's not clear that it should be developed more slowly.

It could simply be my reading, but I'm not entirely clear on the point made across the third and fourth sentences, and I don't think they give a compelling case that we shouldn't try to avoid military application or avoid exacerbating race dynamics.

I just mean that military use is a comparatively small part of the overall pressure towards quicker AI development.

Lastly, while I think you've given a strong case to soften opposition to advancing autonomy in weapons systems, the argument against any regulation of these weapons hasn't been made. Not all actors seek outright bans, and I think it'd be worth acknowledging that (contrary to the title) there are some undesirable things with highly autonomous weapons systems and that we should like to impose some regulations on them such as, for example, some minimum safety requirements that help reduce civilian casualties.

There are things that are wrong with AI weapons in that they are, after all, weapons, and there is always something wrong with weapons. But I think there is nothing that makes AI weapons overall worse than ordinary ones.

I don't think that regulating them is necessarily bad. I did say at the end that testing, lobbying, international watchdogs, etc are the right direction to go in. I haven't thought this through, but my first instinct is to say that autonomous systems should simply follow all the same regulations and laws that soldiers do today. Whenever a nation ratifies an international treaty on military conduct, such as the Geneva Convention, its norms should apply to autonomous systems as well as soldiers. That sounds sufficient to me, at first glance.
