by kbog
Aug 28 2017

By Kyle Bogosian

With all the recent worries over AI risks, a lot of people have raised fears about lethal autonomous weapons (LAWs), which would take the place of soldiers on the battlefield. Most recently in the news, Elon Musk and over 100 experts asked the UN to implement a ban: https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war

However, we should not dedicate efforts towards this goal. I don't know if anyone in the Effective Altruist community has, but I have seen many people talk about it, and I have seen FLI dedicate nontrivial effort to aggregating and publishing views against the use of LAWs. I don't think we should engage in any of these activities to try to stop the implementation of LAWs, so first I will answer worries about the dangers of LAWs, and then I will point out a benefit.

The first class of worries is that it is morally wrong to kill someone with an LAW - specifically, that it is more morally wrong than killing someone in a different way. These nonconsequentialist arguments hold that the badness of death has something to do with factors other than the actual suffering and deprivation caused to the victim, the victim's family, or society at large. There is a lot of philosophical literature on this issue, generally relating to the idea that machines don't have the same agency, moral responsibility, or moral judgement that humans do, or something of the sort. I'm going to mostly assume that people here aren't persuaded by these philosophical arguments in the first place, because this is a lazy forum post, it would take a lot of time to read and answer all the arguments on this topic, and most people here are consequentialists.

I will say one thing though, which hasn't been emphasized before and which undercuts many of the arguments alleging that death by AI is intrinsically immoral: in contrast to the typical philosopher's abstract understanding of killing in war, soldiers do not kill after some kind of pure process of ethical deliberation which demonstrates that they are acting morally. Soldiers learn to fight as a mechanical procedure, with the motivation of protection and success on the battlefield, and their ethical standard is to follow orders as long as those orders are lawful. Infantry soldiers often don't target individual enemies; rather, they lay down suppressive fire upon enemy positions and use weapons with a large area of effect, such as machine guns and grenades. They don't think about each kill in ethical terms; they just memorize their Rules of Engagement - an algorithm that determines when you can and can't use deadly force against another human. Furthermore, military operations involve large systems in which it is difficult to identify any single person who is responsible for a kinetic effect. In artillery bombardments, for instance, an officer in the field will order his artillery observer to request fire support, or will request it himself, based on observation of enemy positions that may be informed by prior intelligence analysis done by others. The requested coordinates are checked by a fire direction center to avoid collateral damage and fratricide, and if approved, the firing angle is relayed to the gun line. The gun crews carry out the request. Permissions and procedures for this process are laid out beforehand. At no point does one person sit down and carry out philosophical deliberation on whether the killing is moral - it is just a series of people doing their individual jobs, making sure that a bunch of things are being done correctly. The system as a whole looks just as grand and impersonal as automatic weaponry does. (I speak from experience, having served in a field artillery unit.)
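To make the "ROE as an algorithm" point concrete, here is a purely illustrative sketch in code; the function name, inputs, and checks are invented for illustration, and real rules of engagement are classified, situation-specific, and far more detailed than this:

    # Purely illustrative: a toy model of the kind of memorized checklist that
    # governs the use of deadly force. Not any real ROE card or doctrine.
    def may_engage(positive_id: bool,
                   hostile_act_or_intent: bool,
                   force_is_proportional: bool,
                   collateral_risk_acceptable: bool) -> bool:
        """Each input is a trained judgment call made by a soldier or a fire
        direction center; no step involves philosophical deliberation about
        the ethics of killing in general."""
        return (positive_id
                and hostile_act_or_intent
                and force_is_proportional
                and collateral_risk_acceptable)

The point is not that this captures real doctrine, but that the decision procedure soldiers actually use already has this checklist-like character.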

When someone in the military screws up and gets innocents killed, the blame often falls upon the commander who had improper procedures in place, not some individual who lost his moral compass. This implies that there is no problem with the attribution of responsibility for an LAW screwing up: it will likewise go to the engineer/programmer who had improper procedures in place. So if killing by AI is immoral because of the lack of individual moral responsibility or the lack of moral deliberation, then killing by soldiers is not really any better and we shouldn't care about replacing one with the other.

So, on we go to the consequential harms of LAWs.

First, there is the worry that LAWs will make war more frequent, since nations won't have to worry about losing soldiers, thereby increasing civilian deaths. This worry is attributed to unnamed experts in the Guardian article linked above. The logic here is a little bit gross, since it's saying that we should make sure that ordinary soldiers like me die for the sake of the greater good of manipulating the political system, and it also implies that things like body armor and medics should be banned from the battlefield. But I won't worry about that here, because this is a forum full of consequentialists and I honestly think that consequentialist arguments are valid anyway.

But the argument assumes that the loss of machines is not an equal cost to governments. If nations are indifferent to whether their militaries have soldiers or equally competent machines, then the machines have the same cost as soldiers, so there will be no difference in the expected utility of warfare. If machine armies are better than human soldiers, but also more expensive overall, and nations just care about security and economic costs, then it seems that nations will go to war less frequently, in order to preserve their expensive and better-than-human machines. However, you might believe (with good reason) that nations respond disproportionately to the loss of life on the battlefield, will go to great lengths to avoid it, and will end up with a system that enables them to go to war for less overall cost.

Well, in undergrad I wrote a paper on the expected utility of war (https://docs.google.com/document/d/1eGzG4la4a96ueQl-uJD03voXVhsXLrbUw0UDDWbSzJA/edit?usp=sharing). Assuming Eckhardt's (1989) figure of a 50% civilian casualty ratio (https://en.wikipedia.org/wiki/Civilian_casualty_ratio), I found that the proliferation of robots on the battlefield would only increase total casualties if nations considered the difference between losing human armies in wartime and losing comparable machines to be more than 1/3 of the total costs of war. Otherwise, robots on the battlefield would decrease total casualties. It seems to me like it could go either way, particularly with robot weapons having a more positive impact in wars of national security and a more negative impact in wars of foreign intervention and peacekeeping. While I can't demonstrate that robotic weapons will reduce the total amount of death and destruction caused by war, there is not a clear case that robot weapons would increase total casualties, which is what you would need to show to provide a reason for us to work against them.
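To make the structure of that comparison explicit, here is a deliberately stylized sketch with my own simplifying assumptions, not the exact model from the linked paper. Let C be the total cost of a war as perceived by a nation, and let Δ be the portion of C that comes from losing human soldiers rather than comparable machines. If the frequency of war scales inversely with its perceived cost, then full automation multiplies war frequency by roughly C/(C−Δ) while, given a 50% civilian casualty ratio, roughly halving the deaths per war. Under those assumptions, total deaths rise only when

    \frac{1}{2}\cdot\frac{C}{C-\Delta} > 1 \quad\Longleftrightarrow\quad \frac{\Delta}{C} > \frac{1}{2},

i.e., only if the human-loss component is a large fraction of the total cost of war. The 1/3 threshold above comes from the fuller model in the linked paper; the toy version here is just meant to show the shape of the argument.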

There is also a flaw in the logic of this argument, which is the fact that it applies equally well to some other methods of waging war. In particular, having a human remotely control a military vehicle would have the same impact here as having a fully autonomous military vehicle. So if LAWs were banned, but robot technology turned out to be pretty good and governments wanted to protect soldiers' lives, we would have a similar result.

Second, there is the worry that autonomous weapons will make tense military situations between non-belligerent nations less stable and more escalatory, prompting new outbreaks of war. I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors. They do have faster response times - cutting humans out of the loop causes actions to happen faster, enabling a quicker outbreak of violence and escalation of tactical situations. However, the flip side of this is that having humans not be present in these kinds of situations implies that outbreaks of violence will have less political sting and therefore more chance of ending with a peaceful solution. A country can always be compensated for lost machinery through diplomatic negotiations and financial concessions; the same cannot be said for lost soldiers.

Third, you might say that LAWs will prompt an arms race in AI, reducing safety. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity's progress and expansion towards a future with exponentially growing value. Moreover, there is already substantial AI development in civilian sectors as well as non-battlefield military use, and all of these things have competitive dynamics. AGI would have such broad applications that restricting its use in one or two domains is unlikely to make a large difference; after all, economic power is the source of all military power, international public opinion has nontrivial importance in international relations, and AI can help nations beat their competitors at both.

Moreover, no military is currently at the cutting edge of AI or machine learning (as far as we can tell). The top research is done in academia and the tech industry; militaries all over the world are just trying to adopt existing techniques for their own use, and don't have the best talent to do so. Finally, if there is in fact a security dilemma regarding AI weaponry, then activism to stop it is unlikely to be fruitful. The literature on the utility of arms control in international relations is mixed to say the least; it seems to work only as long as the weapons are not actually vital for national security.

Finally, one could argue that the existence of LAWs makes it possible for hackers such as an unfriendly advanced AI agent to take charge of them and use them for bad ends. However, in the long run a very advanced AI system would have many tools at its disposal for capturing global resources, such as social manipulation, hacking, nanotechnology, biotechnology, building its own robots, and things which are beyond current human knowledge. A superintelligent agent would probably not be limited by human precautions; making the world as a whole less vulnerable to ASI is not a commonly suggested strategy for AI safety, since we assume that once it gets onto the internet, there's not really anything that can be done to stop it. Plus, it's foolish to assume that an AI system with battlefield capabilities, which is just as good at general reasoning as the humans it replaced, would be vulnerable to a simple hack or takeover in a way that humans aren't. If a machine can perform complex computations and inference regarding military rules, its duties on the battlefield, and the actions it can take, then it's likely to have the same intrinsic resistance and skepticism about strange and apparently unlawful orders that human soldiers do. Our mental model of the LAWs of the far future should not be something like a calculator with easy-to-access buttons or a computer with a predictable response to adversarial inputs.

And in the near run, more autonomy would not necessarily make things any less secure than they are with many other technologies which we currently rely on. A fighter jet has electronics, as does a power plant. Lots of things can theoretically be hacked, and hacking an LAW to cause some damage isn't necessarily any worse than hacking infrastructure or a manned vehicle. Just replace the GPS coordinates in a JDAM bomb package and you've already figured out how to use our existing equipment to deliberately cause many civilian casualties. Things like this don't happen often, however, because military equipment is generally well hardened and difficult to access in comparison to civilian equipment.

And this brings me to a counterpoint in favor of LAWs. Military equipment is generally more robust than civilian equipment, and putting AI systems in tense situations, where many ethics panels and international watchdogs are present, is a great way to test their safety and reliability. Nowhere will the requirements of safety, reliability, and ethics be more stringent than in machines whose job it is to take human life. The more development and testing militaries conduct in this regard, the more room there is for collaboration, testing, and lobbying for safety and for beneficial standards of ethics that can be applied to many types of AI systems elsewhere in society. We should be involved in this latter process, not in a foolhardy dream of banning valuable weaponry.

edit: I forgot that disclosures are popular around here. I just started to work on a computer science research proposal for the Army Research Office. But that doesn't affect my opinions here, which have been the same for a while.

Comments

The two arguments I most often hear are:

  • Cheap autonomous weapons could greatly decrease the cost of ending life---within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

  • The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Cheap autonomous weapons could greatly decrease the cost of ending life---within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

That sounds a lot more expensive than bullets. You can already kill someone for a quarter.

If weaponized insect drones become cheap enough for terrorists to build and use, then counterterrorist organizations will be able to acquire large numbers of potent surveillance tools to find and eliminate their centers of fabrication.

You should have a low prior that new military technologies will alter the balance of power between different types of groups in a specific way. It hasn't happened much in history, because competing groups can take advantage of these technologies too.

The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Of course the military AIs will be kept secret. But the rest of AI work won't be like that. Cruise liners aren't less safe because militaries are secretive about warship design.

Plus, in Bostrom and Armstrong's Racing to the Precipice paper, it is shown that uncertainty about other nations' AI capabilities actually makes things safer.

That sounds a lot more expensive than bullets. You can already kill someone for a quarter.

The main cost of killing someone with a bullet is labor. The point is that autonomous weapons reduce the labor required.

alter the balance of power between different types of groups in a specific way.

New technologies do often decrease the cost of killing people and increase the number of civilians who can be killed by a group of fixed size (see: guns, explosives, nuclear weapons).

Fascinating post. I agree that we shouldn't compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they're actually trained to become.

A key issue is the legitimacy of the LAWs' chain of command, and how that legitimacy is secured.

Mencius Moldbug had some interesting suggestions in Patchwork about how a 'cryptographic chain of command' over LAWs could actually increase the legitimacy and flexibility of governance over lethal force. https://www.amazon.com/dp/B06XG2WNF1

Suppose a state has an armada/horde/flock of formidable LAWS that can potentially destroy or pacify the civilian populace -- an 'invincible robot army'. Who is permitted to issue orders? If the current political leader is voted out of office, but they don't want to leave, and they still have the LAWS 'launch codes', what keeps them from using LAWS to subvert democracy? In a standard human-soldier/secret service agent scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending the would-be dictator. They would literally escort him/her out of the White House. In the LAWs scenario, the soldiers/agents would be helpless against local LAWs under the head of state. The robot army would escort the secret service agents out of the White House until they accept the new dictator.

In other words, I'm not as worried about interstate war or intrastate protests; I'm worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

I guess this is just another example of an alignment problem - in this case between the LAWs and the citizens, with the citizens somehow able to collectively overrule a dictator's 'launch codes'. Maybe every citizen has their own crypto key, and they do some kind of blockchain vote system about what the LAWs do and who they obey. This then opens the way to majoritarian mob rule, with LAWs forcibly displacing/genociding targeted minorities -- or the LAWs must embody some 'human/constitutional rights interrupts' that prevent such bullying.
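(To make the 'crypto key' idea concrete, here is a toy sketch of the kind of k-of-n authorization scheme being gestured at; the names and threshold are invented for illustration, and it ignores the hard parts: key distribution, coercion, and verifying what an order actually does.)

    # Toy sketch (invented names and threshold): an order is only valid if a
    # sufficiently large fraction of recognized key-holders approve it.
    from typing import Set

    def order_authorized(approvals: Set[str],
                         keyholders: Set[str],
                         threshold_fraction: float = 0.5) -> bool:
        """True if strictly more than threshold_fraction of the recognized
        key-holders have approved; signatures from unknown keys are ignored."""
        valid = approvals & keyholders
        return len(valid) > threshold_fraction * len(keyholders)

    # Example: a head of state acting alone cannot authorize force.
    citizens = {f"citizen_{i}" for i in range(1000)} | {"head_of_state"}
    print(order_authorized({"head_of_state"}, citizens))  # False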

Any suggestions on how to solve this 'chain of command' problem?

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do. That means we can expect and demand that they have a decent moral compass.

But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

We don't have civilian tanks or civilian fighter jets or lots of other things. Revolutions are almost always asymmetric.

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do.

Couldn't it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans without having any one of them have general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.

In that case, the intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.

If they are narrow in focus, then it might be easier to provide ethical guidance over their scope of operations.

"I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors."

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data. It's probably the case that automated weapons would greatly decrease minor errors, but they could greatly increase the chance of a major error (though this rate might still be small). Consider the 2010 flash crash - the stock market dropped around 10% within minutes, then less than an hour later it bounced back. Why? Because a bunch of algorithms did stuff that we don't really understand while operating under slightly different assumptions than what happened in real life. What's the military equivalent of the flash crash? Something like a bunch of autonomous weapons in the US and China starting an all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead. The way to avoid this sort of problem is to maintain human oversight, and the best place to draw the line is probably at the decision to kill. Partially autonomous weapons (where someone remotely has to make a decision to kill, or at least approve the decision) could provide almost all the benefit of fully autonomous weapons - including greatly reduced collateral damage - yet would not have the same risk of possibly leading to a military flash crash.

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data.

The same can be said for humans. And remember that we are looking at AI systems conditional upon them being effective enough to replace people on the battlefield. If they make serious errors much more frequently than people do, then it's unlikely that the military will want to use them.

Something like a bunch of autonomous weapons in the US and China starting an all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead.

That requires automation not just at the tactical level, but all the way up to the theatre level. I don't think we should have AIs in charge of major military commands, but that's kind of a different issue, and it's not going to happen anytime soon. Plus, it's easy enough to control whether machines are in a defensive posture, offensive posture, peaceful posture, etc. We already have to do this with manned military units.

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where they are acting in that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensure. And I could imagine catastrophes occurring due to errors in machines simply in "peaceful posture," not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations.

But humans commonly make mistakes on the battlefield. They have training which doesn't perfectly match real-world situations. And they can behave in very unexpected ways. There is no shortage of cases where human soldiers break the laws of war, engage in fratricide (intentional or not), commit war crimes and do other bad things.

Regarding your second point, how would you avoid an arms race to a situation where they are acting at that level?

Well it's little different from avoiding a race to a situation where AIs are acting on the level of state governors and corporate CEOs; the same general set of cognitive competencies will enable machines to fulfill all of these roles. Several things are possible - AI never outpaces humans in this kind of role, nations agree to maintain human oversight over AI systems at operational or theater levels, or AIs replace humans in all kinds of military leadership roles. I don't think any of these scenarios is necessarily bad. If AI systems will be intelligent enough to run theater commands better than humans can, then they will be intelligent enough to know the difference between a border scare and a real war. If they can make a plan to outfight an opposing military force then they can make a plan to prepare themselves against unnecessary escalations.

The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensue.

Why? Automation is not a game changer. Matchlock revolvers were invented in Germany in the 16th century, but it was not until the 19th century that armies had widespread adoption of repeating firearms. Light automatic infantry weapons were developed in WWI but did not become standardized as individual weapons until the early Cold War. The first guided attack missile, the Kettering Bug, was developed in WWI, but iron bombs and unguided rockets were still used widely in military combat in Vietnam and more recent wars. Automatic tank loaders have been around for more than half a century and still have yet to be adopted by the majority of industrialized nations. In the 70 years since the end of WW2, tank crews have been reduced from 6 to 4 and destroyer crews from 350 to 250. Not very fast.

not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules.

We already don't have international rules preventing countries from keeping their forces on high alert on the border. Countries just don't do it because they know that military mobilization and tensions are not to be taken lightly. Having AIs instead of humans wouldn't change this matter.

Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

No such hard cutoff is possible. What does it mean to have human approval to kill - does each potentially lethal shot need to be approved? Or can the human give ordinary fire orders in a tactical situation the way a commander does with his subordinates?

Hey kbog, Thanks for this. I think this is well argued. If I may, I'd like to pick some holes. I'm not sure if they are sufficient to swing the argument the other way, but I don't think they're trivial either.

I'm going to use 'autonomy in weapons systems' instead of 'LAWs' for reasons argued here (see Takeaway 1).

As far as I can tell, almost all the considerations you give concern inter-state conflict. The intra-state consequences are not explored and I think they deserve to be. Fully autonomous weapons systems potentially obviate the need for a mutually beneficial social contract between the regimes in control of the weapons and the populations over which they rule. All dissent becomes easy to crush. This is patently bad in itself, but it also has consequences for interstate conflict; with less approval needed to go to war, inter-state conflict may increase.

The introduction of weapons systems with high degrees of autonomy poses an arguably serious risk of geopolitical turbulence: it is not clear that all states will develop the capability to produce highly autonomous weapons systems. Those that do not will have to purchase them from technologically-more advanced allies willing to sell them. States that find themselves outside of such alliances will be highly vulnerable to attack. This may motivate a nontrivial reshuffling of global military alliances, the outcomes of which are hard to predict. For those without access to these new powerful weapons, one risk mitigation strategy is to develop nuclear weapons, potentially motivating nuclear proliferation.

On your point:

The logic here is a little bit gross, since it's saying that we should make sure that ordinary soldiers like me die for the sake of the greater good of manipulating the political system, and it also implies that things like body armor and medics should be banned from the battlefield. But I won't worry about that here, because this is a forum full of consequentialists and I honestly think that consequentialist arguments are valid anyway.

My argument here isn't hugely important but I take some issue with the analogies. I prefer thinking in terms of both actors agreeing on acceptable level of vulnerability in order to reduce the risk of conflict. In this case, a better analogy is to the Cold War agreement not to build comprehensive ICBM defenses, an analogy which would come out in favour of limiting autonomy in weapons systems. But neither of us are placing much importance on this point overall.

I'd like to unpack this point a little bit:

Third, you might say that LAWs will prompt an arms race in AI, reducing safety. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity's progress and expansion towards a future with exponentially growing value. Moreover, there is already substantial AI development in civilian sectors as well as non-battlefield military use, and all of these things have competitive dynamics. AGI would have such broad applications that restricting its use in one or two domains is unlikely to make a large difference; after all, economic power is the source of all military power, international public opinion has nontrivial importance in international relations, and AI can help nations beat their competitors at both.

I believe discourse on AI risks often conflates 'AI arms race' with 'race to the finish'. While these races are certainly linked, and therefore the conflation justified in some senses, I think it trips up the argument in this case. In an AI arms race, we should be concerned about the safety of non-AGI systems, which may be neglected in an arms race scenario. This weakens the argument that highly autonomous weapons systems might lead to fewer civilian casualties, as this is likely the sort of safety measure that might be neglected when racing to develop weapons systems capable of out-doing the ever more capable weapons of one's rival.

The second sentence only holds if the safety issue is solved, so I don't accept the argument that it will help humanity reach a future exponentially growing in value (at least insofar as we're talking about the long run future, as there may be some exponential progress in the near-term).

It could simply be my reading, but I'm not entirely clear on the point made across the third and fourth sentences, and I don't think they give a compelling case that we shouldn't try to avoid military application or avoid exacerbating race dynamics.

Lastly, while I think you've given a strong case to soften opposition to advancing autonomy in weapons systems, the argument against any regulation of these weapons hasn't been made. Not all actors seek outright bans, and I think it'd be worth acknowledging that (contrary to the title) there are some undesirable things with highly autonomous weapons systems and that we should like to impose some regulations on them such as, for example, some minimum safety requirements that help reduce civilian casualties.

Overall, I think the first point I made should cause serious pause, and it's the largest single reason I don't agree with your overall argument, as many good points as you make here.

(And to avoid any suspicions: despite arguing on his side, coming from the same city, and having the same rare surname, I am of no known relation to Noel Sharkey of the Stop Killer Robots Campaign, though I confess a pet goal to meet him for a pint one day.)

Hmm, everything that I mentioned applies to interstate conflict, but not all of it applies only to interstate conflict. Intrastate conflicts might be murkier and harder to analyze, and I think they are something to be looked at, but I'm not sure how much that would modify the main points. The assumptions of the expected utility theory of conflict do get invalidated.

Fully autonomous weapons systems potentially obviate the need for a mutually beneficial social contract between the regimes in control of the weapons and the populations over which they rule. All dissent becomes easy to crush.

Well, firstly, I am of the opinion that most instances of violent resistance against governments in history were unjustified, and that a general reduction in revolutionary violence would do more good than harm. Peaceful resistance is more effective at political change than violent resistance anyway (https://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201404/violent-versus-nonviolent-revolutions-which-way-wins). You could argue that governments will become more oppressive and less responsive to peaceful resistance if they have better security against hypothetical revolutions, though I don't have a large expectation for this to happen, at least in the first world.

Second, this doesn't have much to do with autonomous weapons in particular. It applies to all methods by which the government can suppress dissent, all military and police equipment.

Third, lethal force is a small and rare part of suppressing protests and dissent as long as full-fledged rebellion doesn't break out. Modern riot police are equipped with nonlethal weapons; we can expect that any country with the ability to deploy robots would have professional capabilities for riot control and the deployment of nonlethal weapons. And crowd control is based more on psychology and appearances than application of kinetic force.

Finally, even when violent rebellion does break out, nonstate actors such as terrorists and rebels are outgunned anyway. Governments trying to pacify rebellions need to work with the local population, gather intelligence, and assert their legitimacy in the eyes of the populace. Lethal autonomous weapons are terrible for all of these things. They would be very good for the application of quick precise firepower at low risk to friendly forces, but that is far from the greatest problem faced by governments seeking to suppress dissent.

The one thing that implies that rebellion would become less frequent in a country with LAWs is that an army of AGI robots could allow leadership to stop a rebellion without worrying about the loyalty of police and soldiers. By that time, we should probably just make sure that machines have ethical guidelines against killing their own people, supporting evil governments, and similar things. I can see this being a problem, but it's a little too far out and speculative to make plans around it.

This is patently bad in itself, but it also has consequences for interstate conflict; with less approval needed to go to war, inter-state conflict may increase.

The opposite is at least as likely. Nations often go to war in order to maintain legitimacy in the eyes of the population. Argentina's Falklands venture was a good example of this 'diversionary foreign policy' (https://en.wikipedia.org/wiki/Diversionary_foreign_policy).

The introduction of weapons systems with high degrees of autonomy poses an arguably serious risk of geopolitical turbulence: it is not clear that all states will develop the capability to produce highly autonomous weapons systems. Those that do not will have to purchase them from technologically-more advanced allies willing to sell them. States that find themselves outside of such alliances will be highly vulnerable to attack. This may motivate a nontrivial reshuffling of global military alliances, the outcomes of which are hard to predict.

How would AI be any different here from other kinds of technological progress? And I don't think that the advent of new military technology has major impacts on geopolitical alliances. I actually cannot think of a case where alliances shifted because of new military technology. Military exports and license production are common among non-allies, and few alliances lack advanced industrial powers; right now there are very few countries in the world which are not on good enough terms with at least one highly developed military power to buy weapons from them.

In an AI arms race, we should be concerned about the safety of non-AGI systems, which may be neglected in an arms race scenario. This weakens the argument that highly autonomous weapons systems might lead to fewer civilian casualties, as this is likely the sort of safety measure that might be neglected when racing to develop weapons systems capable of out-doing the ever more capable weapons of one's rival.

But the same dynamic is present when nations compete with non-AI weapons. The demand for potent firepower implies that systems will cause collateral damage and that soldiers will not be as trained or disciplined on ROE as they could be.

The second sentence only holds if the safety issue is solved, so I don't accept the argument that it will help humanity reach a future exponentially growing in value (at least insofar as we're talking about the long run future, as there may be some exponential progress in the near-term).

Well, of course nothing matters if there is an existential catastrophe. But you can't go into this with the assumption that AI will cause an existential catastrophe. It likely won't, and in all those scenarios, quicker AI development is likely better. Does this mean that AI should be developed quicker, all-things-considered? I don't know, I'm just saying that overall it's not clear that it should be developed more slowly.

It could simply be my reading, but I'm not entirely clear on the point made across the third and fourth sentences, and I don't think they give a compelling case that we shouldn't try to avoid military application or avoid exacerbating race dynamics.

I just mean that military use is a comparatively small part of the overall pressure towards quicker AI development.

Lastly, while I think you've given a strong case to soften opposition to advancing autonomy in weapons systems, the argument against any regulation of these weapons hasn't been made. Not all actors seek outright bans, and I think it'd be worth acknowledging that (contrary to the title) there are some undesirable things with highly autonomous weapons systems and that we should like to impose some regulations on them such as, for example, some minimum safety requirements that help reduce civilian casualties.

There are things that are wrong with AI weapons in that they are, after all, weapons, and there is always something wrong with weapons. But I think there is nothing that makes AI weapons overall worse than ordinary ones.

I don't think that regulating them is necessarily bad. I did say at the end that testing, lobbying, international watchdogs, etc are the right direction to go in. I haven't thought this through, but my first instinct is to say that autonomous systems should simply follow all the same regulations and laws that soldiers do today. Whenever a nation ratifies an international treaty on military conduct, such as the Geneva Convention, its norms should apply to autonomous systems as well as soldiers. That sounds sufficient to me, at first glance.

This is a really interesting write-up and it definitely persuaded me quite a bit. One thing I see coming up in the answers a lot, at least with Geoffrey Miller and Lee Sharkey, is that resistance to abuses of power does not involve matching firepower with firepower but instead can be done by civil resistance. I've read a good bit of the literature on civil resistance movements (people like Erica Chenoweth and Sidney Tarrow), and my impression is that LAWs could hinder the ability to resist civilly as well. For one, Erica Chenoweth makes a big point about how one of the key mechanisms of success for a civil resistance movement is getting members of the regime you're challenging to defect. If the essential members are robots, this seems like a much more difficult task. Sure, you can try to build in some alignment mechanism, but that seems like a risky bet. More generally, noncompliance with the roles people are expected to play is a large part of what makes civil resistance work. Movements grow to encompass people who the regime depends upon, and those people gum up the works. Again, couldn't robots take away the possibility of doing this?

Also, I think I am one of the people who was talking about this recently in my blog post last week on the subject, and I posted an update today that I've moved somewhat away from my original position, in part because of the thoughtful responses of people like you.

I think the title may be technically correct, but it sounds nasty.

On the nitpicking side, I would argue that the problem with AI weapons mostly depends on their level of intelligence. If it is just narrow AI, OK. However, the greater their intelligence, the greater the danger, and it may reach catastrophic levels before superintelligence is created.

I would also add that a superintelligence created by the military may be perfectly aligned, but still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence. And the first step toward not creating military superintelligence is not creating AI weapons.

I would also add that a superintelligence created by the military may be perfectly aligned, but still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence.

A superintelligence would have the ability and (probably) interest to shape the entire world. Whether it comes from the military, a corporation, or a government, it will have a compelling instrumental motivation to neutralize other superintelligences.

As Geoffrey suggests below, the 'political economy' (to use the term loosely) of robot armies seems quite bad. See for example the argument here: https://noahpinionblog.blogspot.com/2014/03/the-robot-lords-and-end-of-people-power.html .

If robots are cheap and effective compared to human soldiers, then the common people can get robots to fight as well.

Remember that F-15s and MRAPs are already far more powerful than anything owned by private citizens, and 600 years ago a man-at-arms was impervious to most peasant weapons. Revolution and civil stability are not about sheer military force.

F-15s and MRAPs still have to be operated by multiple people, which requires incentive alignment between many parties. Some autonomous weapons in the future may be able to self-sustain and repair, (or be a part of a self-sustaining autonomous ecosystem) which would mean that they can be used while being aligned with fewer people's interests.

A man-at-arms wouldn't be able to take out a whole town by himself if more than a few peasants coordinate with pitchforks, but depending on how LAWs are developed, a very small group of people could dominate the world.

I actually agree with a lot of your arguments, but I don't agree overall. AI weapons will be good and bad in many ways, and whether they are good or bad overall depends on who has control, how well they are made, and the dynamics of how different countries race and adapt.

Thanks again for the interesting post. After rereading I have some more thoughts on the topic.

I would add that LAWs are not the same as military AI, and that LAWs are the safest part of military AI. M. Maas showed that military AI consists of several layers, with LAWs on the lowest. https://hcss.nl/report/artificial-intelligence-and-future-defense

An advanced military AI will probably include several other functions (some already exist):

1. Strategic planning for winning a war.

2. Direct control of all units inside the country's defence systems, which may include drones, ships, nuclear weapons, humans, and other large and small units.

3. Nuclear deterrence, consisting of the early warning system and a dead-hand second-strike system.

4. Manufacturing and constructing new advanced weapons.

5. Cyberweapons, that is, instruments "to elect Trump" or to turn off adversaries' AI or other critical infrastructure.

Each of these 5 levels could have a globally catastrophic failure, even without starting uncontrollable self-improvement.

1. Strategic planning may have superhuman winning ability (think of AlphaGo Zero, but used as a general), or it could fail by suggesting that a nation must "strike first now or lose forever."

2. A global army-controlling system could propagate a wrong command.

3. The early warning system could create a false alarm (this has happened before). There could also be a flash-crash-style unexpected war between the military AIs of two adversarial nation states.

4. A weapons-manufacturing AI may be unexpectedly effective at creating very dangerous weapons, which are later used with global consequences more severe than nuclear war.

5. The use of cyberweapons may also be regarded as an act of war, or may help to elect a dangerously unstable president (some think this has already happened with DT). Cyberwar may also affect the other side's critical infrastructure or rewrite the other side's AI goal function, both of which are bad outcomes.