By Kyle Bogosian
With all the recent worries over AI risks, a lot of people have raised fears about lethal autonomous weapons (LAWs) which take the place of soldiers on the battlefield. Specifically, in the news recently: Elon Musk and over 100 experts requested that the UN implement a ban. https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war
However, we should not dedicate efforts towards this goal. I don't know whether anyone in the Effective Altruism community has worked on this directly, but I have seen many people talk about it, and I have seen FLI dedicate nontrivial effort towards aggregating and publishing views against the use of LAWs. I don't think we should be engaged in any of these activities to try to stop the implementation of LAWs, so first I will answer worries about the dangers of LAWs, and then I will point out a benefit.
The first class of worries is that it is morally wrong to kill someone with an LAW - specifically, that it is more morally wrong than killing someone in a different way. These nonconsequentialist arguments hold that the badness of death has something to do with factors other than the actual suffering and deprivation caused to the victim, the victim's family, or society at large. There is a lot of philosophical literature on this issue, generally relating to the idea that machines don't have the same agency, moral responsibility, or moral judgement that humans do, or something of the sort. I'm going to mostly assume that people here aren't persuaded by these philosophical arguments in the first place, because this is a lazy forum post, it would take a lot of time to read and answer all the arguments on this topic, and most people here are consequentialists.
I will say one thing though, which hasn't been emphasized before and undercuts many of the arguments alleging that death by AI is intrinsically immoral: in contrast to the typical philosopher's abstract understanding of killing in war, soldiers do not kill after some kind of pure process of ethical deliberation which demonstrates that they are acting morally. Soldiers learn to fight as a mechanical procedure, with the motivation of protection and success on the battlefield, and their ethical standard is to follow orders as long as those orders are lawful. Infantry soldiers often don't target individual enemies; rather, they lay down suppressive fire upon enemy positions and use weapons with a large area of effect, such as machine guns and grenades. They don't think about each kill in ethical terms; they just memorize their Rules of Engagement, an algorithm that determines when deadly force can and cannot be used against another human.

Furthermore, military operations involve large systems in which it is difficult to pin responsibility for a kinetic effect on any single person. In an artillery bombardment, for instance, an officer in the field will order his artillery observer to make a request for support, or request it himself, based on an observation of enemy positions which may be informed by prior intelligence analysis done by others. The requested coordinates are checked by a fire direction center for avoidance of collateral damage and fratricide, and if approved, the angle for firing is relayed to the gun line. The gun crews carry out the request. Permissions and procedures for this process are laid out beforehand. At no point does one person sit down and carry out philosophical deliberation on whether the killing is moral - it is just a series of people doing their individual jobs, making sure that a bunch of things are being done correctly. The system as a whole looks just as grand and impersonal as automatic weaponry does.
(I speak from experience, having served in a field artillery unit.)
When someone in the military screws up and gets innocents killed, the blame often falls upon the commander who had improper procedures in place, not some individual who lost his moral compass. This implies that there is no problem with the attribution of responsibility for an LAW screwing up: it will likewise go to the engineer/programmer who had improper procedures in place. So if killing by AI is immoral because of the lack of individual moral responsibility or the lack of moral deliberation, then killing by soldiers is not really any better and we shouldn't care about replacing one with the other.
So, on we go to the consequential harms of LAWs.
First, there is the worry that LAWs will make war more frequent, since nations won't have to worry about losing soldiers, thereby increasing civilian deaths. This worry is attributed to unnamed experts in the Guardian article linked above. The logic here is a little bit gross, since it says that we should make sure that ordinary soldiers like me die for the sake of the greater good of manipulating the political system; it also implies that things like body armor and medics should be banned from the battlefield. But I won't dwell on that here, because this is a forum full of consequentialists and I honestly think that consequentialist arguments are valid anyway.
But the argument assumes that the loss of machines is not an equal cost to governments. If nations are indifferent to whether their militaries have soldiers or equally competent machines, then the machines have the same cost as soldiers, so there will be no difference in the expected utility of warfare. If machine armies are better than human soldiers, but also more expensive overall, and nations just care about security and economic costs, then it seems that nations will go to war less frequently, in order to preserve their expensive and better-than-human machines. However, you might believe (with good reason) that nations respond disproportionately to the loss of life on the battlefield, will go to great lengths to avoid it, and will end up with a system that enables them to go to war for less overall cost.
Well, in undergrad I wrote a paper on the expected utility of war (https://docs.google.com/document/d/1eGzG4la4a96ueQl-uJD03voXVhsXLrbUw0UDDWbSzJA/edit?usp=sharing). Assuming Eckhardt's (1989) figure for the civilian casualty ratio (https://en.wikipedia.org/wiki/Civilian_casualty_ratio) of 50%, I found that proliferation of robots on the battlefield would increase total casualties only if nations considered the difference between losing human armies in wartime and losing comparable machines to be more than 1/3 of the total costs of war; otherwise, robots on the battlefield would decrease total casualties. It seems to me like it could go either way, with robot weapons having a more positive impact in wars of national security and a more negative impact in wars of foreign intervention and peacekeeping. While I can't demonstrate that robotic weapons will reduce the total amount of death and destruction caused by war, there is no clear case that they would increase total casualties, which is what you would need to show in order to justify working against them.
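To make the threshold intuition concrete, here is a crude back-of-the-envelope sketch. This is my own simplification with an assumed war-frequency model, not the model in the linked paper (which is richer and is what yields the 1/3 threshold); the point is only to show the shape of the tradeoff.

```latex
% Assumptions (mine, for illustration): each war kills M soldiers and C
% civilians, with C = M (a 50% civilian casualty ratio); a nation perceives
% the total cost of a war as K, of which a fraction f is the extra disvalue
% of losing humans rather than comparable machines; and the frequency of
% war scales inversely with perceived cost.
%
% Replacing soldiers with robots then halves human deaths per war (only the
% C civilians die) while multiplying war frequency by 1/(1-f), so total
% human casualties change by the factor
\[
  \underbrace{\tfrac{1}{2}}_{\text{deaths per war}}
  \times
  \underbrace{\frac{1}{1-f}}_{\text{relative war frequency}}
  \;>\; 1
  \quad\Longleftrightarrow\quad
  f \;>\; \tfrac{1}{2}.
\]
% A fuller model moves the exact threshold (to 1/3 in my paper), but the
% qualitative conclusion is the same: robotization raises total casualties
% only if the human-vs-machine difference is a large share of the perceived
% cost of war.
```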
There is also a flaw in the logic of this argument, which is the fact that it applies equally well to some other methods of waging war. In particular, having a human remotely control a military vehicle would have the same impact here as having a fully autonomous military vehicle. So if LAWs were banned, but robot technology turned out to be pretty good and governments wanted to protect soldiers' lives, we would have a similar result.
Second, there is the worry that autonomous weapons will make tense military situations between non-belligerent nations less stable and more escalatory, prompting new outbreaks of war. I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors. They do have faster response times - cutting humans out of the loop causes actions to happen faster, enabling a quicker outbreak of violence and escalation of tactical situations. However, the flip side of this is that having humans not be present in these kinds of situations implies that outbreaks of violence will have less political sting and therefore more chance of ending with a peaceful solution. A country can always be compensated for lost machinery through diplomatic negotiations and financial concessions; the same cannot be said for lost soldiers.
Third, you might say that LAWs will prompt an arms race in AI, reducing safety. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity's progress and expansion towards a future with exponentially growing value. Moreover, there is already substantial AI development in civilian sectors as well as non-battlefield military use, and all of these things have competitive dynamics. AGI would have such broad applications that restricting its use in one or two domains is unlikely to make a large difference: economic power is the source of all military power, international public opinion has nontrivial importance in international relations, and AI can help nations beat their competitors at both.
Moreover, no military is currently at the cutting edge of AI or machine learning (as far as we can tell). The top research is done in academia and the tech industry; militaries all over the world are just trying to adopt existing techniques for their own use, and don't have the best talent to do so. Finally, if there is in fact a security dilemma regarding AI weaponry, then activism to stop it is unlikely to be fruitful. The literature on the utility of arms control in international relations is mixed to say the least; it seems to work only as long as the weapons are not actually vital for national security.
Finally, one could argue that the existence of LAWs makes it possible for hackers, such as an unfriendly advanced AI agent, to take charge of them and use them for bad ends. However, in the long run a very advanced AI system would have many tools at its disposal for capturing global resources, such as social manipulation, hacking, nanotechnology, biotechnology, building its own robots, and things which are beyond current human knowledge. A superintelligent agent would probably not be limited by human precautions; making the world as a whole less vulnerable to artificial superintelligence is not a commonly suggested strategy for AI safety, since we assume that once it gets onto the internet there's not really anything that can be done to stop it. Plus, it's foolish to assume that an AI system with battlefield capabilities, which is just as good at general reasoning as the humans it replaced, would be vulnerable to a simple hack or takeover in a way that humans aren't. If a machine can perform complex computations and inference regarding military rules, its duties on the battlefield, and the actions it can take, then it's likely to have the same intrinsic resistance and skepticism toward strange and apparently unlawful orders that human soldiers do. Our mental model of the LAWs of the far future should not be something like a calculator with easy-to-access buttons or a computer with a predictable response to adversarial inputs.
And in the near run, more autonomy would not necessarily make things any less secure than they are with many other technologies which we currently rely on. A fighter jet has electronics, as does a power plant. Lots of things can theoretically be hacked, and hacking an LAW to cause some damage isn't necessarily any worse than hacking infrastructure or a manned vehicle. Just replace the GPS coordinates in a JDAM bomb package and you've already figured out how to use our existing equipment to deliberately cause many civilian casualties. Things like this don't happen often, however, because military equipment is generally well hardened and difficult to access in comparison to civilian equipment.
And this brings me to a counterpoint in favor of LAWs. Military equipment is generally more robust than civilian equipment, and putting AI systems in tense situations where many ethics panels and international watchdogs are present is a great place to test their safety and reliability. Nowhere will the requirements of safety, reliability, and ethics be more stringent than in machines whose job it is to take human life. The more development and testing militaries conduct in this regard, the more room there is for collaboration, testing, and lobbying for safety and beneficial standards of ethics that can be applied to many types of AI systems elsewhere in society. We should be involved in this latter process, not in a foolhardy dream of banning valuable weaponry.
edit: I forgot that disclosures are popular around here. I just started to work on a computer science research proposal for the Army Research Office. But that doesn't affect my opinions here, which have been the same for a while.
Hmm, everything that I mentioned applies to interstate conflict, but none of it applies only to interstate conflict. Intrastate conflicts are murkier and harder to analyze, and I think they are worth looking at, but I'm not sure how much they would modify the main points. The assumptions of the expected utility theory of conflict do break down there.
Well, firstly, I am of the opinion that most instances of violent resistance against governments in history were unjustified, and that a general reduction in revolutionary violence would do more good than harm. Peaceful resistance is more effective at achieving political change than violent resistance anyway (https://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201404/violent-versus-nonviolent-revolutions-which-way-wins). You could argue that governments will become more oppressive and less responsive to peaceful resistance if they have better security against hypothetical revolutions, though I don't expect this to happen, at least in the first world.
Second, this doesn't have much to do with autonomous weapons in particular. It applies to all methods by which the government can suppress dissent, all military and police equipment.
Third, lethal force is a small and rare part of suppressing protests and dissent as long as full-fledged rebellion doesn't break out. Modern riot police are equipped with nonlethal weapons; we can expect that any country with the ability to deploy robots would have professional capabilities for riot control and the deployment of nonlethal weapons. And crowd control is based more on psychology and appearances than application of kinetic force.
Finally, even when violent rebellion does break out, nonstate actors such as terrorists and rebels are outgunned anyway. Governments trying to pacify rebellions need to work with the local population, gather intelligence, and assert their legitimacy in the eyes of the populace. Lethal autonomous weapons are terrible for all of these things. They would be very good for the application of quick precise firepower at low risk to friendly forces, but that is far from the greatest problem faced by governments seeking to suppress dissent.
The one thing that suggests rebellion would become less frequent in a country with LAWs is that an army of AGI robots could allow leadership to stop a rebellion without worrying about the loyalty of police and soldiers. By that time, we should probably just make sure that machines have ethical guidelines not to kill their own people, not to support evil governments, and the like. I can see this being a problem, but it's a little too far out and speculative to make plans around.
The opposite is at least as likely. Nations often go to war in order to maintain legitimacy in the eyes of the population. Argentina's Falklands venture was a good example of this 'diversionary foreign policy' (https://en.wikipedia.org/wiki/Diversionary_foreign_policy).
How would AI be any different here from other kinds of technological progress? And I don't think that the advent of new military technology has major impacts on geopolitical alliances. I actually cannot think of a case where alliances shifted because of new military technology. Military exports and license production are common among non-allies, and few alliances lack advanced industrial powers; right now there are very few countries in the world which are not on good enough terms with at least one highly developed military power to buy weapons from them.
But the same dynamic is present when nations compete with non-AI weapons. The demand for potent firepower implies that systems will cause collateral damage and that soldiers will not be as trained or disciplined on ROE as they could be.
Well, of course nothing matters if there is an existential catastrophe. But you can't go into this with the assumption that AI will cause an existential catastrophe. It likely won't, and in the scenarios where it doesn't, quicker AI development is likely better. Does this mean that AI should be developed quicker, all things considered? I don't know; I'm just saying that overall it's not clear that it should be developed more slowly.
I just mean that military use is a comparatively small part of the overall pressure towards quicker AI development.
There are things wrong with AI weapons insofar as they are, after all, weapons, and there is always something wrong with weapons. But I don't think anything makes AI weapons worse overall than ordinary ones.
I don't think that regulating them is necessarily bad. I did say at the end that testing, lobbying, international watchdogs, etc are the right direction to go in. I haven't thought this through, but my first instinct is to say that autonomous systems should simply follow all the same regulations and laws that soldiers do today. Whenever a nation ratifies an international treaty on military conduct, such as the Geneva Convention, its norms should apply to autonomous systems as well as soldiers. That sounds sufficient to me, at first glance.