Daniel_Eth comments on Nothing Wrong With AI Weapons - Effective Altruism Forum

Comment author: kbog  (EA Profile) 29 August 2017 02:36:12AM *  2 points [-]

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them that don't always hold, or the training data doesn't quite match the test data.

The same can be said for humans. And remember that we are looking at AI systems conditional upon them being effective enough to replace people on the battlefield. If they make serious errors much more frequently than people do, then it's unlikely that the military will want to use them.

Something like a bunch of autonomous weapons in the US and China starting an all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead.

That requires automation not just at the tactical level, but all the way up to the theatre level. I don't think we should have AIs in charge of major military commands, but that's kind of a different issue, and it's not going to happen anytime soon. Plus, it's easy enough to control whether machines are in a defensive posture, offensive posture, peaceful posture, etc. We already have to do this with manned military units.

Comment author: Daniel_Eth 29 August 2017 06:59:02AM 1 point [-]

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where they are acting at that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The amount of time from limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensue. And I could imagine catastrophes occurring due to errors in machines simply in "peaceful posture," not to mention that this could be very hard to enforce internationally, or even to determine which countries were breaking the rules. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

Comment author: kbog  (EA Profile) 29 August 2017 08:59:30AM *  2 points [-]

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations.

But humans commonly make mistakes on the battlefield. They have training which doesn't perfectly match real-world situations. And they can behave in very unexpected ways. There is no shortage of cases where human soldiers break the laws of war, engage in fratricide (intentional or not), commit war crimes and do other bad things.

Regarding your second point, how would you avoid an arms race to a situation where they are acting at that level?

Well, it's little different from avoiding a race to a situation where AIs are acting on the level of state governors and corporate CEOs; the same general set of cognitive competencies will enable machines to fill all of these roles. Several things are possible: AI never outpaces humans in this kind of role, nations agree to maintain human oversight over AI systems at operational or theater levels, or AIs replace humans in all kinds of military leadership roles. I don't think any of these scenarios is necessarily bad. If AI systems will be intelligent enough to run theater commands better than humans can, then they will be intelligent enough to know the difference between a border scare and a real war. If they can make a plan to outfight an opposing military force, then they can make a plan to guard against unnecessary escalations.

The amount of time from limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensue.

Why? Automation is not a game changer. Matchlock revolvers were invented in Germany in the 16th century, but it was not until the 19th century that repeating firearms saw widespread military adoption. Light automatic infantry weapons were developed in WWI but did not become standardized as individual weapons until the early Cold War. The first guided attack missile, the Kettering Bug, was developed in WWI, but iron bombs and unguided rockets were still used widely in Vietnam and more recent wars. Automatic tank loaders have been around for more than half a century and still have not been adopted by the majority of industrialized nations. In the 70 years since the end of WW2, tank crews have shrunk from 6 to 4 and destroyer crews from 350 to 250. Not very fast.

not to mention that this could be very hard to enforce internationally, or even to determine which countries were breaking the rules.

We already don't have international rules preventing countries from keeping their forces on high alert on the border. Countries just don't do it because they know that military mobilization and tensions are not to be taken lightly. Having AIs instead of humans wouldn't change this matter.

Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

No such hard cutoff is possible. What does it mean to have human approval to kill? Does each potentially lethal shot need to be approved? Or can the human give ordinary fire orders in a tactical situation, the way a commander does with his subordinates?