kbog comments on Nothing Wrong With AI Weapons - Effective Altruism Forum




Comment author: kbog (EA Profile), 29 August 2017 08:59:30AM, 2 points

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can generalize relatively easily to novel situations.

But humans commonly make mistakes on the battlefield. Their training doesn't perfectly match real-world situations, and they can behave in very unexpected ways. There is no shortage of cases in which human soldiers break the laws of war, engage in fratricide (intentional or not), commit war crimes, and do other bad things.

"Regarding your second point, how would you avoid an arms race to a situation where they are acting at that level?"

Well, it's little different from avoiding a race to a situation where AIs are acting on the level of state governors and corporate CEOs; the same general set of cognitive competencies will enable machines to fill all of these roles. Several things are possible: AI never outpaces humans in this kind of role, nations agree to maintain human oversight over AI systems at operational or theater levels, or AIs replace humans in all kinds of military leadership roles. I don't think any of these scenarios is necessarily bad. If AI systems will be intelligent enough to run theater commands better than humans can, then they will be intelligent enough to know the difference between a border scare and a real war. If they can make a plan to outfight an opposing military force, then they can make a plan to guard themselves against unnecessary escalation.

"The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very long, especially since an arms race could ensue."

Why? Automation is not a game changer. Matchlock revolvers were invented in Germany in the 16th century, but armies did not widely adopt repeating firearms until the 19th century. Light automatic infantry weapons were developed in WWI but did not become standardized as individual weapons until the early Cold War. The first guided attack missile, the Kettering Bug, was developed in WWI, but iron bombs and unguided rockets were still used widely in Vietnam and more recent wars. Automatic tank loaders have been around for more than half a century and have still not been adopted by the majority of industrialized nations. In the 70 years since the end of WW2, tank crews have shrunk from six to four and destroyer crews from about 350 to 250. Not very fast.

"...not to mention that this could be very hard to enforce internationally, or even to determine which countries were breaking the rules."

We already don't have international rules preventing countries from keeping their forces on high alert at the border. Countries just don't do it, because they know that military mobilization and tensions are not to be taken lightly. Having AIs instead of humans wouldn't change this.

"Having a hard cutoff at not letting machines kill without human approval seems much more prudent."

No such hard cutoff is possible. What does it mean to have human approval to kill? Does each potentially lethal shot need to be approved? Or can the human give ordinary fire orders in a tactical situation, the way a commander does with his subordinates?