
Military AI as a Convergent Goal of Self-Improving AI

My new paper with David Denkenberger, "Military AI as a Convergent Goal of Self-Improving AI", forthcoming as a chapter in Artificial Intelligence Safety and Security (Roman V. Yampolskiy, ed.), CRC Press. Abstract: Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny...
Comment author: turchin 01 November 2017 09:12:46PM *  0 points [-]

Thanks again for the interesting post. After rereading I have some more thoughts on the topic.

I would add that LAWs (lethal autonomous weapons) are not the same as military AI; in fact, LAWs are the safest part of military AI. M. Maas showed that military AI consists of several layers, with LAWs at the lowest level: https://hcss.nl/report/artificial-intelligence-and-future-defense

An advanced Military AI will probably include several other functions (some already exist):

1. Strategic planning for winning a war.

2. Direct control of all units inside the country's defence systems, which may include drones, ships, nuclear weapons, humans, and other large and small units.

3. A nuclear deterrence component, consisting of the early warning system and a dead-hand second-strike system.

4. Manufacturing and construction of new advanced weapons.

5. Cyberweapons, that is, instruments "to elect Trump" or to turn off an adversary's AI or other critical infrastructure.

Each of these five levels could suffer a globally catastrophic failure, even without uncontrollable self-improvement ever starting (a toy sketch of failure modes 1 and 3 follows after the list):

1. Strategic planning may have superhuman winning ability (think of AlphaGo Zero, but employed as a general), or it could fail by concluding that it must "strike first now or lose forever".

2. A global army-control system could propagate a wrong command.

3. The early warning system could create a false alarm (this has happened before). There could also be a flash-crash-style unexpected war between the military AIs of two adversarial nation states.

4. A weapons-manufacturing AI may be unexpectedly effective at creating very dangerous weapons, which are later used with global consequences more severe than a nuclear war.

5. The use of cyberweapons may itself be regarded as an act of war, or may help to elect a dangerously unstable president (some think that this already happened with DT). Cyberwar may also damage the other side's critical infrastructure or rewrite the other side's AI goal function, both of which are bad outcomes.
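To make failure modes 1 and 3 concrete, here is a minimal expected-utility sketch in Python. All utilities and probabilities are invented for illustration; this is not a model of any real system. It shows how a planner that maximizes expected utility can recommend a first strike once an early-warning alarm (possibly false) is assessed as sufficiently likely to be real and striking first looks much better than absorbing a strike.

    # Toy expected-utility model of failure modes 1 and 3 (all numbers
    # are invented for illustration, not estimates of anything real).
    U_PEACE, U_WIN, U_LOSE = 1.0, 0.3, 0.0  # hypothetical outcome utilities

    def eu_strike(p_win_first: float) -> float:
        # Strike now: war is certain, but fought from the stronger position.
        return p_win_first * U_WIN + (1 - p_win_first) * U_LOSE

    def eu_wait(p_alarm_real: float, p_win_second: float) -> float:
        # Wait: if the alarm is false we keep the peace; if it is real,
        # we retaliate from a degraded position.
        u_war = p_win_second * U_WIN + (1 - p_win_second) * U_LOSE
        return (1 - p_alarm_real) * U_PEACE + p_alarm_real * u_war

    # A false alarm that the system wrongly assesses as 90% real:
    p_alarm_real, p_win_first, p_win_second = 0.9, 0.8, 0.2
    print("planner recommends first strike:",
          eu_strike(p_win_first) > eu_wait(p_alarm_real, p_win_second))

The same structure also suggests why two such planners facing each other are unstable: each one's incentive to pre-empt raises the other's assessed probability that an alarm is real.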

Comment author: turchin 31 October 2017 10:16:02PM 1 point [-]

I am a great fan of bacteria that can convert methane into food. Basically, this would link two markets that both sometimes suffer from oversupply or undersupply: the fossil-fuel market and the food market. If we were able to move excess supply from one of them into the other, both would become more stable. It would also support the fight against climate change, as natural gas is 4-5 times cleaner than coal.

I also think that methane producers would be happy to pay for such a bridge, as it would open a new market for them, and in that case we would reach a negative cost for an earth-saving intervention. Negative cost should be a gold standard for EA creativity: in that case we don't need to collect money, but just suggest an idea, and some entrepreneurs will pay for it. However, some initial investment in analysing and promoting the idea is needed.
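A quick toy simulation of the stabilisation claim (all distributions and numbers are invented for illustration): two markets with independent supply shocks, with and without a one-way "bridge" that converts surplus methane into food when the food market runs short.

    # Toy simulation of the market-coupling argument above. Shocks and
    # units are invented for illustration only.
    import random

    random.seed(0)

    def mean_abs_imbalance(n: int, bridge: bool):
        gas_total, food_total = 0.0, 0.0
        for _ in range(n):
            g = random.gauss(0, 1)  # gas market: surplus (+) / shortage (-)
            f = random.gauss(0, 1)  # food market: surplus (+) / shortage (-)
            if bridge and g > 0 and f < 0:
                moved = min(g, -f)  # convert surplus methane into food
                g -= moved
                f += moved
            gas_total += abs(g)
            food_total += abs(f)
        return gas_total / n, food_total / n

    for bridge in (False, True):
        gas, food = mean_abs_imbalance(10_000, bridge)
        print(f"bridge={bridge}: mean |gas imbalance|={gas:.2f}, "
              f"mean |food imbalance|={food:.2f}")

With the bridge enabled, both mean imbalances drop, which is the sense in which both markets become more stable.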

Comment author: turchin 02 October 2017 03:16:32PM 2 points [-]

However, "AI accidents" don't communicate the scale of a possible disaster. Something like "global catastrophic AI accidents" may be even clearer. Or "permanent loss of control of a hostile AI system".

Comment author: turchin 28 August 2017 10:47:15PM 2 points [-]

I think the title may be technically correct, but it sounds nasty.

On the nitpicking side, I would argue that the AI weapons problem mostly depends on the weapons' level of intelligence. If it is just narrow AI, that's OK. However, the greater their intelligence, the greater the danger, and it may reach catastrophic levels before superintelligence is created.

I would also add that a superintelligence created by the military may be perfectly aligned yet still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence. And the first step toward not creating a military superintelligence is not creating AI weapons.

Comment author: FeepingCreature 30 June 2017 11:42:36AM 3 points [-]

But of course, I cannot justify high confidence in these views given that many experts disagree. Following the analysis of this post, this is

Dangling sentence.

In my personal belief, the "hard AI takeoff" scenarios are driven mostly by the belief that current AI progress largely flows from a single skill, that is, "mathematics/programming". So while AI will continue to develop at disparate rates and achieve superhuman performance in different areas at different rates, an ASI takeoff will be driven almost entirely by AI performance in software development, and once AI becomes superhuman in this skill it will rapidly become superhuman in all skills. This seems obvious to me, and I think disagreements with it have to rest largely with hidden difficulties in "software development", such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems like it's almost circularly "AGI complete").
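A toy iteration of that picture (all parameters invented for illustration): every skill improves at a slow exogenous rate, but once the programming skill passes the human level it feeds back into the improvement rate of all skills, so they become superhuman together.

    # Toy model of a software-driven takeoff. Skill levels, the rate and
    # the feedback exponent are invented for illustration only.
    HUMAN_LEVEL = 1.0
    skills = {"vision": 0.6, "language": 0.8, "programming": 0.9}
    rate = 0.02  # slow exogenous progress per step

    for step in range(1, 201):
        # Superhuman programming accelerates work on every skill,
        # including programming itself.
        feedback = max(skills["programming"] / HUMAN_LEVEL, 1.0)
        for name in skills:
            skills[name] += rate * feedback ** 2
        if skills["programming"] >= 100 * HUMAN_LEVEL:
            print(f"all skills far superhuman by step {step}:",
                  {k: round(v, 1) for k, v in skills.items()})
            break

The point of the sketch is only that a single self-referential skill is enough to drag every other skill's rate of progress along with it.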

Comment author: turchin 01 July 2017 05:56:33PM 1 point [-]

Does it mean that we could try to control AI by preventing it from knowing anything about programming?

And on the other side, should any AI which is able to write code be regarded as extremely dangerous, no matter how low its abilities in other domains are?

Comment author: Daniel_Eth 07 April 2017 12:25:21AM 0 points [-]

Looks like good work! My biggest question is how you would get people to actually do this. I'd imagine there are a lot of people who would want to go to Mars, since that seems like a great adventure, but living in a submarine in case there's a catastrophe isn't something that I think would appeal to many people, nor is funding the project.

Comment author: turchin 07 April 2017 07:11:11AM 3 points [-]

If we promise that people who want to go to Mars first have to serve a year on a refuge submarine, there will be a lot of volunteers - and we could choose the best.

Or we could recruit the crews the same way military crews are recruited - by combining prestige and salary.

Comment author: JacobLBryan 06 April 2017 10:57:28AM 0 points [-]

When you get to scenario three, where a nuclear submarine is operating under a private non-governmental organization, I have to wonder about precedent for governments allowing fissile material into private control, especially absent a lot of the governmental controls that existing power plants have in place.

(You have a typo in figure 1, years not tears.)

Comment author: turchin 06 April 2017 12:34:47PM 2 points [-]

Thanks for the typo hint!

I think they should mostly operate under general government control. There are also several private companies licensed to build nuclear power plants and the like, such as Westinghouse, and the same companies could operate nuclear-powered ships and submarines.


Surviving Global Catastrophe in Nuclear Submarines as Refuges

Our article about using nuclear submarines as refuges in case of a global catastrophe has been accepted by the journal Futures, and its preprint is available online. Preventing global risks, or surviving them, is a good application of EA efforts. Converting existing nuclear submarines into refuges may be a cheap intervention with a high...
Comment author: MikeJohnson 09 December 2016 05:50:20PM *  4 points [-]

Yes, it would be quite civilizationally embarrassing to accidentally p-zombie ourselves... More generally, it seems valuable to understand tradeoffs in consciousness. This seems to be an important component of any far-future planning.

Also, Andres has done some interesting exploratory work on defining the problem of future drug epidemics and discussing game-theoretic considerations.

Comment author: turchin 09 December 2016 07:36:47PM 1 point [-]

If we make an AGI which doesn't have qualia, it will probably prove that no such thing exists and proceed to p-zombie us.

So it may be better to pursue a route to AGI which will probably provide it with qualia, and one such route is human upgrading.
