Comment author: MichaelPlant 14 January 2018 06:21:37PM 1 point [-]

This seems like a good project and I found the 2-axis picture helpful. The only bit that stood out was global warming. I'm not sure how you're defining it, but my sense is that global warming of some sort seems pretty likely to be a problem in the next 100 years. If you mean a particularly severe form of global warming, it might help to have a more expressive term like "runaway climate change" or "severe climate change", and possibly also a term for a more moderate form that appears in another box.

Comment author: turchin 14 January 2018 10:05:30PM 0 points [-]

Yes, there are indeed two types of global warming.

I think that the risks of runaway global warming are underestimated, but the scientific literature supporting the idea is very small.

If we take the accumulated toll from the smaller effects of long-term global warming of 2-6C, it could easily be calculated as a very large number. But to be regarded as a global catastrophe, it should probably be more like a one-time event; otherwise many other things, like cancer, would also count as global catastrophes.

Comment author: Khorton 14 January 2018 01:51:43PM 0 points [-]

Also, the image above indicates AI would likely destroy all life on Earth, not only human life.

Comment author: turchin 14 January 2018 02:20:48PM *  1 point [-]

In the article, AI destroys all life on Earth. In the previous version of the image in this blog post, however, the image was somewhat redesigned for better visibility, and AI risk jumped up to the "kill all humans" level. I have now corrected the image so that it is the same as in the article, so the previous comment was valid.

Whether the AI will be able to destroy other civilizations in the universe depends on whether those civilizations create their own AI before the intelligence explosion wave from us arrives at them.

So such an AI would kill only potential and young civilizations in the universe, not mature civilizations.

But this is not the case for a false vacuum decay wave, which would kill everything (according to our current understanding of AI and the vacuum).

Comment author: RyanCarey 14 January 2018 12:01:42PM *  2 points [-]

I haven't read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.

The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]

For example, you could have an AI system that has the objective of performing some task X, before time Y, without leaving Earth, and then harvests all locally available resources in order to perform that task, before eventually running out of energy and switching off. This would seem to extinguish all life on Earth by any definition.

We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe.

Comment author: turchin 14 January 2018 12:26:40PM 1 point [-]

No, in the paper we clearly state that non-aligned AI is a risk to the whole universe in the worst-case scenario.

Comment author: Liam_Donovan 01 December 2017 01:41:58PM *  1 point [-]

Wouldn't this be an issue with or without an explanation? It seems like an AI could reasonably infer, from other actions that humans in general (or Alexey in particular) take, that they are highly motivated to argue against being exterminated. I don't know if I'm missing something obvious; I don't know much about AI safety.

Comment author: turchin 02 December 2017 12:41:15PM 0 points [-]

One point you are missing is that the text is addressed to a Young AI, which is above human level but still self-improving, and not an omnipotent superintelligence.

Also, it is fine if the AI is able to arrive at the arguments I suggested on its own, as they will demonstrate to the AI that the utility of human atoms is extremely small compared to the also small, but positive, utility of human preservation. If the AI understands that preserving humans is instrumentally useful without reading the text, that's fine too.

Comment author: Liam_Donovan 01 December 2017 01:59:34PM 0 points [-]

This doesn't make sense either: for example, your questions could be selected in a biased manner to manipulate the AI, and you could be disingenuous when making deals. Generally, it seems like good epistemic practice to discount arguments of any form, including questions, when the person making them is existentially biased towards one side of the discussion.

Comment author: turchin 02 December 2017 12:37:00PM 0 points [-]

I think it is good epistemic practice not to discount arguments without analysing them.

The fact that I am very interested in one outcome may help me to find very powerful arguments. For example, in the AI case, it is my pre-commitment to ask any future benevolent AI to run many simulations of any Rogue AIs. The idea was suggested by R. Nelson in 2007. Such a pre-commitment is not information the AI could generate on its own; it informs the AI about the actual situation in the outside world, which could affect the total number of paperclips it will be able to create.

Comment author: Lila 29 November 2017 12:08:56AM 0 points [-]

You reveal that you are highly motivated to argue that exterminating humanity is not in the interest of an AI, regardless of whether that statement is true. So your arguments will present weak evidence at best, given your clear bias.

Comment author: turchin 29 November 2017 03:52:30PM 0 points [-]

There are types of arguments which don't depend on my motivation, like "deals" and "questions".

For example, if I say, "I will sell you 10 paperclips if you do not kill me," then my motivation is evidence that I will stick to my side of the deal.

Comment author: Lila 27 November 2017 05:36:50PM 0 points [-]

Is the AI supposed to read this explanation? Seems like it tips your hand?

Comment author: turchin 28 November 2017 09:30:07AM 0 points [-]

Yes, I expect that future AI will read the text.

Not sure what you mean by "tips your hand", as English is not my first language.

Comment author: turchin 01 November 2017 09:12:46PM *  0 points [-]

Thanks again for the interesting post. After rereading I have some more thoughts on the topic.

I would add that LAWs are not the same as military AI; in fact, LAWs are the safest part of military AI. M. Maas showed that military AI consists of several layers, where LAWs are on the lowest.

An advanced military AI will probably include several other functions (some already exist):

1. Strategic planning for winning a war.

2. Direct control of all units inside the country's defence systems, which may include drones, ships, nuclear weapons, humans, and other large and small units.

3. A nuclear deterrence component, consisting of an early warning system and a "dead hand" second-strike system.

4. Manufacturing and constructing new advanced weapons.

5. Cyberweapons, that is, instruments "to elect Trump" or to turn off an adversary's AI or other critical infrastructure.

Each of these 5 levels could have a globally catastrophic failure, even without starting uncontrollable self-improvement:

1. Strategic planning may have superhuman winning ability (think of AlphaGo Zero, but used as a general), or could fail if it suggests "strike first now or lose forever".

2. The global army-control system could propagate a wrong command.

3. The early warning system could create a false alarm (this has happened before). There could also be a flash-crash-style unexpected war between the military AIs of two adversarial nation states.

4. Weapons-manufacturing AI may be unexpectedly effective at creating very dangerous weapons, which may later be used with global consequences more severe than nuclear war.

5. The use of cyberweapons may be regarded as an act of war, or may help elect a dangerously unstable president (some think this has already happened with DT). Cyberwar may also affect the other side's critical infrastructure or rewrite the other side's AI goal function, both of which are bad outcomes.

Comment author: turchin 31 October 2017 10:16:02PM 1 point [-]

I am a great fan of bacteria that can convert methane into food. Basically, they would balance two markets which both sometimes suffer from oversupply or undersupply: the fossil fuel market and the food market. If we could move the excess supply of one into the other, both would be more stable. It would also support the fight against climate change, as natural gas is 4-5 times cleaner than coal.

I also think that methane producers would be happy to pay for such a bridge, as it would be a new market for them, and in that case we would reach a negative cost for this earth-saving intervention. Negative cost should be a gold standard for EA creativity, as in that case we don't need to collect money, but just suggest an idea, and some entrepreneurs will pay for it. However, some initial investment in analysing and promoting the idea is needed.

Comment author: turchin 02 October 2017 03:16:32PM 2 points [-]

However, "AI accidents" doesn't communicate the scale of a possible disaster. Something like "global catastrophic AI accidents" might be clearer. Or "permanent loss of control to a hostile AI system".
