RyanCarey comments on [Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale - Effective Altruism Forum



Comment author: RyanCarey 14 January 2018 12:01:42PM *  2 points [-]

I haven't read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.

The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]

For example, you could have an AI system with the objective of performing some task X before time Y without leaving Earth, which then harvests all locally available resources to perform that task before eventually running out of energy and switching off. That would extinguish all life on Earth by any definition.

We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe.

Comment author: Denkenberger 14 January 2018 02:16:40PM *  5 points [-]

It is hard to encapsulate this all into a simple scale, but we wanted to recognize that false vacuum decay that would destroy the Universe at light speed would be worse than bad AI, at least if you think the future will be net positive. Bad AI could be constrained by a more powerful civilization.

Comment author: turchin 14 January 2018 12:26:40PM 1 point [-]

No, in the paper we clearly said that non-aligned AI is a risk to the whole universe in the worst-case scenario.

Comment author: Khorton 14 January 2018 01:51:43PM 0 points [-]

Also, the image above indicates AI would likely destroy all life on Earth, not only human life.

Comment author: turchin 14 January 2018 02:20:48PM *  1 point [-]

In the article, AI destroys all life on Earth, but in the previous version of the image in this blog post, the image was redesigned somewhat for better visibility, and the AI risk shifted to "kill all humans". I have now corrected the image so it matches the article, so the previous comment was valid.

Whether AI will be able to destroy other civilizations in the universe depends on whether those civilizations create their own AI before the intelligence explosion wave from us arrives.

So AI would kill only potential and young civilizations in the universe, not mature civilizations.

But this is not the case for a false vacuum decay wave, which would kill everything (according to our current understanding of AI and vacuum).