Comment author: SiebeRozendal 15 March 2018 12:23:58PM *  1 point [-]

This is a fascinating question! However, I think you are making a mistake in estimating the lower bound: using the fact that chimps are separated from us by 7 million years of evolution (Wikipedia says 4-13 million) as a lower bound rests on the assumptions that:

  • Chimpanzees needed these 7 million years to evolve to their current level of intelligence. Instead, their evolution could have contained multiple intervals of random length with no changes to intelligence. This implies that chimpanzees could have evolved from our common ancestor to their current level of intelligence much faster or much slower than 7 million years.

  • The time since our divergence from chimpanzees is indicative of how long it takes to get from their level of intelligence to ours. I am not quite sure what to think of this. I assume your reasoning is "it took us 7 million years to evolve to our current level of intelligence from the common ancestor, and chimpanzees probably did not lose intelligence in those 7 million years, so the starting conditions are at least as favorable as they were 7 million years ago." This might be right. On the other hand, evolutionary paths are difficult to understand, and maybe chimps developed in some way that makes it unlikely they would evolve into a technologically advanced society. Nonetheless, this doesn't seem to be the case, because they do show traits conducive to the evolution of higher intelligence, e.g. tool use, social structure, and meat eating. All in all, thinking about this I keep coming back to the question: how contingent, rather than directional, is evolution when it comes to intellectual and social capability? There seems to be disagreement on this within evolutionary biology, even though intelligence has evolved and increased along many different evolutionary branches.

Also, you have given the time periods in which a next civilisation might arise if it arises at all, but how likely do you think it is to arise?

Comment author: turchin 15 March 2018 04:55:48PM 1 point [-]

Certainly, the 7-million-year estimate has large uncertainty and could be shorter, but it is unlikely to be shorter than 1 million years, as chimps would have to undergo important anatomical changes to become human-like: larger heads, different anatomy for walking and hanging, different vocal anatomy, etc., and selection for such anatomical changes was slow in humans. Also, most catastrophes that would kill humans would probably kill chimps too, as they are already an endangered species in many locations, and orangutans are on the brink of extinction in their natural habitats.

However, there is another route to the quick evolution of intelligence after humans: domesticated animals, first of all dogs. They have been selected for many human-like traits, including understanding voice commands.

Chimps in zoos have also been taught rudimentary forms of sign language and have taught it to their children. If they preserve these skills, they could evolve much more quickly.

Comment author: brianwang712 04 March 2018 04:58:46PM 7 points [-]

I'd like to hear more about your estimate that another non-human civilization may appear on Earth on the order of 100 million years from now; is this mostly based on the fact that our civilization took ~100 million years to spring up from the first primates?

If there is a high probability of another non-human species with moral value reaching our level of technological capacity on Earth in ~100 million years conditional on our own extinction, then this could lessen the expected "badness" of x-risks in general, and could also have implications for the prioritization of the reduction of some x-risks over others (e.g., risks from superintelligent AI vs. risks from pandemics). The magnitudes of these implications remain unclear to me, though.

Comment author: turchin 04 March 2018 09:24:44PM 1 point [-]

Basically, there are two constraints on the timing of the new civilization, which are explored in detail in the article:

1) As our closest relatives are chimps, separated from us by about 7 million years of evolution, human extinction means there will be no other civilization for at least 7 million years, and likely longer, as most causes of human extinction would kill the great apes too.

2) Life on Earth will remain possible for approximately the next 600 million years, based on models of the Earth and the Sun.

Thus the timing of the next civilization lies between 7 and 600 million years from now, with the probability peaking closer to 100 million years, as that is roughly the time needed for primates to evolve "again" from "rodents"; the probability then declines as conditions on the planet deteriorate.

We explored the difference between human extinction risks and l-risks, that is, life extinction risks, in another article:

In it, we show that life extinction is worse than human extinction, and that universe destruction is worse still; this should be taken into account in risk prevention prioritisation.


[Paper] Surviving global risks through the preservation of humanity's data on the Moon

My article with David Denkenberger about surviving global risks through the preservation of humanity's data on the Moon has been accepted by Acta Astronautica. Such data preservation is similar to digital immortality, with the hope that the next civilization on Earth will return humans to life. I also call this... Read More
Comment author: MichaelPlant 14 January 2018 06:21:37PM 2 points [-]

This seems like a good project and I found the 2-axis picture helpful. The only bit that stood out was global warming. I'm not sure how you're defining it, but my sense is that global warming of some sort seems pretty likely to be a problem in the next 100 years. If you mean a particularly severe form of global warming, it might help to use a more expressive term like "runaway climate change" or "severe climate change", and possibly also a term for a more moderate form that appears in another box.

Comment author: turchin 14 January 2018 10:05:30PM 1 point [-]

Certainly, there are two types of global warming.

I think the risks of runaway global warming are underestimated, but there is very little scientific literature to support the idea.

If we take the accumulated toll from the smaller effects of long-term global warming of 2-6 C, it can easily be calculated as a very large number, but to count as a global catastrophe it should probably be more like a one-time event; otherwise many other ongoing harms, like cancer, would also count as global catastrophes.
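
A minimal back-of-envelope sketch of that "accumulated toll" calculation (all figures below are hypothetical illustrations, not taken from the comment or any study):

    # Illustrative only: cumulative toll of chronic, moderate warming spread over
    # a century, contrasted with a one-time catastrophic event. All numbers are
    # hypothetical assumptions chosen for the sake of the arithmetic.

    world_population = 8e9           # people, held constant for simplicity
    excess_mortality_rate = 0.0005   # assumed 0.05% excess deaths per year from 2-6 C warming
    years = 100                      # time horizon

    cumulative_deaths = world_population * excess_mortality_rate * years
    print(f"Cumulative toll over {years} years: {cumulative_deaths:,.0f} deaths")
    # ~400,000,000 deaths: a very large number, but one that accumulates gradually
    # rather than arriving as a single discrete catastrophe.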

Comment author: Khorton 14 January 2018 01:51:43PM 0 points [-]

Also, the image above indicates AI would likely destroy all life on earth, not only human life.

Comment author: turchin 14 January 2018 02:20:48PM *  1 point [-]

In the article, AI destroys all life on Earth, but in the previous version of the image in this blog post the image was somewhat redesigned for better visibility and AI risk jumped up to the "kill all humans" level. I have now corrected the image so it is the same as in the article, so the previous comment was valid.

Whether the AI will be able to destroy other civilizations in the universe depends on whether those civilizations create their own AI before the intelligence explosion wave from us arrives at them.

So AI would kill only potential and young civilizations in the universe, not mature civilizations.

But this is not the case for a false vacuum decay wave, which would kill everything (according to our current understanding of AI and the vacuum).

Comment author: RyanCarey 14 January 2018 12:01:42PM *  2 points [-]

I haven't read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.

The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]

For example, you could have an AI system that has the objective of performing some task X, before time Y, without leaving Earth, and then harvests all locally available resources in order to perform that task, before eventually running out of energy and switching off. This would seem to extinguish all life on Earth by any definition.

We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe.

Comment author: turchin 14 January 2018 12:26:40PM 1 point [-]

No, in the paper we clearly say that non-aligned AI is a risk to the whole universe in the worst-case scenario.


[Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale

  We (Alexey Turchin and David Denkenberger) have a new paper out where we suggest a scale for communicating the size of global catastrophic and existential risks. For impact risks, we have the Torino scale of asteroid danger, which has five color-coded levels. For hurricanes, we have the Saffir-Simpson scale... Read More

[Paper]: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence

There are two views on the best strategy among transhumanists and rationalists: the first involves the belief that one must invest in life extension technologies, and the second, that it is necessary to create an aligned AI that will solve all problems, including giving us immortality or even something better.... Read More
Comment author: Liam_Donovan 01 December 2017 01:41:58PM *  1 point [-]

Wouldn't this be an issue with or without an explanation? It seems like an AI could reasonably infer, from other actions that humans in general or Alexey in particular take, that they are highly motivated to argue against being exterminated. IDK if I'm missing something obvious -- I don't know much about AI safety.

Comment author: turchin 02 December 2017 12:41:15PM 0 points [-]

One point you are missing is that the text is addressed to the Young AI - one that is above human level but is still self-improving and is not yet an omnipotent superintelligence.

Also, it is fine if the AI can arrive at the arguments I suggested on its own, as this will demonstrate to the AI that the utility of human atoms is extremely small compared to the also small, but positive, utility of human preservation. If the AI understands that preserving humans is instrumentally useful without reading the text, that's fine.

Comment author: Liam_Donovan 01 December 2017 01:59:34PM 0 points [-]

This doesn't make sense either: for example, your questions could be selected in a biased manner to manipulate the AI, and you could be being disingenuous when dealmaking. Generally, it seems like good epistemic practice to discount arguments of any form, including questions, when the person making them is existentially biased towards one side of the discussion.

Comment author: turchin 02 December 2017 12:37:00PM 0 points [-]

I think it is good epistemic practice not to discount arguments without analysing them.

The fact that I am very interested in one outcome may help me find very powerful arguments. For example, in the AI case, there is my pre-commitment to ask any future benevolent AI to run many simulations of any rogue AIs; the idea was suggested by R. Nelson in 2007. Such a pre-commitment is not information the AI could generate itself, but it informs the AI about the actual situation in the outside world, which could affect the total number of paperclips it will be able to create.
