My article with David Denkenberger about surviving global risks through the preservation of data on the Moon has been accepted by Acta Astronautica. Such data preservation is similar to digital immortality, with the hope that the next civilization on Earth will return humans to life.

I also call this "plan C" of x-risk prevention, where plan A is stopping the global catastrophe and plan B is surviving it in a refuge. Plan B was already covered in another article of mine about aquatic refuges (that is, nuclear submarines), published in Futures.

Plan C could be implemented rather cost-effectively by adding eternal data carriers to many planned spacecraft, as Arch Mission is planning to do.

Link: https://www.sciencedirect.com/science/article/pii/S009457651830119X 

The article is behind a paywall, but a preprint is available here: https://philpapers.org/rec/TURSGR

 

Abstract: Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hope that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that a potential future civilization discover this information as early as possible, so a beacon should accompany the message in order to increase its visibility. The message should ideally contain information about how humanity was destroyed, perhaps including a continuous recording until the end; this could help the potential future civilization to survive. The best place for long-term data storage is under the surface of the Moon, with the beacon constructed as a complex geometric figure drawn by small craters or trenches around a central point. There are several cost-effective options for sending the message as opportunistic payloads on different planned landers.

 

Keywords: Global catastrophic risks, existential risks, Moon, time-capsule, METI

 

Highlights:     

  • Catastrophic risks could be survived if the next civilization on Earth reconstructs humanity.
  • The next non-human civilization may appear on Earth around 100 million years from now.
  • Time-capsules with DNA and data could help in the reconstruction of humanity.
  • The most logical place for such data preservation is the Moon. 
  • Drawings on the surface of the Moon made of small craters can serve as a beacon.

Comments

I'd like to hear more about your estimate that another non-human civilization may appear on Earth on the order of 100 million years from now; is this mostly based on the fact that our civilization took ~100 million years to spring up from the first primates?

If there is a high probability of another non-human species with moral value reaching our level of technological capacity on Earth in ~100 million years conditional on our own extinction, then this could lessen the expected "badness" of x-risks in general, and could also have implications for the prioritization of the reduction of some x-risks over others (e.g., risks from superintelligent AI vs. risks from pandemics). The magnitudes of these implications remain unclear to me, though.

Basically, there are two constraints on the timing of the new civilization, which are explored in detail in the article:

1) Since our closest relatives are chimps, separated from us by about 7 million years of genetic divergence, human extinction means there will be no other civilization for at least 7 million years, and likely longer, as most causes of human extinction would kill the great apes too.

2) Life on Earth will remain possible for approximately the next 600 million years, based on models of the Earth and the Sun.

Thus the timing of the next civilization is between 7 and 600 million years from now, but the probability peaks closer to 100 million years, as that is the time needed for primates to evolve "again" from "rodents"; it then declines as conditions on the planet deteriorate.
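As a rough sketch of this reasoning (purely illustrative, not from the article: the lognormal shape and its spread are my assumptions; only the 7-600 million year window and the ~100 million year peak come from the argument above), the timing can be written as a truncated probability density:

```python
import numpy as np

# Illustrative sketch only: a lognormal density with its mode placed at
# ~100 Myr, truncated to the 7-600 Myr window argued for above.
# The lognormal family and the spread SIGMA are assumptions, not values
# taken from the article.

MODE_MYR = 100.0                     # assumed peak of the timing distribution
SIGMA = 0.8                          # assumed spread on the log scale
MU = np.log(MODE_MYR) + SIGMA ** 2   # lognormal mode = exp(mu - sigma^2)

def lognormal_pdf(t):
    """Un-truncated lognormal density over time t, in millions of years."""
    return np.exp(-(np.log(t) - MU) ** 2 / (2 * SIGMA ** 2)) / (
        t * SIGMA * np.sqrt(2 * np.pi)
    )

# Evaluate on a grid over the allowed window and renormalise.
t = np.linspace(7.0, 600.0, 2000)
dt = t[1] - t[0]
pdf = lognormal_pdf(t)
pdf /= pdf.sum() * dt

print(f"Density peaks at ~{t[np.argmax(pdf)]:.0f} Myr")
print(f"P(arises within 200 Myr | arises at all) ~ {pdf[t <= 200].sum() * dt:.2f}")
```

The exact numbers depend entirely on the assumed shape; the point is only that the lower and upper bounds of the window, plus the ~100 million year re-evolution time, constrain where most of the probability mass sits.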

We explored the difference between human extinction risks and l-risks, that is, life-extinction risks, in another article: http://effective-altruism.com/ea/1jm/paper_global_catastrophic_and_existential_risks/

In it, we show that life extinction is worse than human extinction, and destruction of the universe is worse still; this should be taken into account in the prioritisation of risk prevention.

This is a fascinating question! However, I think you are making a mistake in estimating the lower bound: the fact that chimps are removed from us by 7 million years of evolution (Wikipedia says 4-13 million) rests on the assumptions that:

  • Chimpanzees needed these 7 million years to evolve to their current level of intelligence. Instead, their evolution could have contained multiple intervals of random length with no changes to intelligence. This implies that chimpanzees could have evolved from our common ancestor to their current level of intelligence in much less or much more than 7 million years.

  • The time since our divergence from chimpanzees is indicative of how long it takes to get from their level of intelligence to ours. I am not quite sure what to think of this. I assume your reasoning is "it took us 7 million years to evolve to our current level of intelligence from the common ancestor, and chimpanzees probably did not lose intelligence in those 7 million years, so the starting conditions are at least as favorable as they were 7 million years ago." This might be right. On the other hand, evolutionary paths are difficult to understand, and maybe chimps developed in some way that makes it unlikely that they will evolve into a technologically advanced society. Nonetheless, this doesn't seem to be the case, because they do show traits beneficial to the evolution of higher intelligence, e.g. tool use, social structure, and meat eating. All in all, thinking about this I keep coming back to the question: how contingent, rather than directional, is evolution when we look at intellectual and social capability? There seems to be disagreement on this in the field of evolutionary biology, even though there are many different evolutionary branches where intelligence evolved and increased.

Also, you have given the time period in which a next civilisation might arise if it arises at all, but how likely do you think it is that it arises?

Certainly, the 7-million-year estimate has large uncertainty, and the interval could be shorter, but it is unlikely to be shorter than 1 million years, as chimps would have to undergo important anatomical changes to become human-like: larger heads, different walking and hanging anatomy, different vocal anatomy, etc., and selection for such anatomical changes was slow in humans. Also, most catastrophes that would kill humans would probably kill chimps too, as they are already endangered in many locations, and orangutans are on the brink of extinction in their natural habitats.

However, there is another route to the quick evolution of intelligence after humans: domesticated animals, first of all dogs. They have been selected for many human-like traits, including understanding voice commands.

Chimps in zoos have also been taught rudimentary forms of sign language and have taught it to their own children. If they preserve these skills, they could evolve much more quickly.

I am wondering why you say that "Human reconstruction will be beneficial to the next civilization."

I think it would be great if we could leave messages to a future non-human civilization to help them achieve a grand future and reduce their x-risk (by learning from our mistakes, for example). But I don't feel that human reconstruction is particularly important.

If anything, I worry that this future advanced civilization might reconstruct humans in order to enslave us. And if they are not the type to enslave us, then I feel pretty good about them existing and Homo sapiens not existing.

If they are advanced enough to reconstruct us, then most bad forms of enslavement are likely not interesting to them. For example, we now try to reconstruct mammoths in order to improve the climate in Siberia, not for hunting or for meat.