
Abstract: During the last few years a significant number of global risks that threaten human existence have been identified. These include, to name but a few, the risk of harmful AI, the risk of genetically modified viruses and bacteria, the risk of uncontrollable self-replicating nanorobots, the risk of nuclear war and the risk of irreversible global warming. Additionally, dozens of other, less probable risks have been identified. A number of ideas for preventing these risks have also been put forward, and various authors have campaigned for different approaches.

This roadmap compiles and arranges a full list of methods to prevent global risks. The roadmap describes plans of action A, B, C and D, each of which will go into effect if the preceding one fails.

Plan A is to prevent global risks; it combines five parallel approaches: international control, decentralized monitoring, friendly AI, increasing resilience and space colonization.

Plan B is to survive the catastrophe.

Plan C is to leave traces.

Plan D consists of improbable ideas.

Bad plans are plans that raise the risks.

The document exists in two forms: as a visual map (PDF: http://immortality-roadmap.com/globriskeng.pdf) and as a text (a 50-page long read: http://docdro.id/8CBnZ6g).

 

Introduction

The problem

Many authors have noted that the 21st century may witness a global catastrophe caused by new technologies (Joy, Rees, Bostrom, Yudkowsky, etc.).

Many of them have suggested different ways of preventing x-risks (Joy, Posner, Bostrom, Musk, Yudkowsky).

But these ideas are scattered across the literature and unstructured, so we need to collect them, put them in the most logical order and evaluate their feasibility.

As a result, we will get the most comprehensive and useful plan of x-risk prevention, one that may be used by individuals and policymakers.

To achieve this goal, I created a map of x-risk prevention methods.

The map contains all known ways to prevent global risks, most of which you have probably heard of separately.

The map describes the action plans A, B, C and D, each of which will come into force in the event of the failure of the previous one. The plans are plotted vertically from top to bottom. The horizontal axis represents the timeline, with some approximate dates when certain events on the map may occur.

The size of this explanatory text is limited by the size of the article, so I have left many points on the map as self-evident or linked them to explanations by other authors. A full description of every point would take up a whole book.

The context

The context of the map is an exponential model of the future. The map is based on a model of the world in which the main driving force of history is the exponential development of technology, and in which a strong artificial intelligence will have been created around 2050. This model is similar to Kurzweil's model, although the latter suffers from a hyper-optimistic bias and does not take global risks into account.

This model is relatively cautious compared to other exponential models; for example, there are models in which technological development follows a hyperbolic law and there is a singularity around 2030 (Snooks, Vinge, Panov, partly von Foerster).
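The mathematical difference between the two classes of models can be sketched as follows (a standard formalization added here for clarity; the symbols x₀, k, C and t_s are generic, not taken from the map): an exponential model remains finite at every date, while a hyperbolic model diverges at a finite singularity date t_s:

```latex
\text{exponential: } x(t) = x_0 e^{kt}, \qquad
\text{hyperbolic: } x(t) = \frac{C}{t_s - t} \to \infty \ \text{ as } t \to t_s \approx 2030
```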

At the same time, we must understand that this model is not a description of reality, but a map of the territory; in fact, we do not know what will happen, and very serious deviations are possible because of black swan events or slower technological growth.

I should note that there are two other main models: the standard model, in which the future will be much like today, with slow linear growth (this model is used by default in economic and political forecasting, and it is quite good over intervals of 5-10 years), and the model of the Club of Rome, according to which there will be a sharp decline in production, the economy and population in the middle of the 21st century. Finally, there is the model of Taleb (and Stanislaw Lem), in which the future is determined by unpredictable events.

In fact, we don’t have a good plan

The situation is that we do not, in fact, have a good plan: each plan has its own risks, and, besides, we do not know how these plans could be implemented.

That is, although there is a large map of risk prevention plans, the prevention situation does not look good. It is easy to criticize each of the proposed plans as unrealizable and dangerous, and I will show their risks. Such criticism is necessary for improving the existing plans.

But some plan is better than no plan at all.

Firstly, we can build on it to create an even better plan.

Secondly, the mere implementation of this plan will help delay a global catastrophe or reduce its likelihood. Without it, the probability of a global catastrophe before the end of the 21st century is estimated by various scientists at around 50 per cent.

I hope that the implementation of the most effective x-risk prevention plans will lower it by an order of magnitude.
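To spell the arithmetic out (my own illustration, not a claim from the map): an order-of-magnitude reduction would take the estimated baseline from about 50 per cent down to about 5 per cent:

```latex
P(\text{catastrophe}) \approx 0.5 \quad\longrightarrow\quad P(\text{catastrophe}) \approx 0.5 / 10 = 0.05
```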

Overview of the map

Plan A “Prevent the catastrophe” is composed of four sub-options: A1, A2, A3 and A4. These sub-options may be implemented in parallel, at least up to a point.

The idea of plan A is to completely avoid a global catastrophe and to achieve a state of civilization in which its probability is negligible. The sub-options are as follows:

·     Plan A1 is the creation of a global monitoring system. It includes two options: A1.1 – international centralized control – and A1.2 – decentralized risk monitoring. The first option is based on suppression, the second is co-operative. The second option emerged during the crowdsourcing of ideas for the map in the summer of 2015.

·     Plan A2 is the creation of Friendly AI.

·     Plan A3 is increasing resilience and indestructibility.

·     Plan A4 is space colonization.

Among them, the strongest are the first two plans, and in practice they will merge: that is, the government will be computerized, and AI will take over the functions of the world government.

·     Plan B is about building shelters and bunkers to survive the catastrophe.

·     Plan C is to leave traces of information for future civilizations.

·     Plan D consists of hypothetical plans.

·     “Bad plans” are dangerous plans that are not worth implementing.

The procedure for implementing the plans

In order to build a multi-level protection against global risks, we should implement almost all of the good plans. At early stages, most plans are not mutually exclusive.

The main problem, which could make them begin to exclude each other, arises in connection with the question of who will control the Earth globally: a super-UN, an AI, a union of strong nations, a genius hacker, one country, or a decentralized civil risk monitoring system. This question is so serious that it is in itself a major global risk, as there are many entities eager to take power over the world.

The ability to implement all the listed plans depends on the availability of sufficient resources. Actually, the proposed map is a map of all possible plans, from which one may choose the most suitable sub-group for implementation.

If resources are insufficient, it may make sense to focus on one plan only. But who will choose?

So here the question of actors arises: who exactly would implement these plans? Currently, there are many independent actors in the world, and some of them have their own plans to prevent a global catastrophe. For example, Elon Musk proposes to create a safe AI and build a colony on Mars, and such plans could be realized by one person.

As a result, different actors will cover the whole range of possible plans, acting independently, each with their own vision of how to save the world.

Although each of the plans aims to prevent all possible catastrophes, each particular plan is most efficient against a certain type of disaster. For example, Plan A1 (the international control system) is best suited to control the spread of nuclear, chemical and biological weapons and to provide anti-asteroid protection, whereas Plan A2 is the best way to prevent the creation of an unfriendly AI.

Space exploration is better suited to protect against asteroids but does very little to protect against an unfriendly AI that can be distributed via communication lines, or against interplanetary nuclear missiles.

The probability of success of the plans

The plans are also arranged in order of the likelihood of their successful implementation. In all cases, however, this likelihood is not very high. I will give my evaluation of the probability of success of the plans, from highest to lowest:

Most likely to succeed is the international control system A1.1, because it requires no fundamental technological or social solutions that were not already known in the past: 10 per cent. (This is my estimate of the probability that the realization of this plan will prevent a global catastrophe, on the condition that no other plan has been implemented and that the catastrophe is inevitable if no prevention plans exist at all. The notion of probability for x-risks is complicated and will be discussed in a separate paper and map, “Probability of x-risks”.) The main factors lowering its probability are the well-known human inability to unite, the risk of a world war during attempts to unite humanity forcefully, and the risk of failure of any centralized system.
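The conditional definition in the parenthesis can be written more formally (my own notation, not from the map; P_i is a generic symbol for plan i's estimate):

```latex
P_i = P\big(\text{catastrophe averted} \,\big|\, \text{only plan } i \text{ is implemented}\big),
```

where the baseline assumption is that the catastrophe is certain if no plan is implemented at all.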

Decentralized control (A1.2) is based on new social forms of management that are somewhat utopian, so its probability of success is also not very high; I estimate it at 10 per cent.

Creating artificial intelligence (A2) requires the assumption that AI is possible; this plan carries its own risks, and, moreover, AI cannot prevent catastrophes that may happen before its creation, such as a nuclear war or a genetically engineered virus: 10 per cent.

A3: Increasing resilience and strengthening the infrastructure can have only a marginal effect in most scenarios, mainly as an aid to the realization of other plans, so: 1 per cent.

A4: Space colonization does not protect from radio-controlled missiles, nor from a hostile AI, nor even from the slow action of biological weapons that work like AIDS. Besides, space colonization is not possible in the near future, and it creates new risks: large spaceships could be used as kinetic weapons or suffer catastrophic accidents during launch, so: 1 per cent.

Plan B is obviously less likely to succeed, since major shelters could be easily destroyed and are expensive to build, while small shelters are vulnerable. In addition, we do not know what type of future disaster we would be protecting ourselves against by building shelters. So: 1 per cent.

Plans C and D have almost symbolic chances of success: 0.001 per cent.
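To illustrate the multi-level protection idea mentioned earlier, here is a minimal sketch (my own illustration, not part of the original map) that combines these estimates under the strong, and certainly questionable, assumption that the plans fail independently:

```python
# Estimates from the text: the probability that each plan alone prevents
# the catastrophe, given that no other plan is implemented.
plan_success = {
    "A1.1 international control": 0.10,
    "A1.2 decentralized monitoring": 0.10,
    "A2 friendly AI": 0.10,
    "A3 resilience": 0.01,
    "A4 space colonization": 0.01,
    "B shelters": 0.01,
    "C and D": 0.00001,  # 0.001 per cent
}

# If failures were independent, the catastrophe would happen only if
# every layer of protection failed.
p_all_fail = 1.0
for p in plan_success.values():
    p_all_fail *= 1.0 - p

print(f"P(at least one plan works) = {1.0 - p_all_fail:.2f}")  # ~0.29
```

In reality the plans interact, and some conflict with each other, so the resulting figure of roughly 29 per cent is only an illustration of why layering imperfect plans is still worthwhile.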

Bad plans will increase the likelihood of a global catastrophe.

We could hope for positive integration of different plans. For example, Plan A1 is good at early stages before the creation of a strong AI, and Plan A2 is a strong AI itself. Plan A3 will help implement all other plans. And plans A4, B and C may have strong promotional value to raise awareness of x-risks.

In the next chapters I will explain different blocks of the map.

Steps

The timeline of the map consists not only of possible dates, which could shift by decades depending on the speed of progress and other events, but also of steps, which are almost the same for every plan.

Step 1 is about understanding the nature of risks and creating a theory.

Step 2 is about preparation, which includes promoting the idea, funding, and building the infrastructure needed for risk mitigation. Step 2 cannot be done successfully without Step 1.

Step 3 is the implementation of preventive measures at a low technological level, that is, at the current level of technology. Such measures are more realistic (bans, video surveillance) but also limited in scope.

Step 4 is the implementation of advanced measures based on future technologies which will finally close most risks, but which themselves may pose their own risks.

Step 5 is the final state where our civilization will attain indestructibility.

These steps are best suited to Plan A1.1 (international control system) but are needed for all the plans.

 

Continue reading the full description of the map here: http://docdro.id/8CBnZ6g

Comments

This is a great piece of work – very comprehensive. Have you reviewed Leggett 2006? I would add alternative foods to your "improving sustainability of civilization" section. They are much cheaper than building up food stocks.

Yes, I will add it.

I even have some ad hoc ideas about how to do it: 1) Converting oil into edible fats – Germany did it after WW2. 2) Growing worms inside pieces of soil. 3) Chlorella. 4) Potatoes – if the whole territory of Russia were used to grow them, it would feed 30 billion people. 5) Bacteria converting cellulose into glucose. 6) https://en.wikipedia.org/wiki/Pinus_sibirica – it has edible nuts and their total mass is very large, as this tree covers millions of square kilometers of taiga.

Thanks. 1) Please provide a reference – I searched for converting petroleum earlier and did not find anything. 2) In the book, we found that more of the calories would go to non-food organisms using worms than with other options like cellulose-digesting beetles. 3) Artificial light is extremely inefficient, so this would only be feasible in the partial sun-blocking scenarios. 4) If the climate is 10°C cooler because of nuclear winter, maybe the potatoes would work in the tropics. But the question is whether they could handle the high UV caused by the destruction of the ozone layer. 5) I'm not sure exactly what you mean here. But we did look at chemical methods of converting cellulose into sugar, which are currently used to produce biofuels. We also looked at eating bacteria directly that grew on cellulose, but it is not appetizing and you would need low fiber for it to even produce net calories. We also considered the possibility of leaching sugar out of the material the bacteria were growing on, but this needs more investigation. 6) If the sun were blocked, these trees would die, but they could give us some temporary food.