
People trying to guard civilisation against catastrophe usually focus on one specific kind of catastrophe at a time. This can be useful for building concrete, reliable knowledge that others can then build on. However, the catastrophe-specific approach has some disadvantages:

1. Catastrophe researchers (including Anders Sandberg and Nick Bostrom) think that there are substantial risks from catastrophes that have not yet been anticipated. Resilience-boosting measures may mitigate risks that have not yet been investigated.

2. Thinking about resilience measures in general may suggest new mitigation ideas that were missed by the catastrophe-specific approach.

One analogy is that an intrusion (or hack) of a software system can arise from a combination of many minor security failures, each of which might appear innocuous in isolation. You can decrease the chance of an intrusion by adding extra security measures, even without a specific idea of what kind of attack will be attempted. Being able to power down and reboot a system, storing a backup, and being able to run the system in a "safe" offline mode are all standard resilience measures for software. These measures aren't necessarily the first things that would come to mind if you were modelling a specific risk, like a password being stolen or a hacker subverting administrative privileges, although they would be very useful in those cases. So mitigating a risk doesn't necessarily require a precise idea of the risk to be mitigated. Sometimes it can be done instead by thinking about the principles required for a system's proper operation (in the case of software, the preservation of its clean code) and the avenues through which it is vulnerable (such as the internet).
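To make the analogy concrete, here is a minimal sketch of how such generic measures compose in software. Everything in it (the file names, the supervisor logic, the `step` interface) is hypothetical, chosen only to illustrate the pattern; nothing in it assumes knowledge of the specific failure it will one day absorb.

```python
# A minimal resilience sketch: checkpointing, backup, restart, safe mode.
# The file paths and the `step` interface are hypothetical illustrations.
import json
import shutil

STATE_FILE = "state.json"
BACKUP_FILE = "state.backup.json"

def save_checkpoint(state):
    """Persist the current state, then copy it to an independent backup."""
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
    shutil.copyfile(STATE_FILE, BACKUP_FILE)

def load_checkpoint():
    """Prefer the live state; fall back to the backup if it is missing or corrupt."""
    for path in (STATE_FILE, BACKUP_FILE):
        try:
            with open(path) as f:
                return json.load(f)
        except (OSError, json.JSONDecodeError):
            continue  # this layer failed; try the next one
    return {}  # last resort: restart from a known-good empty state

def run(step, max_restarts=3):
    """Supervise `step`: reboot on failure, degrade to safe mode at the end."""
    for _ in range(max_restarts):
        state = load_checkpoint()
        try:
            step(state, safe_mode=False)  # normal, fully-featured operation
            save_checkpoint(state)
            return
        except Exception:
            continue                      # "reboot": reload state and retry
    step(load_checkpoint(), safe_mode=True)  # minimal, offline operation
```

No line of this targets a particular exploit; each layer just narrows the set of failures that can cascade, which is the same logic behind the civilisational measures below.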

So what would be good robustness measures for human civilisation? I have a bunch of proposals:


Disaster identification

Disaster research

* Build research labs to survey and study catastrophic risks (like the Future of Humanity Institute, the Open Philanthropy Project and others)

Disaster prediction

* Prediction contests (like IARPA's Aggregative Contingent Estimation "ACE" program)

* Expert aggregation and elicitation
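Expert elicitation only helps if the individual forecasts are pooled sensibly. As a toy illustration, here is one standard pooling rule, the geometric mean of odds, in a few lines of Python; the function and all the numbers are my own placeholders, not taken from any particular program.

```python
# Pool expert probability forecasts by averaging in log-odds space
# (the geometric mean of odds). Input numbers are made up.
import math

def aggregate_odds(probabilities):
    """Return the pooled probability from a list of expert estimates."""
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean))  # convert mean log-odds back to a probability

# Three hypothetical expert estimates of some catastrophe this decade:
print(aggregate_odds([0.01, 0.05, 0.10]))  # ~0.037
```

Averaging in log-odds space, rather than averaging the probabilities directly, stops a single high forecast from dominating a pool of much lower ones.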


Disaster mitigation

General mitigation

* Build research labs to plan risk-mitigation measures (including the Centre for the Study of Existential Risk)

* Improve political systems to respond to new risks

* Lobby for mitigation measures

* Build a culture of prudence in groups that run risky scientific experiments

* Build systems for disaster notification

* Improve the foresight and clear thinking of relevant decision-makers

Preventing large-scale violence

* Improve focused surveillance of people who might commit large-scale terrorism (this is controversial because excessive surveillance itself poses some risk)

* Improve cooperation between nations and large institutions

Preventing catastrophic errors

* Legislate to hold individuals more accountable for large-scale catastrophic errors they might cause (including by requiring insurance premiums for risky activities)


Disaster recovery

Shelters

* Build underground bomb shelters

* Provide a sheltered place for people to live with air and water

* Provide (or store) food and farming technologies (cf. Dave Denkenberger's *Feeding Everyone No Matter What*)

* Store energy and energy-generators

* Store reproductive technologies (which could include IVF, artificial wombs or measures for increasing genetic diversity)

* Store information about building the above

* Store information about building a stable political system, and about mitigating future catastrophes

* Store other useful scientific and technological knowledge (down to basics like reading and writing)

* Store some of the above in submarines

* (maybe) store biodiversity


Space travel

* Grow (or replicate) the International Space Station

* Improve humanity's capacity to travel to the Moon and Mars

* Build sustainable settlements on the Moon and Mars


Of course, some caveats are in order. 

To begin with, one could argue that surveilling terrorists is a measure specifically designed to reduce the risk of terrorism. But there are many different scenarios and methods through which a malicious actor could try to inflict major damage on civilisation, so I still regard this as a general robustness measure, granting that there is some subjectivity to all of this. If you knew absolutely nothing about the risks you might face, or about the structures in society that are worth preserving, then the exercise would be futile. So some of the measures on this list will mitigate a narrower set of risks than others, and that is unavoidable. Still, the list looks quite different from the one people produce using a risk-specific paradigm, which is the reason for the exercise.

Additionally, I'll disclaim that some of these measures already receive substantial investment, and others cannot be implemented cheaply or effectively. But many seem to me to be worth thinking more about.

Additional suggestions for this list are welcome in the comments, as are proposals for their implementation.


Related readings

https://www.academia.edu/7266845/Existential_Risks_Exploring_a_Robust_Risk_Reduction_Strategy

http://www.nickbostrom.com/existential/risks.pdf

http://users.physics.harvard.edu/~wilson/pmpmta/Mahoney_extinction.pdf

http://gcrinstitute.org/aftermath

http://sethbaum.com/ac/2015_Food.html

http://the-knowledge.org

Comments

Thanks. I agree that we should do cross-cutting work that addresses several or all catastrophic risks. At the same time, the catastrophic risks are so dissimilar (e.g. asteroids, AI, and synthetic biology have little in common) that many of the more effective interventions will be risk-specific.

It is also worth noting that prevention work in general seems more risk-specific than recovery work (response work might be somewhere in between). Also, note that for some risks (e.g. AI, asteroids), there may be no chance of recovery after a disaster.

Another relevant distinction is that between object-level interventions, which reduce X-risk directly, and meta-level/capacity-building interventions (e.g. setting up new X-risk institutions, raising awareness about X-risk among policy-makers), which reduce X-risk because we anticipate that they will enable us to do object-level work more effectively later on. Capacity-building is more often cross-cutting, and is plausibly quite important relative to object-level work at this point in time.

Additions:

  • space travel could include more details, like lowering launch costs, and stuff like what Deep Space Industries is doing with asteroid mining (in some ways making money from mining asteroids is kind of an instrumental goal for them, with the terminal goal being to get humans living in space full-time as opposed to just being on the ISS briefly)
  • preventing large-scale violence could include some component about shifting cultural zeitgeists to be more open and collaborative. This is hella hard, but would be very valuable to the extent that it can be done
  • I would add something like "collecting warning signs" under disaster prediction. For instance, what AI Impacts is doing with trying to come up with a bunch of concrete tasks that AIs currently can't beat humans at, which we could place on a timeline and use to assess the rate of AI progress. There might be a better name than "collecting warning signs" though.

Good post! I'd suggest adding the goal of making more people aware of existential risks, and of the need to do something about them. So maybe outreach about these topics should be one of the things to be done.

> Prediction contests (like IARPA's Aggregative Contingent Estimation "ACE" program)

An issue with this, of course, is that if you think a disaster is likely to be existential (or even just to kill you), you don't have an incentive to predict it. Prediction seems helpful for the sort of dynamic you laid out above, where a bunch of different issues can cause catastrophe but any one of them might not be existential on its own; if a threat is sufficiently catastrophic, though, this might not work.

Precisely! So you get people to predict smaller catastrophes and proxies for increasing risk level instead. Formulating the right questions and putting the answers together to estimate x-risk are the challenges.
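To illustrate what "putting the answers together" might look like, here is a toy decomposition. The structure, factoring the unobservable event into proxy questions forecasters can actually be scored on, is the point; every number is a placeholder, not an estimate.

```python
# Toy decomposition of an unforecastable event into scoreable proxy questions.
# All probabilities below are placeholders, not real estimates.

# Proxy question 1: "Does some disaster kill more than 10% of humanity this decade?"
p_major_catastrophe = 0.02

# Proxy question 2 (conditional): "Given such a disaster, does civilisation
# fail to recover within a century?"
p_no_recovery_given_major = 0.10

# The quantity we actually care about, which nobody can be scored on directly:
p_existential = p_major_catastrophe * p_no_recovery_given_major
print(p_existential)  # 0.002
```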