
This is a theoretical discussion outlining the need for an additional conceptual piece in the EA lexicon and the value of that concept. This still doesn't feel like a complete, well-defined set of concepts, and I would greatly appreciate feedback on improving the thoughts I’ve laid out here. Thanks to Vaidehi Agarwalla for proofreading.

Top Level Statements

Civilisation collapse is likely to incur significant costs in terms of long-term utility and to impair the resilience of humanity to existential risks.

For any given risk event, the probability of our civilisation collapsing is equal to or higher than the probability of the associated existential catastrophe.

Collapse risks should be evaluated alongside existential risks and given credence in longtermist analysis.


Introduction

The mainstream of effective altruist thought has undergone a conceptual evolution towards longtermist notions and “safeguarding the future”. Nick Bostrom’s work on existential risks laid some of the foundations of "longtermism" and the valuing of the continued existence of the human race as a generator of positive experiences. This has led to a focus on the avoidance of existential risks, which I believe to be relatively narrow given the range of competing risks that exist.

Among these, the most salient and costly to the welfare of the long-term future is a class of potential events that would revert human existence to a degraded state of scientific and technological understanding. Largely due to advances in these fields, we have developed healthcare and social systems that allow for a much greater quality of life than in the past, so such a regression would also imply a significant loss of life and/or quality-adjusted life years (QALYs) during the period of decline and for many future generations. Many more QALYs would be lost through the failure of our society to progress at its current rate.

This article will make the case that collapse risks (C-risks) are a neglected area in EA thinking and should be a priority, based on the expected outcomes from a longtermist perspective.


Definitions

I will be using the following definitions for various terms; they are explained further later in this piece, and an illustration of their usage is given in Figure 1.


Figure 1: An illustration of the meanings of different words used in this piece.


Existential Risk (X-Risk): One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. – Bostrom, 2002

Collapse Risk (C-Risk): The risk of an event causing a rapid and significant decline in the science and technology, culture, and social cohesion of all advanced societies globally.

Bounce: The reimplementation of a substantial core of modern technology within one human generation.

Dark Age: A period of time lasting more than one human generation with significantly impaired technology globally, failing to recover to the pre-collapse peak.

Renaissance: The rediscovery and reimplementation of pre-collapse technology, cultural values, and/or organisation after a Dark Age.

The reason a “Bounce” is bounded to one generation (approx. 50–100 years), with anything longer constituting a Dark Age followed by a Renaissance, is the capacity of living generations to pass on the skills required to research and fabricate current technology, which will be forgotten if that knowledge isn't transmitted to younger generations. This time gap therefore seems central to the capacity of a collapsed civilisation to “bounce” back close to our current trajectory. The concept of a Dark Age lasting less than the average human lifetime also seems out of place. Dark Ages can be indefinitely long on the condition that some form of civilisational recovery eventually occurs; otherwise the event would be classified as an existential catastrophe.

EDIT: I also need to clarify how C-risks differ from global catastrophic risks (GCRs). GCR is a broader concept that may include localised collapse, the Spanish flu, etc., and isn't specifically linked to the technological capacity of the human race to respond to X-risks. C-risks are therefore a subset of GCRs, but I find GCRs too general a concept, with no clear end result, whereas C-risks share a similar set of characteristics in their aftermath and are defined accordingly.

Valuing Collapse Prevention

Welfare

Avoiding extinction clearly has great value for adding good to the world by facilitating the existence of morally valuable lives in the future. Avoiding collapse also facilitates the existence of many additional valuable lives in the future. If we draw an equivalent graph, we can use an approximation similar to the astronomical waste argument: the loss of welfare is the difference in integrated area between our current trajectory and the post-Renaissance trajectory, as shown in Figure 2.


Figure 2: Illustration of modified astronomical waste argument in the case of C-risk.

Clearly, while more welfare is lost in the case of an existential event, the loss from a collapse can still be substantial over the fullness of time, with the length of the Dark Age being a key factor in the extent of that loss.
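To make the integrated-area comparison concrete, here is a minimal toy sketch of the modified astronomical waste argument in Python. The growth rate, the depth and length of the Dark Age, and the time horizon are all illustrative assumptions of mine, not estimates from this post.

```python
# Toy model of the welfare lost to a collapse (illustrative numbers only):
# a baseline trajectory with steady growth versus a trajectory with a
# collapse, a flat Dark Age, and a Renaissance that resumes growth from the
# pre-collapse level.
import numpy as np

YEARS = 500
t = np.arange(YEARS)

def baseline(t):
    """Welfare per year on the current (no-collapse) trajectory."""
    return 1.0 * 1.02 ** t  # assume 2% annual growth in welfare

def with_collapse(t, collapse_at=50, dark_age=200):
    """Welfare with a collapse at `collapse_at`, a Dark Age of `dark_age`
    years at a fraction of pre-collapse welfare, then renewed growth."""
    w = baseline(t).copy()
    dark = (t >= collapse_at) & (t < collapse_at + dark_age)
    w[dark] = 0.1 * baseline(collapse_at)
    after = t >= collapse_at + dark_age
    w[after] = baseline(collapse_at) * 1.02 ** (t[after] - collapse_at - dark_age)
    return w

loss = np.trapz(baseline(t) - with_collapse(t), t)
total = np.trapz(baseline(t), t)
print(f"Fraction of {YEARS}-year welfare lost to the collapse: {loss / total:.1%}")
```

Lengthening `dark_age` in this toy model steadily enlarges the lost area, which is the qualitative point of Figure 2: the duration of the Dark Age is a key driver of the welfare loss.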

Inability to Mitigate Existential Risks

As a further crude generalisation, the technological level of a civilisation can be thought of as linked to the types of existential threat it can face and respond to. A paleolithic tribal civilisation is unlikely to create a general artificial intelligence unaligned with its interests that proceeds to wipe it out; on the other hand, an advanced multi-planet species is unlikely to be wiped out by an asteroid impact, and is significantly better equipped to detect such a threat and mitigate or prepare for it.

A rough classification divides anthropogenic X-risks from natural X-risks. The former category includes bioengineered pandemics and unaligned superintelligence, among a long list; the latter includes supervolcanic eruptions, solar flares, gamma-ray bursts, and others. Scaling with a Dark Age’s length is an increased likelihood that an existential threat will occur which would otherwise have been mitigated had society’s technological level remained on its current trajectory. Figure 3 illustrates this with a few examples. A Dark Age can thus be thought of as a period of decreased ability to mitigate natural X-risks.


Figure 3: Potential X-risks left unmitigated through a lack of response capacity during a Dark Age.
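As a rough numerical sketch of how unmitigated natural risk scales with the length of a Dark Age, the snippet below assumes each natural X-risk has a constant, independent annual probability and that no mitigation is possible during the Dark Age. The per-year probabilities are placeholders chosen for illustration, not estimates.

```python
# Probability that at least one otherwise-mitigable natural X-risk occurs
# during a Dark Age of a given length, assuming independent events with
# constant (hypothetical) annual probabilities.
annual_risks = {
    "supervolcanic eruption": 1e-5,
    "large asteroid or comet impact": 1e-6,
    "civilisation-threatening solar flare": 1e-4,
}

def p_unmitigated_event(years: int) -> float:
    p_none = 1.0
    for p_annual in annual_risks.values():
        p_none *= (1.0 - p_annual) ** years
    return 1.0 - p_none

for years in (100, 500, 1000, 5000):
    print(f"Dark Age of {years:>4} years: "
          f"P(at least one unmitigated event) = {p_unmitigated_event(years):.2%}")
```

Under constant hazard rates, the cumulative probability grows roughly linearly for short Dark Ages and saturates for very long ones.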

Additionally, once humanity enters a Renaissance period it will be in no better position to deal with anthropogenic X-risks that we had not yet reached on our current trajectory at the time of collapse. Those already discovered and mitigated (the “Great Filters” we have already passed) will be of benefit if the solutions are rediscovered along with them, and those we have discovered but not yet solved, such as nuclear weapons, may help or hinder depending on how the post-collapse society responds to their rediscovery.


Value Drift During a Dark Age

Another existential risk mentioned in Bostrom’s paper is the possibility that the future of our civilisation deviates sufficiently from our set of values as to render that version of humanity meaningless from today’s perspective, similar to the ship of Theseus problem. Under the effects of cultural amnesia across multiple generations in highly volatile conditions, the loss of knowledge and technology, and regression to more fundamental modes of social organisation, a Dark Age may also represent a time of heightened risk of this type of X-risk manifesting.

If we take the analogy of a vector in a multi-dimensional value space, the extent to which the values of a future civilisation deviate from ours could be measured by the angle between the two value vectors (for example via their normalised dot product, since alignment is what matters here). A Dark Age would increase the drift of this vector away from the original, until it could even end up negatively aligned with our current values.
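A minimal numerical sketch of this analogy is given below; the number of value dimensions, the size of the per-generation perturbation, and the random-walk model of drift are all invented purely for illustration. A cosine similarity of +1 means values identical to ours, 0 means unrelated, and a negative value means a civilisation aligned against our current values.

```python
# Toy model of value drift: a civilisation's values are a unit vector in an
# abstract value space, perturbed slightly each generation. The cosine
# similarity with today's values measures how aligned the future
# civilisation remains with us.
import numpy as np

rng = np.random.default_rng(0)
DIMS = 10     # number of (hypothetical) value dimensions
DRIFT = 0.15  # size of the per-generation perturbation (hypothetical)

def unit(v):
    return v / np.linalg.norm(v)

today = unit(rng.normal(size=DIMS))
values = today.copy()
for generation in range(1, 41):
    values = unit(values + DRIFT * rng.normal(size=DIMS))
    if generation % 10 == 0:
        similarity = float(np.dot(today, values))
        print(f"after {generation:2d} generations: cosine similarity = {similarity:+.2f}")
```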

The subjectivity of moral and ethical values and the tolerance of value drift in society (depending on which society is meant) are highly debatable and beyond the scope of this article.


Present Actions Affecting the Trajectory of the Renaissance

There are many ways in which the existence of our civilisation will affect a future civilisation after a Dark Age. For example, in developing through the agricultural and industrial revolutions, humanity has stripped vast quantities of natural resources from the planet, usually starting with those easiest and most profitable to access. The ability of a future human civilisation to retrace our development trajectory is therefore likely to be significantly impaired, which will also affect the viability of scaling up reverse-engineered modern technology.

The ability of a future civilisation to achieve a Renaissance will be heavily affected by the survival of relevant texts in a form accessible to that culture, or by the reverse engineering of surviving artefacts, but also by a healthy dose of chance depending on the mode of collapse and how much is preserved through a Dark Age. Previous examples of technologies that were lost for centuries or longer, despite the abundance of human progress in that time, include such simple technologies as concrete designed for marine environments and Damascus steel.

Anthropogenic climate change, pollutants, and bioaccumulating toxins may be further factors that significantly impede a transition out of a Dark Age, or may even represent post-collapse X-risks. As such, actions taken today that affect the trajectory of recovery could substantially improve or degrade the prospects of the long-term future.

Scale, Probability and Urgency

This section discusses several parameters important for evaluating C-risks in the same light as X-risks, alongside the ITN framework that evaluates cause areas in terms of importance, tractability, and neglectedness. The following three factors could be seen as sub-factors of the “importance” metric.


Scale

There is a case to be made that we currently live in one vast, interconnected, global civilisation comprised of technologically advanced nations. The history of anatomically modern humans is littered with the collapse of civilisations that were the pinnacle of their time, resulting in regional Dark Ages in which many technologies and cultural advances were temporarily or permanently lost. One of the most significant examples is detailed in the book 1177 B.C. by Dr. Eric Cline, in which a series of connected but sovereign empires around the Mediterranean and as far as Babylonia, including the Egyptians and proto-Greek Mycenaeans, collapsed, with a significant role played by their .

In the modern day, the cultural, financial, and technological systems of different nations are more interconnected and interdependent than at any other time in history. Many advanced technologies require components from dozens of countries, with international shipping of raw materials and parts between sites before the desired product is complete. The financial crash of 2008 made it apparent that our financial systems are also heavily interdependent.

There are essentially no technologically advanced societies that are not part of this connected world, stemming in part from neoliberalism; as such, a shock to one society is more likely than ever before to cause systemic feedbacks affecting many others, whether positive or negative. The ability of countries and international bodies to serve resilience functions in the case of a localised collapse could help prevent it propagating into a more widespread C-risk, or could instead amplify the chaos. It seems highly likely that for at least one class of C-risks, such as nuclear war, the increased interconnectedness of the world acts more as an amplifier than as a source of resilience, if the Cold War and other recent history are any evidence.


Probability

An X-risk occurring is conditional on a C-risk occurring before, or at the same time as, it, since civilisation cannot persist without people: any event that annihilates humanity also collapses civilisation. There are some slightly non-analogous classes of risk, such as value drift, where an existential catastrophe can take place while people continue to live in a technologically advanced and developing society that has nevertheless become morally regressive by our lights. Whether this would count as a continuation of the same civilisation is debatable.

The probability of a C-risk or X-risk event occurring seems highly relevant when prioritising cause areas. If collapse occurs, many other X-risks become less relevant, since new anthropogenic X-risks can only arise once civilisation has reached the required level of technology, which will not happen during a Dark Age.

In the extreme, a C-risk entails an immediate X-risk, such as an extraterrestrial demolition of our planet (possibly to clear space for an interstellar highway). In other cases the two risks arising from the same cause may be highly disjoint, depending on the severity of the event. For example, an airborne, infectious pandemic lethal to 99.99% of the population would still leave over 700,000 people alive on the planet (0.01% of more than 7 billion), so it is relatively unlikely to become an existential risk but would certainly qualify as a collapse risk. If it killed 100% of the population, it would clearly qualify as both.

Many of the most pressing threats to humanity are far more likely to cause collapse than to be an outright existential threat with no possibility of recovery. From there, it may be decades or centuries before an eventual Renaissance, depending on a huge variety of complicating factors. It is therefore quite difficult to say whether something constitutes an X-risk when there is an indefinite period during which humanity, if not extinct, could recover. This often leads to simplifying the question to the outright risk of extinction, but this too is complicated by second-order effects: if an event causes civilisation to collapse and a secondary X-risk that could otherwise have been avoided then wipes out life on Earth, the first event has indirectly contributed to the extinction.
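One way to make this second-order effect explicit is to decompose the total extinction probability into a direct path and an indirect path via collapse followed by an unmitigated secondary event during the Dark Age. The sketch below uses placeholder probabilities of my own, not estimates.

```python
# Toy decomposition of extinction probability for a single risk event into a
# direct path (the event kills everyone outright) and an indirect path (the
# event collapses civilisation and a secondary X-risk then goes unmitigated
# during the resulting Dark Age). All numbers are placeholders.
p_direct_extinction = 0.001      # event is immediately existential
p_collapse = 0.05                # event only collapses civilisation
p_secondary_in_dark_age = 0.02   # unmitigated natural X-risk during the Dark Age

p_indirect = p_collapse * p_secondary_in_dark_age
p_total = p_direct_extinction + p_indirect
print(f"direct path:   {p_direct_extinction:.4f}")
print(f"indirect path: {p_indirect:.4f}")
print(f"total:         {p_total:.4f}")
```

In this toy example the indirect, collapse-mediated path contributes on the same order as the direct path, which is why ignoring C-risks can understate the overall extinction risk attributable to an event.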


Urgency

We don’t appear to have multiple “runs” at this timeline, so the first event that leaves humanity collapsed or extinct will render all other modes of failure irrelevant. We should therefore give higher priority to events that are most likely to happen, and to happen soon, and that would sufficiently derail the course of human progress as to render mitigation efforts for other causes (especially anthropogenic X-risks) obsolete.

These could function as additional factors alongside the usual ITN framework of importance, tractability, and neglectedness. C-risks thereby become more prominent in the calculus of which problems are most impactful to work on.
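As an illustration of how such factors might enter a prioritisation calculation, here is a hedged sketch of an ITN-style score extended with probability and urgency terms. The multiplicative form and every number below are illustrative assumptions of mine, not a proposal from this post or from any existing prioritisation framework.

```python
# Hypothetical extension of an ITN-style comparison with probability and
# urgency factors. The scoring rule and all figures are illustrative only.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    importance: float     # relative welfare at stake if the event occurs
    tractability: float   # how much marginal work helps (0-1)
    neglectedness: float  # how under-resourced the area is (0-1)
    probability: float    # chance of the event occurring this century
    urgency: float        # weight for how soon it could occur (0-1)

    def score(self) -> float:
        return (self.importance * self.tractability * self.neglectedness
                * self.probability * self.urgency)

risks = [
    Risk("nuclear war (collapse)", 6, 0.3, 0.4, 0.10, 0.9),
    Risk("engineered pandemic (collapse)", 8, 0.4, 0.5, 0.05, 0.8),
    Risk("asteroid impact (extinction)", 9, 0.6, 0.2, 0.0001, 0.2),
]

for r in sorted(risks, key=lambda r: r.score(), reverse=True):
    print(f"{r.name:<34} score = {r.score():.4f}")
```

Under a rule like this, a likelier, nearer-term collapse scenario can outrank a lower-probability extinction scenario, which is the shift in emphasis this post argues for.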


Conclusions

C-risks are an important and understudied field that heavily overlaps with existential risks but significantly alters the relative weighting of different cause areas. Current EA prioritisation frameworks don’t sufficiently account for the timescale and likelihood of C-risks, which matters when only one of these events can ever occur.

The concept of C-risks and the appropriate use of the terminology may help foster additional efforts aimed at mitigating these scenarios or improving our ability to bounce back from them, minimising the length of any Dark Age and leading to a faster Renaissance should one occur.



Comments

Good to see more people thinking about this, but the vocabulary you say is needed already exists - look for things talking about "Global Catastrophic Risks" or "GCRs".

A few other notes:

It would help if you embedded the images. (You just need to copy the image address from imgur.)

" with a significant role played by their . " <- ?

" the ability for the future of our civilisation to deviate sufficiently from our set of values as to render this version of humanity meaningless from today’s perspective, similar to the ship of Theseus problem. " <- I don't think that's a useful comparison.


Thanks for the comment! I am aware of the GCRI and GCRs, but I don't see the terms getting used much, and (in both cases) they seem to get conflated with X-risks. I haven't addressed this at all in the piece, so I will add an edit.

Thanks for catching the typo. I've been trying to embed the images but it hasn't been working, so I'm contacting support for help.

The analogy I was making was that socially held values are liable to change and (usually) improve over time, and any specific value might not disqualify a future civilisation from being counted as valuable by us today, but at some point in the future there may be sufficient drift to make that claim. This may happen gradually and piecemeal as in the ship of Theseus. The full thought experiment also mentions restoration of rotting parts and asks whether these are also the ship of Theseus, similar to a Renaissance period.

These are not the same thing. GCR is just anything that's bad on a massive scale, civilization doesn't have to collapse.

There are a variety of definitions, but most of the GCR literature is in fact concerned with collapse risks. See Nick Bostrom's book on the topic, for example, or Open Philanthropy's definition: https://www.openphilanthropy.org/research/cause-reports/global-catastrophic-risks/global-catastrophic-risks

Has anyone inquired whether primitive, segregated communities could function as a (low-cost) kind of "ark" - a hedge for our species, in a secluded environment protected from some global catastrophes? My guess is that they would probably survive pandemics, or a "simple" collapse of civilization. But they would be quite fragile to environmental changes; perhaps there could be a way to make them more resilient?

I've been thinking for a while that civilisational collapse scenarios affect some of the common assumptions about the expected value of movement building or saving for effective altruism. This has knock-on implications for when things are most hingey.

I agree that "Dark Age mitigation" is a very neglected area; but I am not so sure about "Dark Age prevention". Governments and scholars are often concerned about long-term stable growth, even outside EA. In addition, I wonder what a general policy aiming at "minimising the length of Dark Ages and leading to a faster Renaissance" would look like. If it were in Asimov's Foundation style, it'd likely be secret (especially if you are concerned about world war scenarios).

One of the objections I've heard to talking about s-risks separately from x-risks is that s-risks are already fully captured by the x-risk category and nothing is gained by making them distinct. A similar objection could reasonably be applied here: that c-risks are just a small subset of x-risks and not worth considering as a separate category in need of a name (rather, they would just be x-risks from civilization collapse).

Do you have any thoughts on this? For example, can you think of cases that are c-risks but not x-risks such that they don't entirely overlap, like maybe a collapse scenario that does not pose an existential risk?

I have tried to lay out in the section on probability, above, that most X-risks are a subset of C-risks, as collapse usually has to happen before an existential event. Most X-risks in their moderate form, such as a nuclear winter lasting a few years, moderate climate change, or a global pandemic, seem much more likely to pose a C-risk than an X-risk.

I just skimmed the post.

Many of the most pressing threats to the humanity are far more likely to cause collapse than be an outright existential threat with no ability for civilisation to recover.

This claim is not supported, and I think most people who study catastrophic risks (they already coined the acronym C-risk, sorry!) and x-risks would disagree with it.

In fact, civilization collapse is considered fairly unlikely by many, although Toby Ord thinks it hasn't been properly explored (see his recent 80k interview).

AI in particular (which many believe is easily the largest x-risk) seems quite unlikely to cause civilization collapse or c-risk without also x-risk.

From what I understand, the loss of welfare is probably much less significant than the decreased ability to prevent X-risks. Although, since X-risks are thought to be mostly anthropogenic, civilization collapse could actually significantly reduce immediate x-risk.

In general, I believe the thinking goes that we lose quite a small fraction of the light cone over the course of, e.g., a few centuries... this is why things like "long reflection periods" seem like good ideas. I'm not sure anyone has tried to square that with simulation hypothesis or other unknown-unknown type x-risks, which seem like they should make us discount much more aggressively. I guess the idea there is probably that most of the utility lies in universes with long futures, so we should prioritize our effects on them.

I suspect someone who has more expertise on this topic might want to respond more thoroughly.
