
This is a writeup of a finding from the Causal Networks Model, created by CEA summer research fellows Alex Barry and Denise Melchin. Owen Cotton-Barratt provided the original idea, which was further developed by Max Dalton. Both, along with Stefan Schubert, provided comments and feedback throughout the process.

 

This is the final part of a multi-part series of posts explaining what the model is, how it works, and our findings. We recommend reading the ‘Introduction and user guide’ before this post, as it gives the correct background on our model. The series has the following structure:

  1. Introduction & user guide (Recommended before reading this post)

  2. Technical guide (optional, description of the technical details of how the model works)

  3. Findings (writeup of all findings)

  4. Climate catastrophe (this post)

 

The structure of this post is as follows:

 

  1. Introduction

  2. Predicting probabilities

  3. Implications

  4. Conclusion

 

1. Introduction

Many people in the Effective Altruism community think that the value of the far future, and the vast number of potential future humans that could exist there, mean that a top priority is working to prevent existential risks that could wipe out humanity or significantly curtail our future potential.

 

By far the majority of the research into such risks currently being conducted by the EA community is on the risks from superintelligent AI, with some additional work being done on risks from synthetic biology and nuclear war. When listing examples of existential risks (and the closely related global catastrophic risks, or GCRs [1]), the potential of runaway climate change turning out to be a GCR is also sometimes mentioned [2]. However, there seems to have been comparatively little work on estimating the likelihood of runaway climate change, or the chance of it being a GCR, despite climate change being one of the better-studied and most data-rich areas outside of the EA community. There also seem to be very few EAs working on object-level problems in climate change.

 

When working on the Causal Networks Model we considered climate change as a variable due to its interconnectedness: it is affected by many of the actions we take, and itself affects many of the outcomes we care about. We also attempted to estimate climate change’s influence on existential risk and integrated this into the model, leading to this post. However, the arguments and conclusions laid out below do not require or rely upon the rest of the model.

 

2. Predicting probabilities

 

The IPCC’s 2015 synthesis report predicts an approximately 10% chance of 6+ degrees of warming by 2100 under mid-to-high emissions scenarios [3], with warming of 8 degrees or higher being hard to rule out due to the nature of the uncertainty in the models [4].

 

These levels of warming, whilst unlikely to cause anything like human extinction directly, nevertheless have the potential to be a GCR [5]. This is because they could cause very significant changes in agricultural productivity, rendering much currently farmed land barren, as well as increasing the number and severity of many kinds of natural disasters, amongst many other effects.

 

The combined impact of all these simultaneous stressors being applied globally does not seem to be well studied, but it appears plausible that they have a >20% chance of acting as a GCR and leading to the effective destruction of the global economy.

 

Once in this state of 6+ degrees of warming and a collapsed global economy, it again seems plausible (although very uncertain) [6] that the inhospitality of the new climate would render humanity permanently unable to recover to our current level of technological and civilisational sophistication. This would then act as a ‘loss of potential’ x-risk [7].

 

Whilst the latter two stages of the argument are quite speculative, this is no worse than the case for other existential risks, and it seems hard to defend a <0.1% chance of existential risk from runaway climate change before the late 2100s, with estimates as high as a few percent also seeming reasonable.
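To make the arithmetic behind this bound explicit, here is a minimal sketch of the compound estimate. The 10% and >20% figures are the rough numbers quoted above; the non-recovery probability is an assumed free parameter, since (as footnote [6] notes) it is the most uncertain step:

```python
# Rough compound estimate of existential risk from runaway climate change.
# The warming and GCR figures are the illustrative numbers quoted above;
# the non-recovery probability is an assumption, varied for sensitivity.

p_warming = 0.10  # P(6+ degrees of warming by 2100), from [3]
p_gcr = 0.20      # P(that warming acts as a GCR, collapsing the global economy)

for p_no_recovery in (0.05, 0.25, 0.50):  # assumed; see footnote [6]
    p_xrisk = p_warming * p_gcr * p_no_recovery
    print(f"P(no recovery) = {p_no_recovery:.0%} -> P(x-risk) ~ {p_xrisk:.2%}")
```

On these numbers, any non-recovery probability above ~5% already pushes the compound estimate past the 0.1% floor defended above, and a 50% non-recovery probability gives 1%, in line with the ‘as high as a few percent’ upper estimates.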

 

3. Implications

 

The fact that runaway climate change has a significant chance of being an existential risk raises a number of important implications:

1. Anything that increases the risk of runaway climate change (e.g. emitting more CO2) should be considered damaging on an existential-risk scale. This is in contrast to most existential risks, where almost all ‘unrelated’ activities do not affect the risk: for example, you should not expect any of your day-to-day activities to influence the chance of global nuclear war.

 

One particular implication is that any activity one expects to cause net CO2e emissions, without correspondingly reducing existential risk in some other way, should be considered likely to have a significantly negative impact. As well as things such as driving and jet travel, this could potentially also apply to activities currently considered robustly good in other ways, such as donating money to global poverty charities or improving the welfare of farmed animals, both of which seem likely to increase CO2e emissions. (See ‘cage-free costs’ in Part III for an elaboration of the latter point.)

 

2. Conversely, decreasing the risk of runaway climate change (for example, by researching potential geoengineering solutions or donating to Cool Earth) could potentially be an effective way to reduce existential risk. Whether or not there is comparative value in becoming a researcher in this area seems to depend to a large degree on whether you expect conventional climate change research to adequately cover the tail risks.

 

There also seems to be a particular appeal to this sort of action, because the arguments for runaway climate change as an existential risk seem less speculative [8] than those for some other existential risks; most of the uncertainty comes from the likelihood of a GCR leading to extinction. Therefore, if you were convinced of the value of preventing GCRs but sceptical of the value of research into more speculative risks, reducing emissions might fill an ethical niche.

 

3. Due to the comparative strength of the arguments for runaway climate change as an existential risk, and the relatively concrete estimates of its probability, it seems like a good candidate to be used as an example when introducing the concept of existential risks.

 

4. Conclusion

 

There seem to be good arguments in favour of runaway climate change being a potential existential risk. Although one might consider other existential risks higher priority due to greater likelihood, neglectedness or proximity, runaway climate change has a couple of distinctive features that seem worth exploring. The first is that the interconnected nature of climate change means that many innocuous-seeming acts may be predictably increasing existential risk. The second is the relative neglectedness of climate change within Effective Altruism.

 

-------------------------------------------------------------------------------------

 

This concludes our series of posts on the Causal Networks Model - we hope they have been informative. If you are interested, as mentioned in Part I, you can access the model yourself to see how different assumptions affect the results.

Feel free to ask questions in the comment section, or email us (denisemelchin@gmail.com or alexbarry40@gmail.com).

 

-------------------------------------------------------------------------------------

 

[1] Defined as events that would kill at least 10% of the population of the Earth.

 

[2] See e.g. http://www.huffingtonpost.co.uk/simon-beard/climate-change_b_18110618.html or https://80000hours.org/problem-profiles/climate-change/

 

[3] http://www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full_wcover.pdf

 

[4] As discussed on page 279 here https://scholar.harvard.edu/files/weitzman/files/fattaileduncertaintyeconomics.pdf

 

[5] I could not find much good discussion of this, but see e.g. https://www.greenfacts.org/en/impacts-global-warming/index.htm for a (clearly motivated) elaboration of the impacts of 4 degrees of warming. Very extreme cases are also considered briefly in [4].

 

[6] This seems to be the main weakness in the argument, and a place where people seem to reasonably and significantly disagree. Whilst the arguments are fairly robust to different estimates of humanity’s likelihood of recovering, if one thinks humanity is very likely (95%+) to recover, then the argument loses significant bite compared to other existential risks.

 

[7] Discussed under “2.2. Permanent stagnation” here http://www.existential-risk.org/concept.html

 

[8] ‘Speculative’ may not be quite the right word here; I am trying to convey that the type of risk here seems somewhat qualitatively different to that in (say) the AI risk case. In the climate change case most experts agree roughly on the probability of the bad outcome, and the remaining uncertainty is empirical. This is opposed to the AI risk case, where there is significant disagreement among experts about the level of risk. It thus seems that there should perhaps be some outside-view considerations or similar that favour the climate change case.

 

Comments
[anonymous] · 6y

One factor pushing against climate change being a >0.1% existential risk is that 6+ degrees of warming would most probably take 150+ years to happen, because the oceans would absorb a large portion of the warming generated. By this time, it's plausible that we will have developed artificial superintelligence, which will either (a) have killed us already or (b) enable us to solve the climate change problem by developing new forms of clean energy and carbon dioxide removal technology. Indeed, we are likely to get the tech mentioned in (b) even if we don't develop artificial superintelligence. This suggests that inferring from estimates of climate sensitivity overstates the existential risk of global warming, because those estimates include warming over 100+ year timescales.

This suggests most of the risk comes from abrupt runaway irreversible warming. It's not clear what the risk of that is.

One problem is that with current technology, it is quite expensive to prevent extreme climate change: with emissions reductions, it is trillions of dollars; even with solar radiation management (a type of geoengineering), it is tens of billions of dollars. Depending on the type of solar radiation management, it could result in rapid warming if turned off by another catastrophe, causing a double catastrophe. But there are adaptation techniques that are cheaper (~$100 million). And since these techniques protect against many other catastrophes, I'm pretty sure they are far more cost-effective than preventing extreme climate change. But it would be interesting to compare different interventions quantitatively in your model.
