A paper I have written on a form of geoengineering known as 'stratospheric aerosol injection' has recently been published in Futures. The paper explores whether, assuming that reducing existential risk is overwhelmingly important, stratospheric aerosol injection should be researched. The following aspects are likely to be of some interest to EAs:
- It provides, to my knowledge, the most comprehensive existing discussion of the scale of the existential risk posed by climate change.
- It provides the most comprehensive and up-to-date discussion of geoengineering from an existential risk reduction point of view.
- The framework it uses to discuss the problem of 'moral hazard' may be of use in other domains (though the framework is David Morrow's, not my own).
The paper is available on my website, and on my academia page. All views are my own, not my employer's. All comments are welcome.
===
Abstract: In the wake of the continued failure to mitigate greenhouse gases, researchers have explored the possibility of injecting aerosols into the stratosphere in order to cool global temperatures. This paper discusses whether Stratospheric Aerosol Injection (SAI) should be researched, on the controversial ethical assumption that reducing existential risk is overwhelmingly morally important. On the one hand, SAI could eliminate the environmental existential risks of climate change (arguably around a 1% chance of catastrophe), and reduce the risks of interstate conflict associated with extreme warming. Moreover, the risks of termination shock and unilateral deployment are overstated. On the other hand, SAI introduces risks of interstate conflict which are very difficult to quantify. Research into these security risks would be valuable, but also risks reducing willingness to mitigate. I conclude that the decision about whether to research SAI is one of ‘deep uncertainty’ or ‘complex cluelessness’, but that there is a tentative case for research initially primarily focused on the governance and security aspects of SAI.
Highlights
- It is uncertain whether Stratospheric Aerosol Injection (SAI) research is justifiable, but a tentative case can be made for security-focused research.
- SAI would eliminate the arguable environmental existential risks of climate change (<1% – 3.5%).
- It is extremely unclear whether SAI would reduce willingness to mitigate, and extensive efforts should be made to reduce the risk of mitigation obstruction.
- Termination shock risk is overstated.
- The risk of unilateral deployment is overstated, but SAI introduces other serious security risks.
I discuss this in the paper under the heading of 'unknown risks'. I tend to deflate their significance because SAI has natural analogues: volcanic eruptions, which have not set off such catastrophic spirals. The massive 1991 Pinatubo eruption reduced global temperatures by roughly 0.5 degrees. There is also already an enormous amount of tropospheric cooling due to industrial emissions of sulphur and other particulates. The effects of this could be very substantial: from memory, cancelling out up to half of the total warming effect of all CO2 ever emitted. Due to concerns about air pollution, we are now reducing emissions of these tropospheric aerosols. This could have a very substantial warming effect.
Concerns about model uncertainty cut in both directions, and I think the preponderance of probabilities favours SAI (provided it can be governed safely). Estimates of the sensitivity of the climate to CO2 are also beset by model uncertainty. The main worry there is the unprecedented warming effect from CO2 having unexpected runaway effects on the ecosystem. It is clear that SAI would allow us to reduce global temperatures, and so would on average reduce the risk of heat-induced tipping points or runaway processes. Moreover, SAI is controllable on tight timescales - we get a response to our action within weeks - allowing us to respond if something unexpected starts happening as a result of GHGs or of SAI. The downside risk associated with model uncertainty about climate sensitivity to GHGs is much greater than that associated with the effects of SAI, in my opinion. SAI is insurance against this model uncertainty.
Good point. Agreed. I had not considered this.
This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the ...