Nick Beckstead lays out the arguments for thinking about extinction risks and non-extinction risks together, with the possible exception of AI, at the GiveWell blog.


This is a comment from Jim Terry, reposted with permission (none of it mine)

"There is essentially no precedent for level 1 catastrophes" is immediately followed by a list containing at least one level 1 catastrophe, by his previous definition. ("Hundreds of millions of people": the Black Death qualifies by body count, depending on your estimates; the others would count if you adjust proportionally for world population.) If we retroactively apply the threshold of 5% or more of the global population dying (350m in today's terms), the Mongol conquests (100m, 20-25%), the Wars of the Three Kingdoms (40m, 10-25%), the Plague of Justinian (25-50m, 10-25%), and potentially the Native American die-off following the Columbian exchange (estimates are hard) all count. (Note that all of these except the Plague of Justinian were spread over decades, but even with some generational amortization, all of them except the Native American die-off likely make the cut anyway.)
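To make the "adjust proportionally for world population" arithmetic concrete, here is a minimal sketch in Python. The death tolls and contemporaneous world-population figures are rough, commonly cited estimates filled in for illustration (not figures from the post or spreadsheet), so treat the percentages as ballpark only.

```python
# Minimal sketch of the "adjust proportionally for world population" check.
THRESHOLD = 0.05  # 5% of the contemporaneous world population

# (estimated deaths, estimated world population at the time), both in millions.
# These are rough, commonly cited estimates, not figures from the post.
catastrophes = {
    "Black Death (14th c.)": (100, 450),
    "Mongol conquests (13th c.)": (100, 430),
    "Wars of the Three Kingdoms (3rd c.)": (40, 190),
    "Plague of Justinian (6th c.)": (35, 200),
}

for name, (deaths, world_pop) in catastrophes.items():
    share = deaths / world_pop
    verdict = "meets" if share >= THRESHOLD else "is below"
    print(f"{name}: ~{share:.0%} of world population ({verdict} the 5% threshold)")
```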

"For the most part, these events don’t seem to have placed civilizational progress in jeopardy."

Wild speculation! We don't know the counterfactual scenarios. My off-the-cuff counter-speculation is that if not for the Plague of Justinian, we might be settling Alpha Centauri by now, and looking back at the possibility of still being an earthbound civilization in the third millennium as a grim dark alternate history.

To point to specific past events that probably should be considered level 1 catastrophes, not just by death toll but by impact: the Mongol conquests are a plausible explanation for why the Muslim world didn't continue to be dramatically more enlightened and advanced than non-ERE Europe. 1258 was one of those watershed years in history, after which the future of Islam looked a lot grimmer. The Mongols also had a dramatically bad impact on the Russian cultural bloc (viz., they overran it and infected it with their values), which did some bad things to human progress. Generally speaking, the things Nick worries might happen to social progress following a level 1 catastrophe all did in fact happen in this instance. Worries about a stall in scientific progress are validated here, too; the loss of the House of Wisdom is probably the most dramatic example, but flourishing scientific progress took a general downturn.

Also, consider the fall of the Western Roman Empire. It was a catastrophic event widely thought to have had a significant negative impact on technological and social progress but without a particularly impressive direct death toll.

Both of these tie into the disastrous repercussions of the Plague of Justinian: Justinian later became known as the emperor who reconquered Italy and large portions of the Med. If not for the plague weakening the ERE by killing 40% of his bros, things might have gone very differently. (Potential outcomes: no Middle Ages; ERE hegemony over the West and the Arab world; Mongol aggression confined to East Asia, because horse archers don't do as well against automatic weapons.)

Interestingly enough, the one non-modern global catastrophe the author was aware of may actually have had some positive social impact. It's a controversial historical view, but the thought is that the Black Death may have created some space (...where a lot of people used to be...) for the Renaissance to blossom.

http://www.gmanetwork.com/.../the-black-death-gave-life...

Copied from my comment on a Facebook post:

I especially liked Nick's sapling analogy, and found it fitting. I worry that EAs are drawn from subgroups with a tendency to believe that relatively simple formalistic and mechanistic models essentially describe complex processes, with perhaps some loss of accuracy (relative to more complex models) but no change in the general sign and magnitude of the result. This seems really dangerous.

"Imagine a Level 1 event that disproportionately affected people in areas that are strong in innovative science (of which we believe there are a fairly contained number). Possible consequences of such an event might include a decades-long stall in scientific progress or even an end to scientific culture or institutions and a return to rates of scientific progress comparable to what we see in areas with weaker scientific institutions today or saw in pre-industrial civilization." It seems likely that any Level 1 event will have disproportionate effects on certain groups (possibly ones that would be especially useful for bringing civilization back from a level 1 event), and this seems like a pretty under-investigated consideration. A pandemic that was extremely virulent but only contagious enough to spread fully in big cities. Or extreme climate change or geoengineering gone awry knocking out mostly the global north or mostly equatorial regions or coastal regions.

He doesn't really discuss the possibility of a Level 1 event immediately provoking a Level 2 event, but that also seems possible (for example, one catastrophic use of biowarfare could incentivize another country to develop even more powerful bioweapons, or some sort of militaristic AI for defense; or catastrophic climate change could prompt the use of extreme and ill-tested geoengineering). This actually seems moderately likely, and I wonder why he didn't discuss it.

From the spreadsheet linked there (https://docs.google.com/spreadsheets/d/1b7ohoyAi2MlyBOzgarvJ-bOE2v9mJ8a9YDfQYGNk9vk/edit#gid=1273928110):

Does anybody find the row on "Anthropogenic climate change (other than geoengineering)" puzzling, in that it seems not to be given sufficient priority?

"Not many suitable remaining funding opportunities" for "R&D on clean tech, adaptation preparations, and working toward carbon pricing are all possibilities but all generally highly funded already."

The likelihood of the highest-damage scenario over the next 100 years is categorized at the same level as AI risk ("Highly uncertain, somewhat conjunctive, but plausible").

Climate change is a very crowded space, and AFAIK geoengineering is the only cost-effective climate change intervention (I haven't really researched this, but that's my impression). There's already tons of research going into e.g. clean energy, so marginal research is not very valuable. Geoengineering, by contrast, is a lot less crowded and potentially much more cost-effective.

I was puzzled!
