
Question

Let's say we roll the dice 100 times with respect to values. In other words, let's say civilization collapses in 100 worlds, each very similar to our current world, and let's say full tech recovery follows collapse in all 100 of these worlds.

In how many of these 100 worlds do you think that, relative to pre-collapse humanity, the post-recovery version of humanity has:

  • worse values?
  • similar values?
  • better values?

I encourage the reader to try answering the question before looking at the comments section, so as not to become anchored.

Context

Components of recovery

It seems, to me, that there are two broad components to recovery following civilizational collapse:

  1. P(Tech Recovery|Collapse)
    • i.e., probability of tech recovery given collapse
    • where I define "tech recovery" as scientific, technological, and economic recovery
  2. P(Values Recovery|Tech Recovery)
    • i.e., probability of values recovery given tech recovery
    • where I define "values recovery" as recovery of political systems and values systems
      • (where "good" on the values axis would be things like democracy, individualism, equality, and secularism, and "bad" would be things like totalitarianism)

It also seems to me that P(Tech Recovery|Collapse) ≈ 1, which is why the question I've asked is essentially "P(Values Recovery|Tech Recovery) = ?", just in a little more detail.
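
To make the decomposition concrete, here is a minimal sketch in Python; the two probabilities are purely illustrative placeholders, not estimates I'm endorsing:

```python
# A minimal sketch of how the two components combine.
# Both numbers below are illustrative placeholders, not estimates.

p_tech_given_collapse = 0.99  # P(Tech Recovery | Collapse); treated as ~1 above
p_values_given_tech = 0.60    # P(Values Recovery | Tech Recovery); what this question asks about

# Probability that a collapse is eventually followed by both tech and values recovery:
p_full_recovery_given_collapse = p_tech_given_collapse * p_values_given_tech

# The question conditions on tech recovery happening in all 100 worlds, so an answer
# maps directly onto P(Values Recovery | Tech Recovery):
print(f"~{round(100 * p_values_given_tech)} of 100 worlds with values recovery")
print(f"P(full recovery | collapse) ~= {p_full_recovery_given_collapse:.2f}")
```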

Existing discussion

I ask this question about values recovery because there's less discussion of it than I would expect. Toby Ord, in The Precipice, mentions values only briefly, in his "Dystopian Scenarios" section:

A second kind of unrecoverable dystopia is a stable civilization that is desired by few (if any) people. [...] Well-known examples include market forces creating a race to the bottom, Malthusian population dynamics pushing down the average quality of life, or evolution optimizing us toward the spreading of our genes, regardless of the effects on what we value. These are all dynamics that push humanity toward a new equilibrium, where these forces are finally in balance. But there is no guarantee this equilibrium will be good. (p. 152)

[...]

The third possibility is the “desired dystopia.” [...] Some plausible examples include: [...] worlds that forever fail to recognize some key form of harm or injustice (and thus perpetuate it blindly), worlds that lock in a single fundamentalist religion, and worlds where we deliberately replace ourselves with something that we didn’t realize was much less valuable (such as machines incapable of feeling). (pp. 153-154)

Luisa Rodriguez, who has produced arguably the best work on civilizational collapse (see "What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?"), also only very briefly touches on values:

Values is the other one. Yeah. Making sure that if we do last for a really long time, we don’t do so with really horrible values or that we at least don’t miss out on some amazing ones. (Rodriguez, Wiblin & Harris, 2021, 2:55:00-2:55:10)

Nick Beckstead and Michael Aird come the closest, as far as I've seen, to pointing to the question of values recovery. Beckstead (2015):

  • Negative cultural trajectory: It seems possible that just as some societies reinforce openness, toleration, and equality, other societies might reinforce alternative sets of values. [...] Especially if culture continues to become increasingly global, it may become easier for one kind of culture to dominate the world. A culture opposed to open society values, or otherwise problematic for utilitarian-type values, could permanently take root. Or, given certain starting points, cultural development might not inevitably follow an upward path, but instead explore a (from a utilitarian-type perspective) suboptimal region of the space of possible cultures. Even if civilization reaches technological maturity and colonizes the stars, this kind of failure could limit humanity’s long-term potential.

Aird (2021):

  • My main reasons for concern about such [civilizational collapse] events was in any case not that they might fairly directly lead to extinction
    • Rather, it was that such events might:
      • [...]
      • Lead to "unrecoverable dystopia"
        • Meaning any scenario in which humanity survives and regains industrial civilization, but with substantially less good outcomes than could've been achieved. One of many ways this could occur is negative changes in values.

(emphasis added)

Clarifications

  • Although collapse is difficult to define, I'd like to avoid a Loki's Wager and so for the purposes of this question, I'll set "90% of the world's population dies" as what's meant by collapse.
    • (I'm aware that collapse can also be defined, for example, in terms of disaggregation of civilization, or in terms of x probability of dipping below the minimum viable population y years after the collapse event.)
  • The question of worse/similar/better values is an overall, net effect question. For instance, maybe there'd be a positive effect on value x but a negative effect on value y: I'm setting "values" to mean the overall sum of values, where I leave it up to the reader to decide which values they count and how much weight they give to each.
  • What I'm really pointing to with the question is something like: "How likely is it that civilizational collapse would change the long-run potential of humanity, and in which direction?" Which is to say, the question is intended to be answered through a long-run lens. If the reader thinks, for example, that in world z values in the couple hundred centuries after collapse would be worse, but would converge to the pre-collapse default trajectory in the long run, then the reader's answer for world z should be "similar values".

Acknowledgements

This question was inspired by conversations with Haydn Belfield and Hannah Erlebach (though I'm not certain both would endorse the full version of my question).


Answers

If fewer than 99% of humans die, I suspect that most of modern human values will be preserved, and so, aside from temporary changes, values would stay similar and potentially continue evolving positively, albeit likely with a delay and at a slower pace. That said, there would be a damaging collapse of norms from which we might not recover.

My response to the question:

  • worse values: 50/100
  • similar values: 20/100
  • better values: 30/100

This is more pessimistic than I expected/believe. (I didn't post my own answer just because I think it depends a lot on what collapse looks like and I haven't thought much about that, but I'm pretty sure I'd be more optimistic if I thought about it for a few hours.) Why do you think we're likely to get worse values?

My answers (from an EA perspective) are roughly the following:

  • similar values: 30% chance
  • worse values: 69% chance
  • better values: 1% chance

Comments

Thanks for writing this up. This question came up in a Precipice reading group I was facilitating last year. We also used the idea that collapse was 're-rolling the dice' on values, and I think it's the right framing.

I recall that the 'better values' argument was:

  • We should assume that our current values are 'average': if we reran the last three millennia of human history 1,000 times (from 1000 BC), we should expect our current spectrum of values to be somewhere near the average of the values of whatever civilization(s) emerged in the early 21st century.
  • But if you believe in moral progress, the starting point is important. In all but the most extreme collapse scenarios, we'd expect some kind of ethical continuity, and therefore a much better starting point than humans in 1000 BC had.
  • Therefore a society that evolved post-collapse would probably end up with better moral values.

The 'worse values' argument was:

  • We should not assume that our current values are average, we should rather assume that we've been uncommonly lucky (top ~10% of potential scenarios).
  • This is because most historical societies have had much worse values, and it has been by chance that we have managed to avoid more dystopian scenarios (multi-century global rule by fascist or communist dictatorships, and modern technologically-dominant theocracies).
  • Any collapse worthy of the name would lead to a societal reset, and a return to pre-industrial, even pre-agricultural, norms. We'd probably lose education and literacy. In that case, it would be very similar to re-rolling the dice from a certain point in history, so it's more likely that we would end up in one of those dystopian worlds.

We also discussed the argument that, if you're a longtermist who is very concerned about x-risk and you're confident (~70+%) that we would develop better values post-collapse, this may lead to the uncomfortable conclusion that collapse might be morally okay or desirable.

If I had to put a number on my estimates, I'd probably go for 55% better, 45% worse, with very high variation (hence the lack of a 'similar' option). 

"We should not assume that our current values are average, we should rather assume that we've been uncommonly lucky"

Why? That seems like a very weird claim to me: we've seen evolution of moral reasoning over time, so it seems strange to claim we wouldn't see similar evolution a second time.

The claim that we wouldn't see similar evolution of moral reasoning a second time doesn't seem weird to me at all. The claim that we should assume we've been exceptionally lucky (top ~10%) might be a bit weird. Despite a few structural factors (more complex, more universal moral reasoning develops with economic complexity), I see loads of contingency and path dependence in the way that human moral reasoning has evolved. If we re-ran the last few millennia 1,000 times, I'm pretty convinced that we'd see significant variation in norms and reasoning, including:

  1. Some worlds with very different moral foundations: think of a more Confucian variety of philosophy emerging in classical Athens, rather than Socratic-Aristotelian philosophy. (The emergence of analytical philosophy in classical Athens seems like a very contingent event with far-reaching moral consequences.)
  2. Some worlds in which 'dark ages', involving decay or stagnation in moral reasoning, persisted for longer or shorter periods, or where intellectual revolutions never happened, or happened earlier.
  3. Worlds where empires with very different moral foundations from the British/American ones would have dominated most of the world during the critical modernisation period.
  4. Worlds where seemingly small changes would have had huge ethical implications: imagine the pork taboo persisting in Christianity, for example.

The argument that we've been exceptionally lucky is more difficult to examine over a longer timeline. We can imagine much better and much worse scenarios, and I can't think of a strong reason to assume either way. But over a shorter timeline we can make some meaningful claims about things that could have gone better or worse. It does feel like there are many ways the last few hundred years could have led to much worse moral philosophies becoming more globally prominent, particularly if other empires (Qing, Spanish, Ottoman, Japanese, Soviet, Nazi) had become more dominant.

I'm fairly uncertain about this latter claim, so I'd like to hear from people with more expertise in world history and the history of moral thought to see if they agree with my intuitions about potential counterfactuals.

I agree that if we re-ran history, we'd see significant variations, but I don't think I have any reason to think our current trajectory is particularly better than others would be.

For example, worlds dominated by empires with very different moral foundations from the British/American ones could easily have arrived at a more egalitarian view of economics, or a more holistic view of improving the world, far earlier. And some seemingly small changes, such as the persistence of pork taboos or the adoption of anti-beef and vegetarian lifestyles as a moral choice, don't seem like they would lead to worse outcomes.

But I agree that it's an interesting question for historians, and I'd love to see someone do a conference and anthology of papers on the topic.

I think that there is a question that is basically a generalization of this question, which is:

Will the mean values of grabby civilizations be better or worse than ours?

I have some thoughts on this but I think they aren't ready for prime-time yet. Happy to maybe do a call or something when both of us are free.

  • It depends on what causes the collapse
  • I believe that variance between possible levels of coordination after collapse matters more than variance between possible values after collapse (and I'm agnostic on tractability)

On Loki's Wagers: for an amusing example, see Yann LeCun's objection to AGI.

I like this question and think that questions that apply to most x-risks are generally good to think about. A few thoughts/questions:

I'm not sure this specific question is super well-defined.

  • What definition of "values" are we using?
  • Is there a cardinal ranking of values you are using? I assume you are just indexing utilitarian values as the best, with values getting worse the further they are from utilitarian values. Or am I supposed to answer the question overlaying my own values?
  • Also not super relevant, but worth noting that depending on how you define values, utilitarian values ≠ utilitarian outcomes.

 

Then this is a sort of nit-picky point, but how big a basket is 'similar'?

To take a toy example, let's say values are measured on a scale from 0 to 100, with 100 being perfect values, and let's further assume we are currently at 50. In that case it would make sense to set similar = (33, 67), so that the three baskets are evenly sized. If, say, similar = (49, 51), then it seems like you shouldn't put much probability on similar.

But if we are at 98/100, is similar (97, 99)? It's less clear how we should basket the groups.

Since you put similar at 20/100, I somewhat assumed that you were giving similar a more or less even basket size relative to worse and better, but perhaps you put a lot of weight on the idea that we are in some sort of sapiens cultural equilibrium.
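
To make that concrete, here's a small illustrative sketch; the 0-100 scale and the normal distribution are arbitrary assumptions, just to show how the basket width drives the probability mass:

```python
# Illustrative only: how the width of the "similar" basket drives its probability mass.
# Assume post-collapse values land on a 0-100 scale, spread around the pre-collapse
# level (mean 50 here) with an arbitrary standard deviation of 15.
from statistics import NormalDist

post_collapse_values = NormalDist(mu=50, sigma=15)

def basket_probs(lo, hi):
    """Return (P(worse), P(similar), P(better)) for a 'similar' basket of (lo, hi)."""
    p_worse = post_collapse_values.cdf(lo)
    p_similar = post_collapse_values.cdf(hi) - post_collapse_values.cdf(lo)
    p_better = 1 - post_collapse_values.cdf(hi)
    return p_worse, p_similar, p_better

print(basket_probs(33, 67))  # a wide "similar" basket captures most of the mass
print(basket_probs(49, 51))  # a narrow basket leaves almost nothing for "similar"
```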

For what it's worth, if we sort of sweep some of these concerns aside, and assume similar has about as much value space as better and worse, my estimates would be as follows:

  • better: 33/100  
  • similar: 33/100
  • worse: 33/100

But I agree with Jack's sense that we should drop similar and just go for better and worse, in which case:

  • better: 50/100
  • worse: 50/100

A cold take, but I truly feel like I have almost no idea at the moment. My intuition is that your forecast is too strong for the current level of evidence and research, but I have heard very smart people give almost exactly the same guess.
