
The expected value of the long-term future

I wrote an article describing a simple model of the long-term future. Here it is:

Summary:

A number of ambitious arguments have recently been proposed about the moral importance of the long-term future of humanity, on the scale of millions and billions of years. Several people have advanced arguments for a cluster of related views. Authors have variously claimed that shaping the trajectory along which our descendants develop over the very long run (Beckstead, 2013), or reducing extinction risk, or minimising existential risk (Bostrom, 2002), or reducing risks of severe suffering in the long-term future (Althaus and Gloor, 2016) are of huge or overwhelming importance. In this paper, I develop a simple model of the value of the long-term future, from a totalist, consequentialist, and welfarist (but not necessarily utilitarian) point of view. I show how the various claims can be expressed within the model, clarifying under which conditions the long-term becomes overwhelmingly important, and drawing tentative policy implications.

Comments (5)

Comment author: RyanCarey 29 December 2017 12:20:20AM 12 points

Re page 9, I think the talk of a civilization maintaining exponential growth is unconvincing. The size of a civilization should ultimately be bounded cubically in time (your civ grows outward like a sphere at bounded speed), whereas survival probability decays exponentially under a constant per-period risk. Exponentials in general defeat polynomials, giving finite EV in the limit of t, regardless of the parameters.
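A minimal numerical sketch of this convergence point, with illustrative parameters that come from neither the paper nor the comment (0.1% per-period risk, 1% growth):

```python
# Minimal sketch (illustrative parameters only): with a constant per-period
# extinction risk, survival probability decays exponentially, which overwhelms
# cubic growth in value (finite sum) but not faster exponential growth (divergent sum).

def expected_value(value_at, risk=0.001, horizon=20_000):
    """Sum per-period value weighted by the probability of surviving to that period."""
    total, survival = 0.0, 1.0
    for t in range(1, horizon + 1):
        survival *= 1 - risk              # exponential decay of survival probability
        total += survival * value_at(t)
    return total

cubic = lambda t: t ** 3                  # value bounded by sphere-like spatial expansion
exponential = lambda t: 1.01 ** t         # 1% per-period growth, exceeding the 0.1% risk

print(expected_value(cubic))              # ~6e12: essentially converged to a finite limit
print(expected_value(exponential))        # astronomically large, still growing with the horizon
```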

Comment author: Carl_Shulman 29 December 2017 12:45:42AM * 9 points

That's our best understanding.

But there is then an argument on this account to attend to whatever small credence one may have in indefinite exponential growth in value. E.g. if you could build utility monsters such that every increment of computational power let them add another morally important order of magnitude to their represented utility, or hypercomputers were somehow possible, or we could create baby universes.
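To make that concrete with invented numbers (the 1% credence and the EV figures below are purely illustrative, borrowing the toy scales from the sketch above):

```python
# Toy arithmetic (all numbers invented): even a small credence in a scenario
# whose expected value keeps growing with the time horizon eventually dominates
# the overall expectation.
credence_in_unbounded_growth = 0.01      # hypothetical 1% credence
ev_bounded_scenario = 6e12               # finite EV of the cubic scenario sketched above
ev_unbounded_scenario = 1e79             # keeps increasing as the horizon is extended

overall_ev = (credence_in_unbounded_growth * ev_unbounded_scenario
              + (1 - credence_in_unbounded_growth) * ev_bounded_scenario)
print(overall_ev)                        # ~1e77: driven almost entirely by the small-credence branch
```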

Comment author: [deleted] 04 January 2018 09:41:47PM 0 points

FYI I ended up deciding to keep exponential growth in the main model, but I added a footnote discussing what happens in the limit of t. Thanks! :)

Comment author: JanBrauner 03 January 2018 04:50:24PM 5 points

You write: "In this discussion, there are two considerations that might at first have appeared to be crucial, but turn out to look less important. The first such consideration is whether existence is in general good or bad, à la Benatar (2008). If existence really should turn out to be a harm, sufficiently unbiased descendants would plausibly be able to end it. This is the option value argument. In turn, option value itself might appear to be a decisive argument against doing something so irreversible as ending humanity: we should temporise, and delegate this decision to our descendants. But not everyone enjoys option value, and those who suffer are relatively less likely to do so. If our descendants are selfish, and find it advantageous to allow the suffering of powerless beings, we may not wish to give them option value. If our descendants are altruistic, we do want civilisation to continue, but for reasons that are more general than option value."

Since the option value argument is not very strong, it seems to be a very important consideration "whether existence in general is good or bad" - or, less dichotomously, where the threshold for a life worth living lies. Space colonization means more (sentient) beings. If our descendants are altruistic (or have values that we, upon reflection, would endorse), everything is fine anyway. If our descendants are selfish, and the threshold for a life worth living is fairly low, then not much harm will be done (as long as they don't actively value causing harm, which seems unlikely). If they are selfish and the threshold is fairly high - i.e. a lot of things in a life have to go right in order to make the life worth living - then most powerless beings will probably have bad lives, possibly rendering overall utility negative.

Comment author: [deleted] 04 January 2018 09:37:01PM * 2 points

Thanks, excellent comment! :)

You write: "If our descendants are selfish, and the threshold for a life worth living is fairly low, then not much harm will be done (as long as they don't actively value causing harm, which seems unlikely)."

They needn't actively value causing harm; it suffices that there be instrumental benefits to causing it. E.g. factory farming today is the result of indifference, not cruelty.

I do agree that where the threshold lies becomes important if the threshold is likely to be close to the level of well-being that powerless beings enjoy. (E.g. I'm pretty confident that the majority of farmed animals today are pretty far below that threshold, but reasonable people could disagree.) I guess I should have made that clearer where I cite Benatar.
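As a toy illustration of that threshold point (all numbers invented, and treating the "life worth living" threshold as the zero point of a total welfarist calculus, which is an assumption of this sketch rather than the paper's exact model):

```python
# Toy illustration (all numbers invented): under a total view, the sign of
# overall value can flip depending on where the "life worth living" threshold
# sits relative to the welfare of the powerless majority.

def total_value(welfares, threshold):
    """Total value: each being counts for its welfare relative to the threshold."""
    return sum(w - threshold for w in welfares)

# A powerful few with high welfare, a powerless many with modest welfare.
population = [5.0] * 10 + [2.0] * 990

print(total_value(population, threshold=1.0))   # low bar: +1030, overall positive
print(total_value(population, threshold=3.0))   # high bar: -970, overall negative
```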