Epistemic status: there are some strong anti-death / pro-immortality sentiments in EA/rationalist circles, and it's bugged me that I haven't seen a good articulation of the anti-immortality case. This is a quick attempt to make that case. I'm confused about what to think about immortality overall; I think that it may depend a lot on circumstances/details, and be eventually desirable but not currently desirable.

The basic case for immortality goes something like:

A death is a tragedy. Individuals would, in almost all cases, prefer not to die. They have friends and family who care about them and would prefer they not die. In a large majority of the cases where they would prefer to die this is because of other factors they'd prefer to fix instead (e.g. extremely poor health). In a post-scarcity future we should expect to be able to fix these factors, so we should plan on abolishing death.

This argument mostly checks out to me when evaluating whether it's good for individuals to die. But my worry is that the whole line of thinking is too individualistic. Would it be good for society as a whole if people were immortal?

Historically I think the answer is not clear-cut, but my guess leans towards "no". If I imagine I have the option of pressing a button so that people 3,000 (or 1,000, or 300, or 100) years ago were immortal — not aging, and not dying of disease — I feel like I don't want to press that button. There would certainly be a lot of good things that would happen from it — many tragic deaths averted, many good things that persisted longer, and perhaps longer time horizons for individuals. But I am scared that:

  • For most of history, populations were food-limited, so this would lead to more deaths by starvation or violence, which seem probably worse
    • However in the modern world this might not be an immediate problem
      • We'd also have contraception available as a tool (although it's unclear whether it's good to give existing people more life rather than their children having lives)
      • It's possible that technology will move us to fundamentally different positions re. personal identity (e.g. because everyone is run as emulations, and these can be slowed down) before our inability to keep scaling food production exponentially causes hard constraints on population
  • The world would have been much more likely to get stuck locked into some bad states
    • A lot of repressive dictators die of natural causes
      • If they were immortal, it seems much more likely that one would have successfully conquered the world, and established a regime which stably kept them in control for eternity
    • Less extremely, there is the adage that science progresses one funeral at a time
      • I'm sure this is sometimes an exaggeration, but also feel sympathetic to the thought that if new ideas threaten existing social structures, and if the people instantiating those structures were permanent fixtures, there might have been much more repression of new ideas
    • It's a trope that the death of elderly relatives — even highly beloved ones — can be freeing for people
      • This doesn't point to any particular failure modes, but is some support for "death may play an important role in allowing society (as it is currently arranged) to move on from things"

These are still concerns to me today.

At a more personal level, I have some worry that the narrative of death-as-tragedy can belittle narratives of death-as-end-to-story-arc, and that sometimes the latter seem like they are capturing something true and important. For old people who have had a happy routine and watched their grandkids grow up and have kids of their own, and feel like they can let go rather than needing to do more in the world, I dislike telling them (implicitly or explicitly) that this is a failing.

Would I eventually like to move to a post-death world? Probably, but I'm not certain. For one thing I think quite likely the concept of "death" will not carve reality at its joints so cleanly in the future.

Is there too much death in the world today? Yes. If I could push a button to increase life expectancies by 10% or 50% I surely would (although my confidence is driven more by a sense of an implicit contract with all the humans alive now than consequentialist reasoning). If I could push a button to increase lifespans 100x or 10,000x I would be much more hesitant. (If there is a technological singularity this century that goes well then I would love for as many currently-alive people as possible to stay alive to see it, but I think in expectation the bigger impact of changes in lifespans would be to affect how such events might unfold.)

Comments

Just checking, are you aware of Sandberg's counterarguments to the immortal dictators point?

Robert Wiblin: So what about the possibility that life extension would actually be bad for society as a whole? I’ve heard a bunch of different theories, some that I don’t agree with that much, but some that seem plausible. One thing is in the past very often bad governments, authoritarian governments, have ended when the leader of the country dies because then there’s a reshuffling of who might be powerful, and you potentially get a transition to democracy just by increasing the variability of the outcome. If Kim Jong-un could live basically indefinitely, then it seems like North Korea’s prospects are much worse, because he’ll just be able to remain in power almost indefinitely, and he’s not going to die, and there’s not going to be any period of turbulence during which things could improve. Have you thought about that issue?

Anders Sandberg: Actually I have. So I’ve been playing around with a little statistical model of what is the role of aging in getting rid of bad political leaders, and it actually turns out that political science people have done lovely databases of political leaders, and you can get indices. You can even measure the ones that are most authoritarian. Then you can do a statistical model, a Cox proportional hazards model for those who are interested, to actually see the role of age in changing the probability of losing power, and it turns out that we can use this to model a world where these leaders don’t age, and on average they are in power four more years.

Robert Wiblin: Is that all?

Anders Sandberg: That is all.

Robert Wiblin: How long is that average time to begin with?

Anders Sandberg: Well the average time unfortunately tends to be something like 12 years or more, so-

Robert Wiblin: So it increases it by a third on average, but because it’s only 12 years to begin with, it’s not so bad.

Anders Sandberg: So if you’re a young dictator, you just come into power, at first you have a very high risk of losing power very quickly because you have a lot of enemies around. So typically the hazard is high at the start, and then they tend to decline, because authoritarian rulers get rid of their competitors. Then it stays relatively low, and then slowly goes up over time, and part of that increase is of course due to aging.

Now the interesting thing here is why do people lose power? Well, it turns out that being so old that you can’t hold on to power is a relatively rare thing. Relatively few dictators actually die at home in bed. In fact, most of them fall prey to the other scary people they surround themselves with. In that picture of the junta of the country, the other people in sunglasses surround the El Presidente, they are the ones to look out for because they are would-be dictators. They have a lot of power, and eventually they might get fed up on waiting. So if nobody were aging, I don’t think dictatorships would be changed that much.

In fact, if we want to think about negative effects of life extension, I think this might have a much bigger effect for example in academia. After all, the professor, if the professor is never really getting older, is just getting more and more skilled both in maybe education and even more in academic intrigue, they're just going to hold on forever. There is this Planck principle that maybe science advances one funeral at a time. It's debated. There have been attempts at actually investigating it, and the conclusion is somewhat mixed. Sometimes revolutions actually do sweep academia without replacing people. In some cases it does seem to be generational, but you can certainly imagine many institutions where it would be relatively easy to hold on to power indefinitely, but this is a very foreseeable problem, and I think the solution also is rather simple: term limits.

Robert Wiblin: Term limits. Yeah.

Anders Sandberg: Because in my dictator data, I of course also have political leaders from non-dictatorial countries, and they of course generally stay in power for one term or maybe two terms if they are really successful, and then they disappear, and we might want to have term limits for jobs. Maybe you’re not allowed to have the same job for more than a century. Then it’s time for somebody else to try.
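
(For concreteness, here is a minimal sketch of the kind of Cox proportional hazards analysis Sandberg describes, using the Python lifelines library. The dataset and column names are entirely invented for illustration; his actual model is richer, e.g. treating age as a time-varying covariate.)

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented leader-tenure data: tenure in years, whether power was actually
# lost (1) vs. right-censored (0, e.g. still in office), age on taking
# power, and a rough authoritarianism score.
df = pd.DataFrame({
    "years_in_power":   [2, 5, 8, 11, 3, 17, 24, 40, 6, 13],
    "lost_power":       [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
    "age_at_entry":     [44, 51, 38, 60, 47, 55, 41, 39, 63, 49],
    "authoritarianism": [8, 6, 9, 7, 5, 9, 8, 9, 6, 7],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_in_power", event_col="lost_power")
cph.print_summary()  # hazard ratios: how each covariate shifts the risk of losing power
```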

I hadn't seen this discussion, thanks! I find the dictator data somewhat reassuring, but only somewhat, because I care not about the average-case dictator but about the tail of dictators who hold power for a very long time. And if say 2% of dictators are such that they'd effectively work out how to keep an ironclad grasp on their country that would persist for 1000+ years, I don't really expect our data to be rich enough to pull out that structure.

When thinking about the tail of dictators don't you also have to think of the tail of good people with truly great minds you would be saving from death? (People like John von Neumann, Benjamin Franklin, etc.)

Overall, dictators are in a very tough environment with power struggles and backstabbing, lots of defecting, etc. while great minds tend to cooperate, share resources, and build upon each other.

 Obviously, there are a lot more great minds doing good than 'great minds' wishing to be world dictators. And it seems to trend in the right direction. Compare how many great smart democratic leaders there are now vs 100 years ago. Extend that line another 100 years and it seems like we'll be improving.

In a world in which a long-tail dictator could theoretically work out an ironclad grasp of their country for evil, wouldn't there be thousands of truly brilliant minds with lots of globally coordinated resources pushing against this? (See Russia vs Ukraine for a very, very simple real-world example of "1 evil guy vs the world".)

So this long-tail dictator has to worry about intense internal struggle/pressure, but also about most of the world pressuring them externally as well? I don't see how the moral, brilliant minds don't just outmaneuver this dictator when they have 100x+ more people, resources, and coordination (in this theoretical future).

It's a good point that by default you'd be extending all the great minds too. Abstractly I was tracking this, but I like calling it out explicitly.

& I agree with the trend that we're improving over time. But I worry that if we'd had immortality for the last thousand years maybe we wouldn't have seen the same degree of improvement over time. The concern is if someone had achieved global dictatorship maybe that would have involved repressing all the smart good people, and preventing coordination to overthrow them.

But we're not debating if immortality over the last thousand years would have been better or not, we're looking at current times and then estimating forward, right? (I agree a thousand years ago immortality would have been much much riskier than starting today)

In today's economy/society great minds can instantly coordinate and outnumber the dictators by a large margin. I believe this trend will continue and that if you allow all minds to continue the great minds will outgrow the dictator minds and dominate the equation.

Dictators are much more likely to die (not from aging) than the average great mind (more than 50x?). This means that great minds will continue to multiply in numbers and resources while dictators sometimes die off (from their risky lifestyle of power-grabbing).

Once there are 10,000 more brilliant minds with 1,000x more resources than the evil dictators how do you expect the evil dictator to successfully power grab a whole country/the whole world?

I agree that probably you'd be fine starting today, and it's a much safer bet than starting 1,000 years ago, but is it a safer bet than waiting say another 200 years?

I'd be concerned about dictators inciting violence against precisely the people they most perceive as threats. e.g. I don't know the history of the Cultural Revolution well, but my impression is that something like this happened there.

The thing that's hard to internalize (at least I think) is that by waiting 200 years to start anti-aging efforts you are condemning billions of people to an early death with a lifespan of ~80 years. 

You'd have to convince me that waiting 200 years would reduce the risk of totalitarian lock-in so much that it offsets billions of lives that would be guaranteed to "prematurely end".

Totalitarian lock-in is scary to think about, while billions of people's lives ending prematurely is just text on a screen. I would assume that the human brain can fairly easily simulate the everyday horror of a totalitarian world, but it's impossible for your brain to digest even 100,000,000 premature deaths, let alone billions and billions.

I certainly feel like it's a very stakesy decision! This is somewhere where a longtermist perspective might be more hesitant to take risks that jeopardize the entire future to save billions alive today.

I also note that your argument applies to past cases too. I'm curious in what year you guess it would first have been good to grant people immortality?

(As mentioned in the opening post, I'm quite confused about what's good here.)

I agree, it feels like a stakesy decision! And I'm pretty aligned with longtermist thinking, I just think that "entire future at risk due to totalitarianism lock-in due to removing death from aging" seems really unlikely to me. But I haven't really thought about it too much so I guess I'm really uncertain here as we all seem to be.

"what year you guess it would first have been good to grant people immortality?"

I kind of reject the framing of 'immortality', since that isn't the decision we're currently faced with (unless you're only interested in that specific hypothetical world). The decision we're faced with is: do we speed up anti-aging efforts to reduce age-related death and suffering? You can still kill (or incapacitate) people who don't age; that's my whole point about the great minds vs. dictators.

But to consider the risks in the past vs today:

Before the internet and modern society/technology/economy it was much much harder for great minds to coordinate against evils in a global sense (thinking of the Cultural Revolution as you mentioned). So my "great-minds counter dictators" theory doesn't hold up well in the past but I think it does in modern times.

The population 200 years ago was 1/8 of what it is today and growing much more slowly, so the premature deaths you would have prevented per year with anti-aging would have been far fewer than today, so you get less benefit.

The general population's sense of morals and demand for democracy are improving, so I think the tolerance for evil/totalitarianism is dropping fairly quickly.

So you'd have to come up with an equation with at least the following (a toy version is sketched after this list):
- How many premature deaths you'd save with anti-aging
- How likely and in what numbers will people, in general, oppose totalitarianism
- If there was opposition, how easily could the global good coordinate to fight totalitarianism
- If there was coordinated opposition would their numbers/resources outweigh the numbers/resources of totalitarianism
- If the coordinated opposition was to fail, how long would this totalitarian society last (could it last forever and totally consume the future or is it unstable?)
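
A toy version of that equation, with every input invented purely for illustration, might look like this:

```python
# Toy expected-value comparison (all numbers made up). Cost of waiting
# N years = premature deaths not prevented during the wait; benefit =
# reduction in lock-in probability times the lives at stake under lock-in.

wait_years = 200
deaths_per_year = 40e6        # rough order of current aging-related deaths
p_lockin_now = 0.004          # made-up lock-in risk if anti-aging starts today
p_lockin_later = 0.0039       # made-up risk if we wait instead
lives_at_stake = 1e11         # made-up size of the future a lock-in would ruin

cost_of_waiting = wait_years * deaths_per_year                         # 8e9 deaths
benefit_of_waiting = (p_lockin_now - p_lockin_later) * lives_at_stake  # 1e7 lives

print(f"cost of waiting:    {cost_of_waiting:.1e} premature deaths")
print(f"benefit of waiting: {benefit_of_waiting:.1e} expected lives saved")
# With these inputs waiting looks bad, but a longtermist who sets
# lives_at_stake to, say, 1e16 gets the opposite answer; the whole
# disagreement lives in the last two inputs.
```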

I don't buy the asymmetry of your scope argument. It feels very possible that totalitarian lock-in could have billions of lives at stake too, and cause a similar quantity of premature deaths.

Of course, it would, but if you're reducing the risk of totalitarian lock-in from 0.4% to 0.39% (obviously made up numbers) by waiting 200 years I would think that's a mistake that costs billions of lives.

I think Matt’s on the right track here. Treating “immortal dictators” as a separate scenario from “billions of lives lost to an immortal dictator” smacks of double-counting.

Really, we’re asking if immortality will tend to save or lose lives on net, or to improve or worsen QoL on net.

We can then compare the possible causes of lives lost/worsened vs gained/bettered:

  • immortal dictators, or perhaps immortal saints
  • saved lives from life extension
  • lives less tainted by fear of death and mourning
  • lives more free to pursue many paths
  • alignment of individual self-interest with the outcome of the long-term future
  • the persistent challenge of hyperbolic discounting
  • the question of how to provide child-rearing experiences in a crowded world with a death rate close to zero
  • the possible need to colonize the stars to make more room for an immortal civilization
  • the attendant strife that such a diaspora may experience

When I just make a list of stuff in this manner, no individual item jumps out at me as particularly salient, but the collection seems to point in the direction of immortality being good when confined to Earth, and then being submerged into the larger question of whether a very large and interplanetary human presence would be good.

I think that this argument sort of favors a more near-term reach for immortality. The smaller and more geographically concentrated the human population is by the time it’s immortal, the better able it is to coordinate and plan for interplanetary growth. If humanity spreads to the stars, then coordination ability declines. If immortality is bad in conjunction with interplanetary civilization, the horse is out of the barn.

One question is whether coalitions of pro-social people are better at deferring power to good successors than dictators are at ensuring that they have equally bad/dictatorial successors. If you believe that democracies are unlikely to "turn bad," shouldn't you be in favor of reducing the variance in the lifetimes of dictatorships?

The discussion here is very abstract, so I’m unsure if I disagree because I picture a different pathway to giving people extreme longevity or whether I disagree with your general world model and reasoning. In any case, here are some additional related thoughts:

  • You point to a trendline with the share of democracies increasing, but that's not the same as seeing improvements in leaders' quality (some democracies may be becoming increasingly dysfunctional). I'm open to the idea that world leaders are getting better, but if I had to make an intuition-driven judgment based on the last few years, I wouldn't say so.
  • It’s inherently easier to attain and keep power by any means necessary with zero ethics vs. gaining it to do something complicated and altruistic (and staying ethical along the way and keeping people alive, etc.)
  • There’s another asymmetry where it’s often easier to destroy/attack/kill than build something. Those brilliant people coordinating to keep potential dictators in check, they may not be enough. If having no ethics means you get to use super powers, then the people with ethics are in trouble (as Owen points out, they're the ones who will die first or have their families imprisoned). (Related: I think it’s ambiguous whether Putin supports your point. The world is in a very precarious situation now because of one tyrant. Lots of people will starve even if nuclear escalation can be avoided.)
  • Some personality pathologies like narcissism and psychopathy seem to be increasing lately, tracking urbanization rates and probably other factors. Evolutionarily, higher death rates at the hands of upset others seem to be “worth it” for these life-history strategies.
  • People can be “brilliant” on some cognitive dimensions but fail at defense against dark personality types. For instance, some otherwise brilliant people may be socially naive.
  • Outside of our EA bubble, it doesn’t look like the world is particularly sane or stable. Great/brilliant people cannot easily do much in a broken system. And maybe the few brilliant people who take heroic responsibility are outnumbered by too many merely mediocre people who are easily corrupted and easily self-deceive.

That said, I see some important points in favor of your more optimistic picture: 

  • There are highly influential EA orgs whose leadership and general culture I’m really impressed by. (This doesn’t go for all highly influential EA orgs.)
  • I expect EA to continue to gain more influence over time.

This is a good comment. I'd like to respond but it feels like a lot of typing... haha

 

but that’s not the same as seeing improvements in leaders’ quality

I just mean the world is trending towards democracies and away from totalitarianism.

 

It’s inherently easier to attain and keep power by any means necessary with zero ethics

Yes, but 100x easier? Probably not. What if the great minds have 100x the numbers and resources? Network effects are strong

 

There’s another asymmetry where it’s often easier to destroy/attack/kill than build something.

Same response as above

 

I think it’s ambiguous whether Putin supports your point. The world is in a very precarious situation now because of one tyrant.

My point is that the vast majority of the world immediately pushed back on Putin much harder than people thought. This backs up my trend that people are less tolerant of totalitarianism than they were 100 years ago. We are globally trying (and succeeding) to set stronger norms against inflicting violence and oppression.
 


Some personality pathologies like narcissism and psychopathy seem to be increasing lately, tracking urbanization rates and probably other factors.

I'm guessing it will be somewhat easier to reverse these trends in a less scarcity-based society in the future, especially when we have a better handle on mental health from all angles. And the increases are probably not enough to matter in the wider question of great minds vs dictators.

 

People can be “brilliant” on some cognitive dimensions but fail at defense against dark personality types. For instance, some otherwise brilliant people may be socially naive.

The great minds can just outnumber the dictators in numbers and in resources, and again network effects fight against this problem: each individual person doesn't have to succeed against dictators; the whole global fight for good has to collectively succeed.

 

Outside of our EA bubble, it doesn’t look like the world is particularly sane or stable.

The world definitely seems to be trending towards saner and more stable though.

Re. term limits on jobs, I think this is a cool idea. But I don't know that I'd expect that to be implemented, which makes me want to disambiguate between the questions:

  1. Would the ideal society have immortality?
  2. Would immortality make our society better?

My guesses would be "yes" to 1, and a very tentative "no" to 2. Of course if there were a now-or-never moment of choosing whether to get immortality, one might still like to have it now, but it seems like maybe we'd ideally wait until society is mature enough that it can handle immortality well before granting it.

I don't have well-formed views on these questions myself, but yeah, I think #2 is a more important question than #1 right now.

Hi Owen! The advantages and limitations of immortality need more thought as our society starts to invest more seriously in anti-aging.

One of my challenges with this post is that it claims to provide an "anti-immortality case," but then proceeds to simply list some problems that might arise if people were immortal.

To make an anti-X case, you need to do more than list some problems with X. You need to make a case that the problems are insurmountably bad or risky, even after a consideration of possible solutions. Alternatively, you can make a case that the downsides inevitably outweigh the benefits, despite a presumption that people will work hard to mitigate those downsides and maximize the benefits.

Your article is specifically about immortality, or at least adding perhaps 10,000 years to average human life expectancy. That's a different topic from more realistic incremental anti-aging efforts. But that also gets into a Sorites paradox, so it seems worth bringing up incremental efforts as well.

Note: what follows isn't an attempt to bombard you with Gish-gallop-style questions. It's just food for thought.

If I could push a button to increase lifespans 100x or 10,000x I would be much more hesitant. 

  • If you could push a button to give everybody one extra year of youthful life, how many times would you press it?
  • What if you had the option of pushing that button, or not pushing it, every year?
  • What if the world's life sciences and engineering community was paid to push that button every year? Would we be glad that they had such a button to push?
  • What if the world's governments were in charge of deciding whether or not to push the button? Would we like them to have control over the button?
  • Now consider the same questions, but for a button that removes one year of old age from everybody's lifespans. Would you be happy that the relevant people had access to such a button? How many times should it be pressed?
  • Finally, imagine everybody had a button they could individually press in order to extend their youthful life by one year. They can press their buttons once a year. You have a button that reduces the effectiveness of their buttons by half, or that disables their buttons entirely. Would you press your button-disabling button?

If you don't think anybody should press the life extension button, but you also don't think that anybody should press the life-shortening button, then we need an explanation for why you believe people are living the perfect amount of time for overall human wellbeing.

I think that anti-aging research is most likely to produce "one year of life extension" buttons, every now and then. Every time such a button is pressed, society will have a chance to observe the effects and adjust in advance of the next button press. This specific anti-aging scenario is the one that I think merits the most focused attention.

Good questions! I could give answers but my error bars on what's good are enormous.

(I do think my post is mostly not responding to whether longevity research is good, but to what the appropriate attitudes/rhetoric towards death/immortality are.)

I felt quite frustrated by this post, because the preponderance of EA discourse is already quite sceptical of anti-ageing interventions (as you can tell by the fact that no major EA funder is putting significant resources into it). I would in fact claim that the amount of time and ink spent within EA in discussing reasons not to support anti-ageing interventions significantly exceeds that spent on the pro side.

So this post is repeating well-covered arguments, and strengthening the perception that "EAs don't do longevity", while claiming to be promoting an under-represented point of view.

Thanks for voicing the frustration!

I regarded the post not really as a point about cause prioritization (I agree longevity research doesn't get much attention, and I think possibly it should get more), but about rhetoric. "Defeating death" seems to be a common motif e.g. in various rationalist/EA fic, or the fable of the dragon tyrant. I just wanted some place which assembled the arguments that make me feel uneasy about that rhetoric. I agree that a lot of my arguments are not really novel intellectual contributions (just "saying the obvious things") and tried to convey that with the start of the post. (It's also quite possible there is some other post which captures what I'm trying to do here better, but that I was unaware of it.)

I do claim that it's good to have articulations of things like this even if the case is reasonably well known (I don't really know what the status of that is). I'm not sure whether you disagree with that. In any case from your response I think I didn't do enough to convey the intended type signature of the post, and I'm sorry about that.

Thanks, Owen! I do feel quite conflicted about my feelings here, appreciate your engagement. :)

I do claim that it's good to have articulations of things like this even if the case is reasonably well known

Yeah, I agree with this -- ultimately it's on those of us more on the pro-immortality side to make the case more strongly, and having solid articulations of both sides is valuable. Also flagging that this...

Would I eventually like to move to a post-death world? Probably, but I'm not certain. For one thing I think quite likely the concept of "death" will not carve reality at its joints so cleanly in the future.

...seems roughly right to me.

Less extremely, there is the adage that science progresses one funeral at a time

Paul Abramson, Ronald Inglehart, and others have similarly suggested that value changes largely occur via generational replacement. See discussion here.

I think this is really difficult to assess because there is a huge confounder: the more you age, the worse your memory gets, the more your creativity decreases, the more your ability to focus declines, and so on.

If all of that was fixed with anti-aging it may not be true that science progresses one funeral at a time because the people at the top of their game can keep producing great work instead of becoming geriatric while still holding status/power in the system.

Also, it could be a subconscious thing: "why bother truly investigating my beliefs at age 70, I'm going to die soon anyway, let me just continue with the inertia until I retire soon"

Also, this seems possible to fix with better institutional structures/incentives. Academia is broken in many ways, this is just one of them.

My comment wasn't about science but about political and moral values. I doubt that the reason people don't change them more is cognitive decline, since it seems that slowness to change them sets in long before cognitive decline sets in.

The same comment also applies to:

"why bother truly investigating my beliefs at age 70, I'm going to die soon anyway, let me just continue with the inertia until I retire soon"

It's not just people who are 70+ who are slow to change their moral and political views.

since it seems that slowness to change them sets in long before cognitive decline sets in.

I don't think this is a dominant factor, but my impression is that cognitive decline sets in very early (e.g. reaction speed peaks in the mid-20s).

There was a recent paper that challenged that view. And crystallised intelligence likely peaks later than fluid intelligence. But yeah, even if it turned out to be a non-trivial factor, it's likely not a dominant one until quite late.

Some quick reactions to your points:

  • Dying from ageing-related diseases like Alzheimer's and cancer seems a lot worse than starvation. If I had to choose between starving to death in a few weeks vs. losing the ability to recognize my family for years, I would always go with starving (just a personal choice though)
  • At this point, I think all "gerontocracy / value lock-in / immortal dictator" arguments fall short because they imagine that immortality does not change human psychology. In the simplest case, old people are not less curious because of chronological ageing, but because of neuroinflammation due to ageing. (Curiosity is measured in a lot of mice studies, but there is also some observational evidence in humans.)
    • What I am trying to say is that anti-ageing technology would reset the curiosity of the elderly to that of a 20-year-old. If they remain conservative, then it's probably for a good reason (they might have known Chesterton personally when he put up the fence)
    • More tentatively: I believe that undefined lifespans (a more realistic prospect than "immortality") change the underlying game theory of human interaction: they shift every decision from a one-shot prisoner's dilemma towards the infinitely repeated prisoner's dilemma, where cooperation can be sustained (see the sketch after this list)
      • If this is true then the optimal time for lifespans to become undefined could have been way in the past, because all short-term problems would have been washed out by better public goods funding. (This is the "stakes for caring about X-risk are higher if you have skin in the game" argument)
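
To make that game-theory point concrete, here is a toy folk-theorem calculation (my sketch, with illustrative numbers): with grim-trigger strategies, cooperation in an infinitely repeated prisoner's dilemma is sustainable exactly when the per-round discount factor delta satisfies delta >= (T - R) / (T - P), and a longer expected lifespan pushes the effective delta toward 1.

```python
# Grim-trigger condition for cooperation in an infinitely repeated
# prisoner's dilemma: delta >= (T - R) / (T - P), where T > R > P are the
# temptation, mutual-cooperation, and mutual-defection payoffs.

def critical_delta(T, R, P):
    return (T - R) / (T - P)

# Illustrative payoff structures, from mild to strong temptation to defect.
games = [(5, 3, 1), (8, 3, 1), (20, 3, 1)]

# Illustrative discount factors: a short-horizon (mortal, impatient) agent
# vs. an agent with an effectively undefined lifespan.
for delta, label in [(0.7, "short-horizon"), (0.999, "undefined-lifespan")]:
    sustained = [delta >= critical_delta(*g) for g in games]
    print(label, [round(critical_delta(*g), 2) for g in games], sustained)
```

With these numbers the short-horizon agent only sustains cooperation in the mildest game, while the long-horizon agent sustains it in all three; whether real human interaction works like this is of course exactly what's at issue.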

I've sometimes thought about whether 'immortality' is the right framing, at least for the current moment. Like AllAmericanBreakfast, I think that anti-ageing research is unlikely to produce life extensions in the 100x to 1000x range all at once.

In any case, even if we manage to halt ageing entirely, ceteris paribus there will still be deaths from accidents and other causes. A while ago I tried a Fermi calculation on this; I think I used this data (United States, 2017). The death rate for people aged 15-34 is ~0.1%/year; held constant, that rate would put the median lifespan at ~700 years (using X ~ Exp(0.001)).
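
A quick sketch reproducing that estimate under the same constant-hazard assumption:

```python
import math

annual_hazard = 0.001                        # ~0.1%/year, ages 15-34
median = math.log(2) / annual_hazard         # ~693 years
mean = 1 / annual_hazard                     # 1000 years
p_100_more = math.exp(-annual_hazard * 100)  # chance of surviving another century

print(f"median: {median:.0f} yr, mean: {mean:.0f} yr, "
      f"P(another 100 yr): {p_100_more:.2f}")
```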

Probably this is an underestimate of lifespan - accidental death should decrease (safety improvements, of which self-driving cars might be significant given how many people die of road accidents), curing ageing might have positive effects on younger people as well, and other healthcare improvements should occur, and people might be more careful if they're theoretically immortal(?). However, I think this framing poses a slightly different question:

Do we prefer that more people:

  • Live shorter lives and die of heart disease/cancer/respiratory disease*, or
  • Live (possibly much) longer lives and die of accidents/suicide/homicide

I don't know how I feel about these. I think in the theoretical case of going immediately from current state to immortality I'd be worried about Chesterton's-fence-y bad results - not that someone put ageing into place, but I'd expect surprising and possibly unpleasant side effects of changing something so significant**.

*I inferred from the data I linked above that heart disease and cancer are somewhat ageing-related; I'm not sure if this is true

**The existence of the immortal jellyfish Turritopsis dohrnii implies that a form of immortality was evolvable, which in turn might imply that there's some reason evolution didn't favour more immortal things, or things that tended slightly more towards immortality.

Thanks for pointing out this small elephant in the room. I think that, even if we could solve problems like the "ossification of values" (idk, maybe psychedelics, or some special therapy) or the possibility of immortal tyrants, the underlying problem is that some types of power (like wealth) accumulate with time... as usual, I think SMBC summarizes it in just one panel: https://www.smbc-comics.com/comic/social-longevity
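
A toy compounding calculation makes the point (my numbers, assuming a steady 2% real return on wealth):

```python
real_return = 0.02  # assumed 2%/year real return

# How much a unit of wealth multiplies over a normal career, a long life,
# and a millennium-scale life.
for years in [50, 200, 1000]:
    print(f"{years:4d} years at 2%: wealth multiplied by {(1 + real_return) ** years:.3g}")
```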

In "Three worlds collide", the rationalist character makes it clear for the captain of the ship that the latter has to make the important decisions, because it's not up to the elders, but to the young, to command. I don't think this necessarily applies to our current societies, but I can see why it makes sense in contexts with extreme longevity. Unfortunately, it looks like we think it might be easier to solve senescence than intergenerational cooperation.

I like the point that we should not only consider the individual case. I guess people's hope is that, in a future where everything goes maximally well, bad externalities like "old people's beliefs become ossified" can be addressed by some really clever governance structure.

On the individual case:

In my post "The Life-Goals Framework: How I Reason About Morality as an Anti-Realist," I mention longevity/not wanting to die a couple of times as an example of a "life goal." ("Life goal" is a term I introduce that means something like "an objective you care about terminally, so much that you have formed [or want to form] an optimizing mindset around it.") In the post, I argue that it's a personal choice which (if any) life goals we adopt.

One point I make there is that by deciding that not wanting to die is immensely important to you, you adopt a new metric of scoring how well you're doing in life. That particular metric (not wanting to die) places a lot of demands on you. I think this point is related to your example where you dislike telling people (implicitly or explicitly) that they're failing if they've had a happy routine, watched their grandkids grow up and have kids of their own, and feel like they can let go rather than needing to do more in the world. 

Here are some relevant quotes from my article (one theme is that the way we form life goals isn't too dissimilar from how we choose between leisure activities and adopt lifestyles or careers):

In the same way different people feel the most satisfied with different lifestyles or careers, people’s intuitions may differ concerning how they’d feel with the type of identity (or mindset) implied by a given life goal.

[...]

For the objective “valuing longevity,” it’s worth noting how life-altering it would be to adopt the corresponding optimizing mindset. Instead of trusting your gut about how well life is going, you’d have to regularly remind yourself that perceived happiness over the next decades is entirely irrelevant in the grand scheme of things. What matters most is that you do your best to optimize your probability of survival. People with naturally high degrees of foresight and agency (or those with somewhat of a “prepper mentality”) may actively enjoy that type of mindset – even though it conflicts with common sense notions of living a fulfilled life. By contrast, the people who are happiest when they enjoy their lives moment-by-moment may find the future-focused optimizing mindset off-putting.

[...]

Earlier on, I wrote the following about how we choose leisure activities [this was in the context of discussing whether to go skiing or spend the weekend cozily at home]:

> [...] [W]e tend to have a lot of freedom in how we frame our decision options. We use this freedom, this reframing capacity, to become comfortable with the choices we are about to make. In case skiing wins out, then "warm and cozy" becomes "lazy and boring," and "cold and tired" becomes "an opportunity to train resilience / apply Stoicism." This reframing ability is a double-edged sword: it enables rationalizing, but it also allows us to stick to our beliefs and values when we're facing temptations and other difficulties.

The same applies to how we choose self-oriented life goals. On one side, there's the appeal of the potential life-goal objective (e.g., "how good it would be to live forever" or "how meaningful it would be to have children"). On the other side, there are all the ways in which the corresponding optimizing mindset would make our lives more complicated and demanding. Human psychology seems somewhat dynamic here because the reflective equilibrium can end up on opposite sides depending on each side's momentum. Option one – by committing to the life goal in question, "complicated and demanding" can become "difficult but meaningful." Alternatively, there's option two. By deciding that we don't care about the particular life-goal objective, we can focus on how much we value the freedom that comes with it. In turn, that freedom can become part of our terminal values. (For example, adopting a Buddhist/Epicurean stance toward personal death can feel liberating, and the same goes for some other major life choices, such as not wanting children.)

These quotes describe how people form their objectives, the standards by which they measure their lives. Of course, someone can now say "An objective being 'demanding' isn't necessarily a good reason to give up on it. What about the possibility that some people form ill-inspired life goals because they don't know/don't fully realize what they're giving up?"

I talk about this concern ("ill-inspired life goals") in this section of the post. 

I have a paper from a few years ago arguing a similar point: https://iopscience.iop.org/article/10.1088/0031-8949/89/12/128005

From the abstract: 

This article argues that this research program [longevity treatment] is much more risky or less beneficial than its proponents argue. In particular, they tend to underestimate the concerns associated with the potentially drastic population growth that longevity treatment could cause. The ethical benefit often ascribed to longevity treatment is that such treatment would add more subjective life-years that are worth living. However, in light of contemporary environmental problems, such an increase of the human population might be reckless. Drastically reducing fertility to reduce risks associated with environmental stress would make the benefits of such technology much less compelling.

Send me an email for the pdf if you are interested. 

Conditional on immortality being achievable, we might also care about the hands in which it is achieved. And if there isn't much altruistic investment, it might by default fall into the hands of inordinately selfish billionaires.

The (increasing) majority of the world's population already believes that their soul is immortal and that their actions in this life will determine the quality of their afterlife/rebirth.  

I'm interested to hear religious perspectives on the value of life extension and immortality as I expect this will be one of the most important factors in the adoption of technological advances in longevity.

Additional arguments to explore:

  • Replaceability: it might be better on the margin to focus on increasing fertility relative to slowing down ageing, on grounds of cost-effectiveness, tractability, and generating comparable (or more) expected happy life. This would also limit the problem of immortal dictators and bad ideas.
  • Hard to argue that reducing ageing beats X-risks, unless one thinks slowing down ageing is a good intervention to reduce X-risks. We might care more about the long-term future if we live longer, but human psychology is such that we might discount our future selves anyway, and X-risks are actually not that much of a "long term" issue: AI X-risks, nuclear war and other pandemics have a decent chance of happening within a couple of decades (so most people alive today have a decent chance of living through one of them anyway).
  • Human aligned AI will help us with anti-ageing research: prioritise aligning AI and then ageing will be easier to solve
  • Replaceability with happy AIs: Biological organisms might not be a cost-effective way to create a lot of happy life in the world. Resources are better allocated to building happy simulations (in the form of brain emulations or human-aligned AIs) that can populate the world more effectively and live arbitrarily long lives in which they can self-modify (as long as they stay aligned)

Existential risks and AI considerations aside:

Ageing generates a significant amount of suffering (old bodies in particular tend to be painful for a while) and might be one of the dominant burdens on healthcare systems. I wonder, for example, how ageing compares with standard global health and development issues; it could plausibly match or even beat them in terms of scale (I'd like to see a cost-benefit analysis of that).

As a software engineer, I resonate with this post. In software engineering, I regularly have to make the decision of whether to improve existing software or replace it with a new solution.

Obvious caveats: humans are incomparable to pieces of software, and human genes evolve much more slowly than JavaScript frameworks.

I think software engineers succumb to the temptation to "start with a green field and clean slate" a bit too often. We tend to underestimate the value that lies in tried and tested software (and overestimate the difficulty of iterative improvements). Similarly, I think that I personally might underestimate the value, wisdom, and moral weight of existing people, particularly if we could solve most health problems. Yet I do believe that after some amount of life years, the value of a new life -- a newborn who can learn everything from scratch and benefit from all the goodness that humanity has accumulated before its birth -- exceeds the value of extending the existing life.


The trade-off is even more salient in animal agriculture. Obvious caveat: humans are incomparable to cows, and human genes evolve much more slowly than cow genes.

Because cows are bred to a quite extreme degree, a cow born today has substantially "better" genes than a cow born 10 years ago. This is one of the reasons why it is economically profitable to kill a dairy cow after 4-6 years rather than let it live to its full lifespan.

Your last bullet point reminded me of this anecdata: Several people who stand to inherit a meaningful amount of money have told me that they're being quiet about their charitable giving with their family until they've actually inherited funds, for fear of being disinherited (they believe that their family are very happy to give them, and strongly optimising to give them, a lot of money for use on themselves and their children, but are not on board with altruistic spending).

Humanity and society are weird. By some cosmic fluke involving brains and thumbs we figured out how to mold the landscape to grow our food, and later on figured out how to access million-year-old energy deposits in the lithosphere.

We are less than two centuries out from the beginning of industrialized society and we have no clue how to balance energy and resource flows to sustain civilization beyond a few more centuries. And now some of us apes are thinking, "hey, how about we don't die?" as if the current weird state of things somehow represents some new normal of human existence.

There has been ample debate around "strong sustainability" vs. "weak sustainability", which centers on how much technological substitution can overcome increasing environmental pressures. People have been using specific, limited examples of weak sustainability being true (see debates around Limits to Growth) to argue against strong sustainability. It's one thing to argue that we can change planetary limits / carrying capacity, and another to say that those limits don't exist. Limits exist; that falls out of some basic thermodynamics.

Pursuing life extension beyond a few centuries seems reckless without figuring out how to do strong sustainability first. With limits, resources are zero-sum beyond some geologic replenishment rate; longer human lives trade off against other human, non-human animal, and plant life, or draw down the resources available to people in the future. I would expect longtermists to be especially cautious about how reckless life extension could be, given limits.

Given that annual utility is the product of population size and annual utility per capita, which depends on real GDP per capita, I think it would be interesting to explore how life-extension interventions impact the fertility rate and economic growth.

I make a slightly different anti-immortality case here:

https://harsimony.wordpress.com/2020/11/27/is-immortality-ethical/

Summary: At a steady state of population, extended lifespan means taking resources away from other potential people. Technology for extended life may not be ethical in this case. Because we are not in steady state, this does not argue against working on life extension technology today.
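
A toy way to see the trade-off (my arithmetic, not from the linked post): at a steady-state population, the number of new people who ever get to exist scales inversely with lifespan.

```python
K = 10e9  # assumed steady-state population

# At steady state, births per year must equal deaths per year, i.e. K / L
# for lifespan L: doubling lifespans halves the number of distinct people
# who get to exist per century.
for L in [80, 160, 800]:
    new_people_per_century = K / L * 100
    print(f"lifespan {L:4d} yr -> {new_people_per_century:.2e} new people per century")
```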

Don't most dictators pass power to their children or someone close to them anyway? And aren't those people highly likely to remain dictators as well? Is it really the case that dictatorships are likely to become democracies when the dictator dies of old age? I don't find these questions trivial to answer.
