This is a linkpost for https://schwitzsplinters.blogspot.com/2022/01/against-longtermism.html

Eric Schwitzgebel, a philosophy professor at UC Riverside, just posted a criticism of longtermism on his blog.  In short, his arguments are:

  1. We live in a dangerous time in history, but there's no reason to think that the future won't be at least as dangerous.  Thus, we'll likely go extinct sooner rather than later, so the expected value of the future is not nearly as great as many longtermists make it out to be.
  2. It's incredibly hard to see how to improve the long-term future.  For example, should we almost destroy ourselves (e.g., begin a cataclysmic yet survivable nuclear war) to avoid the risks from even more dangerous anthropogenic threats?
  3. Apart from temporal discounting, there are reasonable ethical positions from which one might still have greater reason to help those who are temporally closer rather than more distant.  For example, Confucianism says we should focus more on those "closer" to us in the moral circle (friends, family, etc.) than those "farther" (including, presumably, future people).
  4. There's a risk that longtermism could make people ignore the plight of those currently suffering.  (Although, Schwitzgebel acknowledges, prominent longtermists like Ord also work in more neartermist areas.)

Overall, the critiques don't seem original.  The third argument strikes me mainly as a reminder that it's important to examine the case for longtermism from other ethical perspectives.

If you enjoyed reading Schwitzgebel's post, he has another EA-related post about AI alignment (as well as many posts on consciousness, e.g., in AI).

Comments

I blogged a response to Schwitzgebel's four objections, here.  But I'd welcome any suggestions for better responses!

Your reply to Eric's fourth objection makes an important point that I haven't seen mentioned before:

By contrast, I think there's a much more credible risk that defenders of conventional morality may use dismissive rhetoric about "grandiose fantasies" (etc.) to discourage other conventional thinkers from taking longtermism and existential risks as seriously as they ought, on the merits, to take them.  (I don't accuse Schwitzgebel, in particular, of this.  He grants that most people unduly neglect the importance of existential risk reduction.  But I do find that this kind of rhetoric is troublingly common amongst critics of longtermism, and I don't think it's warranted or helpful in any way.)

A view, of course, can be true even if defending it in public is expected to have bad consequences. But if we are going to consider the consequences of publicly defending a view in our evaluation of it, it seems we should also consider the consequences of publicly objecting to that view when evaluating those objections. 

The third argument seems to represent what a lot of people actually feel about utilitarian and longtermist ethics. They refuse to take impartiality to its logical extreme, and instead remain partial to helping those who feel near to them.

From a theoretical standpoint, few academic philosophers will argue against "impartiality" or against the idea that all people have the same moral value. But in the real world, just about everyone prioritizes people who are close to them: family, friends, people of the same country or background. Often this is not conceived of as selfishness: my favorite Bruce Springsteen song, "Highway Patrolman", sings the praises of a police officer who puts family above country and lets his brother escape the law.

Values are a very human question, and there’s as much to learn from culture and media as there is from academic philosophy and logical argument. Perhaps that’s merely the realm of descriptive ethics, and it’s more important to learn the true normative ethics. Or, maybe the academics have a hard time understanding the general population, and would benefit from a more accurate picture of what drives popular moral beliefs.


Thanks for sharing this!

Quoting from the article:

First, it's unlikely that we live in a uniquely dangerous time for humanity, from a longterm perspective. Ord and other longtermists suggest, as I mentioned, that if we can survive the next few centuries, we will enter a permanently "secure" period in which we no longer face serious existential threats. Ord's thought appears to be that our wisdom will catch up with our power; we will be able to foresee and wisely avoid even tiny existential risks, in perpetuity or at least for millions of years. But why should we expect so much existential risk avoidance from our descendants? Ord and others offer little by way of argument.

[...]

You might suppose that, as resources improve, people will grow more cooperative and more inclined toward longterm thinking. Maybe. But even if so, cooperation carries risks. For example, if we become cooperative enough, everyone's existence and/or reproduction might come to depend on the survival of the society as a whole. The benefits of cooperation, specialization, and codependency might be substantial enough that more independent-minded survivalists are outcompeted. If genetic manipulation is seen as dangerous, decisions about reproduction might be centralized. We might become efficient, "superior" organisms that reproduce by a complex process different from traditional pregnancy, requiring a stable web of technological resources. We might even merge into a single planet-sized superorganism, gaining huge benefits and efficiencies from doing so. However, once a species becomes a single organism the same size as its environment, a single death becomes the extinction of the species. Whether we become a supercooperative superorganism or a host of cooperative but technologically dependent individual organisms, one terrible miscalculation or one highly unlikely event could potentially bring down the whole structure, ending us all.

A more mundane concern is this: Cooperative entities can be taken advantage of. As long as people have differential degrees of reproductive success, there will be evolutionary pressure for cheaters to free-ride on others' cooperativeness at the expense of the whole. There will always be benefits for individuals or groups who let others be the ones who think longterm, making the sacrifices necessary to reduce existential risks. If the selfish groups are permitted to thrive, they could employ for their benefit technology with, say, a 1/1000 or 1/1000000 annual risk of destroying humanity, flourishing for a long time until the odds finally catch up. If, instead, such groups are aggressively quashed, that might require warlike force, with the risks that war entails, or it might involve complex webs of deception and counterdeception in which the longtermists might not always come out on top.

The point about cooperation carrying risks is interesting and not something I've seen elsewhere. 
