
Abstract

Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best. If this thesis is correct, it suggests that for decision purposes, we can often simply ignore shorter-run effects: the primary determinant of how good an option is (ex ante) is how good its effects on the very long run are.

This paper sets out an argument for strong longtermism. We argue that the case for this thesis is quite robust to plausible variations in various normative assumptions, including those relating to population ethics, interpersonal aggregation and decision theory. We also suggest that while strong longtermism as defined above is a purely axiological thesis, a corresponding deontic thesis plausibly follows, even by non-consequentialist lights.

Introduction

A striking fact about the history of civilisation is just how early we are in it. There are 5000 years of recorded history behind us, but how many years are still to come? If we merely last as long as the typical mammalian species, we still have 200,000 years to go; there are a further one billion years until the Earth is sterilised by the Sun; and trillions of years until the last conventional stars form. Even on the most conservative of these timelines, we have progressed through only a tiny fraction of history. If humanity’s saga were a novel, we would still be on the very first page.

Normally, we pay scant attention to this fact. Political discussions are centred on the here and now, focused on the latest scandal or the next election. When a pundit takes a ‘long-term’ view, they talk about the next five or ten years. We essentially never think about how our actions today might influence civilisation hundreds of thousands of years hence.

We believe that this neglect of the very long-run future is a serious mistake. An alternative perspective is given by longtermism, according to which we should be particularly concerned with ensuring that the far future goes well. In this article, we go further, arguing for strong longtermism: the view that impact on the far future is the most important feature of our actions today. We will defend both axiological and deontic versions of this thesis.

Humanity, today, is like an imprudent teenager. The most important feature of the most important decisions that a teenager makes, like what subject to study at university and how diligently to study, is not the enjoyment they will get in the short term, but how those decisions will affect the rest of their life. Likewise, what matters most about our most important decisions today is not their short-term effects, but how they will shape the rest of civilisation’s future.

The structure of the paper is as follows. Section 2 sets out more precisely the thesis we will primarily defend: axiological strong longtermism (ASL). This thesis states that, in the most important decision situations facing agents today, (i) every option that is near-best overall is near-best for the far future, and (ii) every option that is near-best overall delivers much larger benefits in the far future than in the near future.

We primarily focus on the decision situation of a society deciding how to spend its resources. We use the cost-effectiveness of antimalarial bednet distribution as an approximate upper bound on attainable near-future benefits per unit of spending. Towards establishing a lower bound on the highest attainable far-future expected benefits, section 3 argues that there is, in expectation, a vast number of sentient beings in the future of human-originating civilisation. Section 4 then argues, by way of examples involving existential risk, that the project of trying to beneficially influence the course of the far future is sufficiently tractable for ASL(i) and ASL(ii) to be true of the above decision situation. Section 5 argues that the same claims and arguments apply equally to an individual deciding how to spend resources, and an individual choosing a career. We claim these collectively constitute the most important decision situations facing agents today, so that ASL follows.
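As a rough sketch of how these two bounds are meant to combine (the notation below is ours, introduced purely for illustration, and is not the paper's own formalism):

```latex
% Illustrative notation (our labels, not the paper's):
%   U_near : approximate upper bound on attainable near-future expected
%            benefits per unit of resources (proxied by bednet distribution)
%   L_far  : lower bound on the highest attainable far-future expected
%            benefits per unit of resources (argued for in sections 3--4)
% If sections 3--4 succeed in establishing that
\[
  L_{\mathrm{far}} \;\gg\; U_{\mathrm{near}},
\]
% then, for any option $o^{*}$ that is near-best overall,
\[
  \mathbb{E}\!\left[B_{\mathrm{far}}(o^{*})\right]
  \;\gg\;
  \mathbb{E}\!\left[B_{\mathrm{near}}(o^{*})\right],
\]
% which is the shape of claim ASL(ii) for this decision situation;
% ASL(i) is argued for along similar lines.
```

The inequality is only schematic: it assumes that near- and far-future expected benefits combine additively, and that an option's being "near-best overall" means coming within a small margin of the attainable optimum.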

The remainder of the paper explores objections and extensions to our argument.

Section 6 argues that the case for ASL is robust to several plausible variations in axiology, concerning risk aversion, priority to the worse off, and population ethics. Section 7 addresses the concern that we are clueless about the very long-run effects of our actions. Section 8 addresses the concern that our argument turns problematically on tiny probabilities of enormous payoffs.

Section 9 turns to deontic strong longtermism. We outline an argument to the effect that, according to any plausible non-consequentialist moral theory, our discussion of ASL also suffices to establish an analogous deontic thesis. Section 10 summarises.

The argument in this paper has some precedent in the literature. Nick Bostrom (2003) has argued that total utilitarianism implies we should maximise the chance that humanity ultimately settles space. Nick Beckstead (2013) argues, from a somewhat broader set of assumptions, that “what matters most” is that we do what’s best for humanity’s long-term trajectory. In this paper, we make the argument for strong longtermism more rigorous, and we show that it follows from a much broader set of empirical, moral and decision-theoretic views. In addition, our argument in favour of deontic strong longtermism is novel.

We believe that strong longtermism is of the utmost importance: that if society came to adopt the views we defend in this paper, much of what we prioritise in the world today would change.

Read the rest of the paper

Comments

Just in case people stumble upon this post in future: It's possible you'd be interested in some of the thoughts I wrote on this paper elsewhere on the Forum.
