
If all goes well, human history is just beginning. Our species could survive for billions of years, reaching heights of flourishing unimaginable today. But this vast future is at risk. We have gained the power to destroy ourselves, and all our potential, forever, and we haven’t yet gained the wisdom to ensure that we don’t. 

Toby Ord, an Oxford moral philosopher and research fellow at the Future of Humanity Institute, expands on these ideas and discusses adopting the perspective of humanity — one of the major themes of his forthcoming book. This talk was filmed at EA Global: London 2019.

Below is a transcript of the talk, which we've lightly edited for clarity. 

Editor's note: The Q&A comes from Toby's delivery of this talk at EA Global: London 2019.

The Talk

In the grand course of human history, where do we stand? Could we be living at one of the most influential times that will ever be? Our species, Homo sapiens, arose on the savannas of Africa 200,000 years ago. What set us apart from the other animals was both our intelligence and our ability to work together, to build something greater than ourselves. From an ecological perspective, it was not a human that was remarkable, but humanity.

Crucially, we were able to cooperate across time, as well as space. If each generation had to learn everything anew, then even a crude iron shovel would have been forever beyond our reach. But we learned from our ancestors, added innovations of our own, and passed this all down to our children. Instead of dozens of humans in cooperation, we had tens of thousands, cooperating across the generations, preserving and improving ideas through deep time. Little by little, our knowledge and our culture grew.

At several points in humanity's long history, there has been a great transition — a change in human affairs that accelerated our progress and shaped everything that would follow. 

Ten thousand years ago was the Agricultural Revolution. Farming could support 100 times as many people on the same piece of land, making much wider cooperation possible. Instead of a few dozen people working together, we could have millions. This allowed people to specialize in thousands of different trades. There were rapid developments in institutions, culture, and technology. We developed writing, mathematics, engineering, law. We established civilization. 

Four hundred years ago was the Scientific Revolution. The scientific method replaced a reliance on perceived authorities with careful observation of the natural world, seeking simple and testable explanations for what we saw. The ability to test and discard bad explanations helped us break free from dogma, and for the first time, allowed the systematic creation of knowledge about the workings of nature. Some of this newfound knowledge could be harnessed to improve the world around us. So the accelerated accumulation of knowledge brought with it an acceleration of technological innovation, giving humanity increasing power over the natural world. 

Two hundred years ago was the Industrial Revolution. This was made possible by the discovery of immense reserves of energy in the form of fossil fuels, allowing us access to a portion of the sunlight that shone upon the earth over millions of years. Productivity and prosperity began to accelerate, and a rapid sequence of innovations ramped up the efficiency, scale, and variety of automation, giving rise to the modern era of sustained growth.

But there has recently been another transition that I believe is more important than any that have come before. With the detonation of the first atomic bomb, a new age of humanity began. At that moment, a rapidly accelerating technological power finally reached the threshold where we might be able to destroy ourselves — the first point where the threat to humanity from within exceeded the threats from the natural world. A point where the entire future of humanity hangs in the balance. Where every advance our ancestors have made could be squandered, and every advance our descendants may achieve could be denied. 

These threats to humanity and how we address them define our time. The advent of nuclear weapons posed a real risk of human extinction in the 20th century.

With the continued acceleration of technology, and without serious efforts to protect humanity, there is strong reason to believe the risk will be higher this century, and increase with each century that technological progress continues. Because these anthropogenic risks outstrip all natural risks combined, they set the clock on how long humanity has left to pull back from the brink. If I'm even roughly right about their scale, then we cannot survive many centuries with risk like this. It is an unsustainable level of risk. Thus, one way or another, this new period is unlikely to last more than a small number of centuries. Either humanity takes control of its destiny, and reduces the risk to a sustainable level, or we destroy ourselves.
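
As a rough numerical sketch of why such risk is unsustainable: the chance of surviving n centuries at a constant per-century risk p is (1 - p)^n, so survival odds decay quickly. The snippet below uses, purely for illustration, the roughly one-in-six figure Toby gives later in this talk.

```python
# Survival probability over n centuries at a constant per-century risk p
# is (1 - p) ** n; at risks anywhere near this scale, the chance of
# lasting many centuries shrinks quickly.
p = 1 / 6  # illustrative per-century risk; this figure appears later in the talk
for n in (1, 2, 5, 10, 20):
    print(f"{n:>2} centuries: {(1 - p) ** n:6.1%} chance of survival")
```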

Consider human history as a grand journey through the wilderness. There are wrong turns and times of hardship, but also times of sudden progress and heady views. In the middle of the 20th century, we came through a high mountain pass and found that the only route onward was a narrow path along the cliff side, a crumbling ledge on the brink of a precipice. Looking down brings a deep sense of vertigo. If we fall, everything is lost. We do not know just how likely we are to fall. But it is the greatest risk to which we have ever been exposed. This comparatively brief period is a unique challenge in the history of our species.

Our response to it will define our story. Historians of the future will name this time, and school children will study it. But I think we need a name now. I call it “the precipice.” The precipice gives our time immense meaning. In the grand course of history, if we make it that far, this is what our time will be remembered for: for the highest levels of risk and for humanity opening its eyes, coming into its maturity, and guaranteeing its long and flourishing future. This is the meaning of our time.

I'm not glorifying our generation, nor am I vilifying us. The point is that our actions have uniquely high stakes. Whether we are great or terrible will depend upon what we do with this opportunity. I hope we live to tell our children and grandchildren that we did not stand by, but used this chance to play the part that history gave us.

Humanity's future is ripe with possibility. We've achieved a rich understanding of the world we inhabit, and a level of health and prosperity of which our ancestors could only dream. We have begun to explore the other worlds and heavens above us, and to create virtual worlds completely beyond our ancestors’ comprehension. We know of almost no limits to what we might ultimately achieve. 

Human extinction would foreclose our future. It would destroy our potential. It would eliminate all possibilities but one: a world bereft of human flourishing. Extinction would bring about this failed world and lock it in forever. There would be no coming back.

But it is not the only way our potential could be destroyed. Consider a world in ruins, where a catastrophe has done such damage to the environment that civilization has completely collapsed and is unable to ever be reestablished. Even if such a catastrophe did not cause our extinction, it would have a similar effect on our future. The vast realm of futures currently open to us would have collapsed to a narrow range of meager options. We would have a failed world with no way back. 

Or consider a world in chains, locked under the rule of an oppressive totalitarian regime determined to perpetuate itself. If such a regime could be maintained indefinitely, then descent into this totalitarian future would also have much in common with extinction — it would leave just a narrow range of terrible futures, with no way out. 

What all of these possibilities have in common is that humanity's once soaring potential would be permanently destroyed. [It would mean] not just the loss of everything we have, but everything we could have ever achieved. Any such outcome is called an “existential catastrophe” and the risk of it occurring an “existential risk.”

There are different ways of understanding what makes an existential catastrophe so bad. In The Precipice, I explore five different moral foundations for the importance of safeguarding humanity from existential risks:

1. Our concern could be rooted in the present — the immediate toll such a catastrophe would take on everyone alive at the time it struck. 
2. It could be rooted in the future, stretching so much further than our own moment — everything that would be lost. 
3. It could be rooted in the past, on how we would fail every generation that came before us. 
4. We could also make a case based on virtue, on how by risking our entire future, humanity itself displays a staggering deficiency of patience, prudence, and wisdom. 
5. We could make a case based on our cosmic significance, on how this might be the only place in the universe where there's intelligent life, the only chance for the universe to understand itself, on how we are the only beings who can deliberately shape the future toward what is good or just.

Thus, the importance of protecting humanity's potential draws support from a wide range of ideas and moral traditions. I will say a little more about the future and the past. 

The case based on the future is the one that inspires me most. If all goes well, human history is just beginning. Humanity is about 200,000 years old, but the Earth will remain habitable for hundreds of millions of years more — enough time for millions of future generations. Enough to end disease, poverty, and injustice forever. Enough to create heights of flourishing unimaginable today. And if we could learn to reach out further into the cosmos, we could have more time yet, trillions of years, to explore billions of worlds. Such a lifespan places present-day humanity in its earliest infancy. A vast and extraordinary adulthood awaits. This is the long-termist argument for safeguarding humanity's potential: Our future could be so much longer and better than our fleeting present.

There are actions that only our generation can take to affect that entire span of time. This could be understood in terms of all of the value in all of the lives in every future generation (or in many other terms), because almost all of humanity's life lies in the future. Therefore, almost everything of value lies in the future as well: almost all of the flourishing, almost all of the beauty, our greatest achievements, our most just societies, our most profound discoveries. This is our potential — what we could achieve if we pass the precipice and continue striving for a better world.

But this isn't the only way to make a case for the pivotal importance of existential risk. Consider our relationship to the past. We are not the first generation. Our cultures, institutions and norms, knowledge, technology, and prosperity were gradually built up by our ancestors over the course of 10,000 generations. Humanity's remarkable success has been entirely reliant on our capacity for intergenerational cooperation. Without it, we would have no houses or farms. We'd have no traditions of dance or song, no writing, no nations. Indeed, when I think of the unbroken chain of generations leading to our time, and of everything they have built for us, I'm humbled. I'm overwhelmed with gratitude, shocked by the enormity of the inheritance and by the impossibility of returning even the smallest fraction of the favor. Because a hundred billion of the people to whom I owe everything are gone forever. And because what they created is so much larger than my life, than my entire generation.

If we were to drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in many different ways. We would fail to achieve the dreams they hoped for as they worked toward building a just world. We would betray the trust they placed in us, their heirs, to preserve and pass on their legacy. And we would fail in any duty we had to pay forward the work they did for us, to help the next generation as they helped ours.

Moreover, we would lose everything of value from the past that we might have reason to preserve. Extinction would bring with it the ruin of every cathedral and temple, the erasure of every poem in every tongue, the final and permanent destruction of every cultural tradition the earth has known. In the face of serious threats of extinction or of a permanent collapse of civilization, a tradition rooted in preserving or cherishing the richness of humanity would also cry out for action.

We don't often think of things at this scale. Ethics is most commonly addressed from the individual perspective: What should I do? Occasionally, it is considered from the perspective of a group or nation, or, even more recently, from the global perspective of everyone alive today. We can take this a step further, exploring ethics from the perspective of humanity — not just our present generation, but humanity over deep time, reflecting on what we achieved in the last 10,000 generations and what we may be able to achieve in the eons to come. 

This perspective is a major theme of my book. It allows us to see how our own time fits into the greater story and how much is at stake. It changes the way we see the world and our role in it, shifting our attention from things that affect the present moment to those that could make fundamental alterations to the shape of the long-term future. What matters for humanity? What part in this plan should our generation play? What part should each of us play?

The Precipice has three chapters on the risks themselves, delving deeply into the science behind them. There are the natural risks, the current anthropogenic risks, and the emerging risks. One of the most important conclusions is that these risks aren't equal. The stakes are similar, but some risks are much more likely than others. I show how we can use the fossil record to bound the entire natural risk to about a one-in-10,000 chance per century. I judge the existing anthropogenic risk to be about 30 times larger than that, and the emerging risk to be about 50 times larger again — roughly one in six — over the coming century. It’s like Russian roulette. 
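
To make the arithmetic behind these multipliers explicit, here is a minimal sketch. It assumes, as the one-in-six figure implies, that each multiplier compounds on the previous estimate rather than on the natural baseline:

```python
# Rough reconstruction of the per-century estimates quoted above.
natural = 1 / 10_000            # natural risk, bounded via the fossil record
anthropogenic = 30 * natural    # existing anthropogenic risk: ~0.3% per century
emerging = 50 * anthropogenic   # emerging risk: ~15% per century, roughly one in six

print(f"natural:       {natural:.2%} per century")
print(f"anthropogenic: {anthropogenic:.2%} per century")
print(f"emerging:      {emerging:.0%} per century")
```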

This makes a huge difference when it comes to our priorities, though it doesn't quite mean that everyone should work on the most likely risks. We also care about their tractability and neglectedness, the quality of the opportunity at hand, and, for direct work, your personal fit.

What we do with our future is up to us. Our choices will determine whether we live or die, fulfill our potential or squander our chance at greatness. We are not hostages to fortune. While each of our lives may be tossed about by external forces — a sudden illness or outbreak of war — humanity's future is almost entirely within humanity's control. In The Precipice, I examine what I call “grand strategy for humanity.” I ask, “What kind of plan would give humanity the greatest chance of achieving our full potential?” 

I divide things into three phases. The first great task for humanity is to reach a place of safety, a place where existential risk is low and stays low. I call this “existential security.” This requires us to do the work commonly associated with reducing existential risk by working to defuse the various threats. It also requires putting in place the norms and institutions to ensure existential risks stay low forever. This really is within our power. There appear to be no major obstacles to humanity lasting many millions of generations. If only that were a key global priority! There are great challenges in getting people to look far enough ahead and to see beyond the parochial conflicts of the day. But the logic is clear and the moral argument is powerful. It can be done, but that is not the end of our journey.

Achieving existential security would give us room to breathe. With humanity's long-term potential secured, we would be past the precipice, free to contemplate the range of futures that lie open before us. And we could take time to reflect upon what we truly desire, upon which of these visions for humanity would be the best realization of our potential. We can call this “the long reflection.” 

We rarely think this way. We focus on the here and now. Even those of us who care deeply about the long-term future need to focus most of our attention on making sure we have a future. But once we achieve existential security, we will have as much time as we need to compare the kinds of futures available to us and judge which is best. 

So far, most work in moral philosophy has focused on negatives, on avoiding wrong actions and bad outcomes. The study of positives is at a much earlier stage of development. During the long reflection, we would need to develop mature theories that allow us to compare the grand accomplishments our descendants might achieve with eons and galaxies as their canvas. While moral philosophy would play a central role, the long reflection would require insights from many disciplines. For it isn't just about determining which futures are best, but which are feasible in the first place — and which strategies are most likely to bring them about. This would require analysis from science, engineering, economics, political theory, and beyond.

Our ultimate aim, of course, is the final step: fully achieving humanity's potential. But this can wait upon step two: the serious reflection about which future is best and how to achieve that future without any fatal missteps. And while it wouldn't hurt to begin such reflection now, it is not the most urgent task. To maximize our chance of success, we need first to get ourselves to safety, to achieve existential security. Only we can make sure we get through this period of danger and give our children the very pages upon which they will author our future.

Q&A

Nathan Labenz (Interviewer): Thank you very much, Toby. Listening to you, I noticed a striking amount of emotional language for a philosopher — maybe that's not uncommon for you, but it's surprising to me. How much of this project is about making an appeal that you think everyone can buy into? Are there any worldviews that you weren't able to find the right appeal for in making this overarching case?

Toby: That's a good question. I was definitely focused in the book on trying to make a case for existential risk that is both compelling and, in retrospect, obvious — the type of thing that people will just assume, after reading the book, that they've always thought. I wanted to do that rather than make a contrarian case (i.e., "You might have thought one thing, but actually something else is the case"). I really tried to get that across. In this talk, I also focused on that aspect of the book: the framings that make it seem obvious and natural, versus those that make it seem kind of geeky, "maths-y," or counterintuitive.

The book itself is more of a middle ground. It includes quite a lot of technical and quantitative information, as well as these powerful framing arguments. I've tried to find a lot of ways in which people from different backgrounds and traditions could all find the central idea compelling. But I didn't look through the list of everything that everyone believes; I restricted myself to the things that I actually find compelling and can see where people are coming from.

Nathan: To give you a challenging example, you sometimes hear from people who say, "Oh, humanity — we've got it so bad. We should just go extinct and let the plants and animals have the earth." Is there an argument in the book that speaks to those people and tries to bring them into the fold?

Toby: There is a little on that — depending on what brought someone to think that.

Nathan: We don't have too much time. One practical question someone asked in the comments is this: "Given this analysis, do you think people should be donating to all long-term-oriented causes, and is that what you personally have come to do?"

Toby: No. I do think that this is very important, and that it is the central issue of our time (and potentially the most cost-effective as well). I think that effective altruism would be much the worse, though, if it specialized in just one area. I think having a breadth of causes that people are interested in, united by their interest in effectiveness, is central to the community's success. 

In my personal case, all of the proceeds from the book — all of the advances and royalties — are going to organizations focused on making sure we have a long-term future. But I donate from my own income to global health organizations. Back when I started Giving What We Can, that's how I [approached the topic of donating]. And I'm happy to stick with that. I think there are really important things going on in global health as well. Also, we want to be careful about not criticizing each other for supporting the second-best thing.

The same is true for the different existential risks themselves. I want to stress that there is huge variability in terms of these risks' probabilities. That is even more true when you break down the individual events within those categories. The risk of stellar explosions, such as supernovae, is, in my view, something like nine orders of magnitude less than the risk of other events, such as engineered pandemics or artificial intelligence. So there's huge variation.

But it is also true that your own personal fit, or the quality of the opportunity that's on the table at the moment, are both multipliers. They could easily be a multiplier of, say, 10 or more, which could actually change [the priority of a cause and justify the decision] to work on something else. And I do think we should ultimately support a portfolio of causes, where the portfolio is balanced toward the causes we think are most important, but also includes some of these other causes.

Nathan: I'd like to go a little deeper into this question of probabilities. You talked about [how we can use the fossil record to bound the entire natural risk to about a one-in-10,000 chance per century, the existing anthropogenic risk to be about 30 times larger, and the emerging risk to be about 50 times larger again — roughly one in six — over the coming century.] I have a two-part question.

First, could you briefly explain how you get to the one-in-six estimate? I'd love to hear about that.

Second, on the topic of existential security: how low would the risk have to go in order to be sustainable for the long term?

Toby: Those are good questions. In terms of how I think about the probabilities: These are not probabilities I expect all readers will come to based on the evidence I present. There are three chapters on the risks themselves and multiple chapters illuminating them from different angles — the ethics, the economics, the history, and so on. But the one-in-six chance is largely due to my beliefs about the risks of engineered pandemics, artificial intelligence, and also unknown things that could happen. That's where a lot of that estimate comes from. I say a bit more about it in the book; it's hard to explain briefly.

On the question of existential security, what you ultimately need to do is get to a position where the risk per century is going down, so that it gets low and stays low. Suppose it got to one-in-a-million per century and stayed there. You'd expect about a million centuries, on average, before something went wrong. I think we could do even better than that if we have a declining schedule of risk — if we make it so that each century, we lower the risk by some very small percentage compared to the century before, then we could potentially last until we reach the limits of what's possible (rather than waiting until we screw things up). I really do think we can achieve that.
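
A minimal sketch of this probability reasoning: the one-in-a-million figure is from Toby's answer, while the 1%-per-century decline rate is an assumed value chosen purely for illustration.

```python
import math

def survival_prob(risks):
    """Probability of surviving every century in a given schedule of risks."""
    log_p = sum(math.log1p(-r) for r in risks)  # sum log(1 - r) for numerical stability
    return math.exp(log_p)

# Constant risk: one in a million per century, for a million centuries.
constant = (1e-6 for _ in range(1_000_000))
print(f"constant:  {survival_prob(constant):.3f}")   # ~ 1/e ~ 0.368

# Declining risk: one in a million, shrinking by 1% each century.
declining = (1e-6 * 0.99**t for t in range(1_000_000))
print(f"declining: {survival_prob(declining):.4f}")  # ~ exp(-1e-4) ~ 0.9999
```

Under a constant risk p, the expected time to catastrophe is 1/p centuries, so catastrophe eventually becomes near-certain on that timescale; under a geometrically declining schedule, the total risk ever incurred converges to a finite sum, so the chance of surviving indefinitely stays bounded above zero.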

Nathan: We have a number of great questions coming in on the app, but unfortunately, that's all the time we have. Thank you very much, Toby Ord!

Comments

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is a nice poetic summary of a core strand of EA thinking, one that is then backed up by longer, more critical work. I think this would be a pretty good thing to include as a summary of some of the motivation for the X-risk/longtermist side of EA.
