
A Happier World just published a video on existential risks!

The aim was to make an introductory video on the threats of nuclear war, engineered pandemics and AI.

Would love to hear what you think. Feel free to use it for your EA events!

Thanks to Yusuf and Chelsea for co-hosting and Sarah Emminghaus for helping write the script!

Transcript

Sources are marked with an asterisk. Text might differ slightly in wording from the final video.

Intro

A bear almost caused a nuclear war. 

Will we see an Earth-destroying asteroid this century?

A lab leak may have resulted in more than 700,000 deaths!

In this video we’ll cover these stories and more! Are any of these real threats to humanity?

Asteroids, climate change and more

In 1998, two movies exploring the same topic were released: asteroid impacts. Armageddon and Deep Impact both tell a story about humanity trying to prevent an asteroid impact from destroying all life on Earth.

In Deep Impact they accidentally split the 11-kilometre-wide comet in two instead of destroying it. The smaller half causes deadly tsunamis and the bigger half successfully gets blown up. Sorry for the spoilers.

In Armageddon, Bruce Willis plays an oil driller who sacrifices himself to detonate the Texas-sized asteroid so it doesn’t hit our planet. The asteroid splits into two halves that both neatly miss Earth. Directed by Michael Bay, of course.

“Tonight, the largest rock struck, a fireball rose hundreds of kilometres above the surface creating a hole in the atmosphere larger than earth. Shockwaves will ripple around the planet for days and the surface may be changed for weeks”.

After a comet hit Jupiter in 1994, people started taking risks from near-Earth objects seriously. In the same decade, the US Congress passed an act tasking NASA with finding 90 percent of all near-Earth asteroids and comets larger than a kilometre. Thankfully, this effort has so far been successful.*

So should we be worried about an asteroid killing all life on earth?

Not really.

Because of NASA’s asteroid-finding project, we can estimate that the odds of an asteroid between 1 and 10 kilometres wide hitting our planet are around 1 in 120,000 over the next century. These smaller asteroids could be extinction-level threats, but they likely aren’t. For asteroids bigger than 10 kilometres, like the one that wiped out the dinosaurs, the chance is about 1 in 150 million over the next century.

Asteroid impacts are just one example of natural extinction risks. But what about others? What about the eruption of supervolcanoes like the one under Yellowstone, or distant exploding stars releasing gamma-ray bursts that deplete the Earth’s ozone layer, exposing us to deadly UV radiation?

Estimating these risks is more difficult, as we don’t have as much data to work with. After researching various natural risks, the Oxford philosopher Toby Ord thinks that all natural risks combined have very roughly a 1 in 10,000 chance of causing an existential catastrophe within the next 100 years. This was just one part of a book he wrote on existential risks, called The Precipice. He defines an existential catastrophe as something that wipes out all of humanity or destroys our long-term potential - for example, by making it impossible to rebuild civilization or by guaranteeing a future dystopia.

So we’ve talked about natural existential risks. But what about threats caused by humanity? The first thing that comes to most people’s minds is climate change. And it’s undeniable that climate change will cause a lot of damage to life on Earth - it already has. Unfortunately, we can expect it will continue to do so, especially impacting the world’s poorest people.

But we’re cautiously optimistic that climate change is an issue that will get solved as long as we keep fighting against it. Kurzgesagt made a great video on this, and this channel made a response video on how you can help fight climate change.

But today we want to talk about threats that usually don’t come to most people’s minds. Threats that are at least as scary as climate change. These threats can pose a huge danger to humanity, but are often far more neglected than climate change is. So let’s also discuss what we can actually do to prevent these threats from happening.

Nuclear War

Did you know a black bear almost caused a nuclear war? Yep, that’s right. It was 1962, the height of the Cuban missile crisis. The Soviets had begun to construct nuclear launch sites in Cuba, allowing the USSR to target areas deep inside the American mainland. This alarmed the United States, and US aircraft with nuclear weapons were ready to take off the moment the USSR made a move.

One night at a US Air Defence base, a figure climbing a fence to get onto the grounds set off the intruder alarms.

A soldier started shooting at the intruder and notified other locations in the area, thinking the Soviets were moving against the Air Force’s assets. But one air base in Wisconsin accidentally set off a much more serious alarm because of a faulty wire, which led the pilots there to think a nuclear war had begun. They hurried to their aircraft so they could take off and use their nukes. Luckily, once people discovered the intruder was just a black bear, the Wisconsin base commander got one of his officers to drive a truck onto the flight line, right in the takeoff path of the jets. The mission was aborted just as the planes were starting their engines.****

We’re lucky nothing happened that night - who knows how many people could have died. 

But this wasn’t the only time we’ve come close to nuclear war. There have been many close calls - you can read all about them on the Future of Life Institute website.*

Since Russia’s invasion of Ukraine, nuclear war seems closer than it has been in the past couple of decades.

So how bad would it actually be?

A full-blown nuclear war could kill hundreds of millions of people - although at this moment in time it’s unlikely to wipe out humanity. For nuclear weapons to create enough fallout to cause a deadly level of radiation all over the world, ten times as many weapons as we have today would be necessary.

A nuclear war could still count as an existential risk - since its consequences might lead to some really scary futures. 

Following a nuclear war, firestorms could loft huge amounts of soot and dust into the atmosphere, which might block sunlight from reaching the Earth’s surface. This could lead to a temporary but massive drop in temperatures all over the globe. That’s known as a nuclear winter, and it would have catastrophic consequences: a lot of vegetation and animal life would die, and agriculture, food supply chains, and medical and transportation infrastructure would be destroyed. Millions of people could die.*

In recent years, however, there has been more scepticism about nuclear winter claims, with some scientists arguing that such a drop in temperature likely wouldn’t happen.* But since we’ve never experienced a nuclear war, we’re uncertain what the effects really would be - which means it could easily be worse than we currently think.

Toby Ord estimates that the risk of a nuclear war causing an existential catastrophe in the next century is roughly 1 in 1,000. That’s the same estimate he gives for climate change. The Precipice was published in 2020, so the Russian invasion of Ukraine could easily have increased the odds of a nuclear war happening. And Toby Ord is just one person. To get a better idea of the risks of nuclear war, we can look at forecasting platforms like Metaculus, where hundreds of people try to forecast specific events. Research has shown that, on average, crowd forecasts outperform the predictions of any one expert.* Metaculus forecasters predict there’s a 7% chance that a nuclear bomb will be detonated outside of testing before 2024*, a 33% chance that a nuclear bomb will be used in war before 2050*, and a 10% chance of a full-blown nuclear war happening before 2070.*
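As a quick illustration of that last point (a toy simulation with made-up numbers, not Metaculus’s actual aggregation method):

```python
# Toy simulation of why aggregating many forecasts tends to beat a single forecaster.
# Invented numbers: each forecaster sees the "true" probability with independent noise,
# and we compare a typical individual's error with the error of the crowd's median.

import random
from statistics import median

random.seed(0)
TRUE_PROBABILITY = 0.10   # the (in practice unknowable) true chance of the event
N_FORECASTERS = 200
NOISE = 0.08              # how far off any one forecaster tends to be

forecasts = [
    min(max(random.gauss(TRUE_PROBABILITY, NOISE), 0.0), 1.0)
    for _ in range(N_FORECASTERS)
]

individual_errors = [abs(f - TRUE_PROBABILITY) for f in forecasts]
crowd_error = abs(median(forecasts) - TRUE_PROBABILITY)

print(f"Average individual error:  {sum(individual_errors) / N_FORECASTERS:.3f}")
print(f"Error of the crowd median: {crowd_error:.3f}")  # typically much smaller
```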

So what’s currently being done to prevent this horrible scenario? 

There are a number of organisations like the Nuclear Threat Initiative* or the Global Catastrophic Risk Institute* that work to reduce this kind of risk - for example by modelling scenarios to quantify the risk of nuclear war and advocating for better policies.

Another way to reduce the existential threat of nuclear war, which could also help in the case of pandemics and asteroids, is to build lots of shelters around the world. Figuring out how to rapidly scale our food production in the case of a nuclear winter also seems like a promising idea.

Pandemics

In 1977, a lab leak likely started a pandemic that caused approximately 700,000 deaths! The virus behind it was H1N1, known as the virus that caused the 1918 influenza pandemic, in which it killed more than 17 million people in two years. It had gone extinct in humans in 1957, but then mysteriously reappeared - first in China and then in the Soviet Union. It became known as the Russian flu. What’s odd is that the strain of the virus looked eerily similar to a strain from 20 years before. Normally a virus would mutate a lot during that time. This is why experts argue that it likely came from either a lab leak or an ill-advised vaccine experiment or trial.**

Lab leaks have happened before - a whole list can be found on Wikipedia. Many of them occurred in labs with the highest biosecurity levels. The confirmed leaks had less severe consequences than the Russian flu, but many of the pathogens involved were more dangerous, so if they had spread, things could have been much worse.**

Biotechnology is improving rapidly. The genetic modification of existing organisms or the creation of new ones is becoming easier and cheaper year by year.

Research on viruses is being done all the time. If not well regulated, researchers might experiment with making viruses more lethal, more transmissible or more resistant to vaccines. Someone might accidentally create or spread a potentially disastrous disease. Leaks aren’t a thing of the past: just last year, a lab worker in Taiwan was bitten by a mouse infected with the Covid Delta variant and became infected.**

If we’re able to easily re-engineer human pathogens, malicious actors, too, can more easily create and spread horrific diseases.

This is why the aforementioned Oxford philosopher Toby Ord estimates there’s roughly a 1 in 30 chance of an engineered pandemic causing an existential catastrophe within the next hundred years.

So what can be done?

One promising path to significantly reduce the risks of pandemics is by setting up an early detection system. Currently, by the time we discover new pathogens, they’ve already spread far and wide and are thus difficult to contain. One way to rapidly detect new viruses would be to frequently test large chunks of the population. But this seems rather invasive, and comes with some privacy concerns. 

A more favourable approach is to check our wastewater for new or scary pathogens, especially in urban areas or near airports. It’s less invasive and can’t be traced back to specific people. This is not a new concept - a few cities have already done this to help track the Covid-19 pandemic.* MinuteEarth made a great short video on this idea.*
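As a very rough sketch of how such monitoring could raise an early flag (the readings, threshold and rule here are invented for illustration, not taken from any real surveillance programme):

```python
# Hypothetical sketch of wastewater-based early warning - invented data, simplistic rule.
# Idea: compare each day's measured pathogen concentration against a recent baseline
# and flag sustained spikes for follow-up testing; a flag is a prompt, not a diagnosis.

from statistics import mean, stdev

# Daily pathogen RNA concentrations from one treatment plant (arbitrary units, made up)
daily_readings = [12, 11, 13, 12, 14, 13, 12, 15, 14, 13, 22, 31, 45]

BASELINE_DAYS = 7   # how many past days define "normal"
Z_THRESHOLD = 3.0   # how many standard deviations above baseline counts as a spike

for day in range(BASELINE_DAYS, len(daily_readings)):
    baseline = daily_readings[day - BASELINE_DAYS:day]
    mu, sigma = mean(baseline), stdev(baseline)
    reading = daily_readings[day]
    if sigma > 0 and (reading - mu) / sigma > Z_THRESHOLD:
        print(f"Day {day}: reading {reading} is a spike vs baseline {mu:.1f} - investigate")
```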

Other things that could help reduce the risk of pandemics include developing better personal protective equipment (like cheaper, more comfortable and easier-to-produce suits), strengthening the Biological Weapons Convention (which currently has just four people working for it), and speeding up the approval and production of vaccines. After the genetic sequence of SARS-CoV-2 was published in January 2020, it took Moderna just two days to design their vaccine. But it took another 11 months for the vaccine to be distributed. This is impressive, but if we can make that timeline even shorter we could save even more lives.**

Future me here! Just wanted to add that there’s now a new organisation called the Nucleic Acid Observatory trying out wastewater monitoring, among other things.* Cities like Houston are now also monitoring the spread of monkeypox through their wastewater.*

Another cool idea to prevent pandemics is to use a specific type of lighting, called far-UVC lighting, inside buildings. In contrast to other kinds of UV lighting, far-UVC seems very unlikely to hurt humans, but it does efficiently kill airborne viruses. Currently far-UVC light bulbs are really expensive, but with more innovation and funding the costs could come down dramatically and become competitive with regular light bulbs. Other types of UV lighting could be used too, as long as they aren’t shone directly on humans - for example, by pointing them at the ceiling and letting air pass through the light, killing airborne pathogens. This is already widely available and cheaper than far-UVC lighting, so it could be tested by anyone today! More research on the effectiveness and safety of both far-UVC light and other UV light is needed, however.****

If you want to learn more about the future risks of pandemics, we made a whole video on the topic. Check it out!

AI

Have you ever gotten so frustrated by a video game that you wished you could just write a program to win the game for you? Well this is exactly what programmer Tom Murphy did. He trained an AI to score as many points as possible in all kinds of classic NES games. But when the AI tried to play Tetris, it simply paused the game right before it was about to lose. In Tom’s words, “The only winning move is not to play”.**

Similarly, when programmers built an AI for a boat-racing game, the AI found that just by spinning in circles over some bonus items forever, without ever completing the race, it could keep increasing its point total. Maximising points is what the programmers actually programmed the AI to do, but it’s not what they really wanted.***

The problem in these scenarios is not that the AI is incompetent, but that the designers of the system did not properly specify the AI’s goals. With the boat-racing game, for example, the developers wanted a system that was good at racing, but what they got instead was a system that was just good at racking up points.
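To make the boat-racing example concrete, here’s a minimal toy sketch (all names and numbers are invented, and it’s far simpler than the real game or its AI) of how optimising a mis-specified reward picks exactly the behaviour the designers didn’t want:

```python
# Toy illustration of reward misspecification - hypothetical, not the real game's code.
# The designers *want* the boat to finish the race, but the reward they specify
# only counts points - so the point-maximising policy never finishes.

from dataclasses import dataclass

@dataclass
class Outcome:
    points: int          # what the specified reward function measures
    race_finished: bool  # what the designers actually cared about

# Two strategies the agent could settle on (made-up payoffs)
strategies = {
    "finish_race":       Outcome(points=1_000, race_finished=True),
    "loop_over_bonuses": Outcome(points=50_000, race_finished=False),
}

def specified_reward(outcome: Outcome) -> int:
    # The proxy objective the programmers wrote down: just points.
    return outcome.points

def intended_objective(outcome: Outcome) -> int:
    # What they actually wanted: a finished race, with points only as a tiebreaker.
    return (1_000_000 if outcome.race_finished else 0) + outcome.points

best_for_agent = max(strategies, key=lambda s: specified_reward(strategies[s]))
best_for_designers = max(strategies, key=lambda s: intended_objective(strategies[s]))

print("Agent optimises the specified reward and picks:", best_for_agent)     # loop_over_bonuses
print("Designers actually wanted:", best_for_designers)                      # finish_race
```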

This is part of a much broader issue within artificial intelligence, the alignment problem: how do we ensure that AI systems actually end up doing what we want them to do?

So far the consequences we’ve talked about are mostly comedic errors. But this issue is not just limited to toys and games. It also happens when the stakes are much, much higher.

Take social media, for example. Companies like Facebook and YouTube use AI algorithms to recommend content to you based on what you want to watch. They do this to maximise ad revenue: the more watch time, the more ad revenue. At first glance this doesn’t seem like it would necessarily be an issue: you get to see more of what interests you, things you want to engage with, creating a more pleasant social media experience. But the AI’s goal is simply to increase clicks, watch time and retention. The algorithms don’t know or care what you’re watching or why. They don’t know or care if you’re being radicalised by political extremism or watching misinformation. They simply don’t capture the complexity of what humans truly want from social media, and the most engaging content isn’t necessarily the best content for you or for society at large.

And it’s making us miserable. Research has shown that since around 2011, when these types of recommender algorithms started being used heavily, self-harm and suicide rates among young girls have gone up. People have become more depressed. Romantic interaction has become more difficult. And since fake news is often more clickbaity and spreads much faster than real news, polarisation has skyrocketed. Populists love campaigning on social media, helping extremist parties and figures gain more votes all around the world.* This was probably never the intention of the developers of the algorithms. But now that clicks on recommended ads create a huge profit incentive for the companies, the situation is difficult to reverse.
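As a rough sketch of the incentive at work here (invented titles and numbers, not any platform’s real ranking code), a recommender that scores purely on predicted watch time will surface the most attention-grabbing item regardless of its value to the viewer:

```python
# Hypothetical sketch of an engagement-only recommender - not any real platform's code.
# The ranking signal is predicted watch time; nothing in the objective captures
# whether the content is accurate, healthy, or what the viewer would endorse.

candidate_videos = [
    {"title": "Calm explainer you'd endorse watching", "predicted_watch_minutes": 6.0},
    {"title": "Misleading outrage clip",               "predicted_watch_minutes": 14.0},
    {"title": "Video you actually searched for",       "predicted_watch_minutes": 9.0},
]

def engagement_score(video: dict) -> float:
    # Proxy objective: expected watch time (stand-in for clicks/retention/ad revenue).
    return video["predicted_watch_minutes"]

ranked = sorted(candidate_videos, key=engagement_score, reverse=True)
for video in ranked:
    print(f"{engagement_score(video):5.1f} min  {video['title']}")
# The outrage clip ranks first purely because it is predicted to hold attention longest.
```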

As bad as these societal effects are, AI systems could become a lot more advanced in the future, and alignment problems could be amplified into entirely new domains.

Imagine these types of issues happening on a much bigger scale, with AI systems that have control over supply chains, television broadcasts, medical supplies, financial systems, military uses or the whole entire internet. 

Experts think AI will evolve even further than it already has. AI is already better than us at certain tasks, like playing chess or Go. But the software that beats us at chess isn’t better than us at treating cancer. That could change.

This year alone there have been some striking developments. In April, the organisation OpenAI presented its image-generating AI, DALL-E 2, which can produce images that look a whole lot like actual art made by humans.

Google’s AI research lab DeepMind created an impressive system called Gato, the most general AI software to date. According to DeepMind, it can “play Atari, caption images, chat, stack blocks with a real robot arm and do much more”. The goal of developing such multi-domain systems is to eventually build Artificial General Intelligence, or AGI: systems that can do anything a human can do.

There have been so many recent breakthroughs in AI that it’s been difficult for the writers of this script to keep up.

But according to many AI researchers, there’s a good chance that in the next few decades AI will become as good as or better than humans at performing tasks across many different domains. It could solve complex problems in a fraction of a second; it could do hundreds of years’ worth of scientific and technological research in a matter of months. This would likely end up being the most impactful invention in all of human history.

It could have amazing benefits. It might be able to help us solve climate change, poverty and other issues we desperately need to solve. But we should not expect it to be aligned with our goals by default. And if we do end up creating powerful unaligned AI systems it could go really, really badly.

With current, weaker AIs, the alignment issues that do exist can sometimes be fixed after the fact, or we can simply turn the systems off. But as systems become more sophisticated, we might not be able to stop them at all. That could simply be because we depend on them, just like we depend on the internet. Once an AI becomes sophisticated enough to have a model of the world and far more advanced capabilities than humans, we’re unlikely to be able to monitor and respond fast enough to stop it from doing irreparable damage. Perhaps we’d think we had successfully stopped an AI, while in reality it had already made thousands of copies of itself online, ready to be reactivated at a moment’s notice.

This is not because the machines would have consciousness or a will to live. It’s simply because any sufficiently intelligent system will take actions like preventing itself from being shut off or seizing more power. Not because it wants those things as end goals, but because those things will always help the AI succeed at its assigned task no matter what that task may be.

In the words of famous AI researcher Stuart Russell: “You can’t fetch the coffee if you’re dead”.

Another important point to clarify is that artificial intelligence doesn’t need to look like a physical robot, like the one in The Terminator, to send us down a path of destruction.

A highly advanced AI system given the task of reversing climate change, for instance, could end up backfiring by performing experiments that ruin the atmosphere in other ways.

A highly advanced AI given the goal of eliminating cancer might literally satisfy that goal by killing all humans. 

An AI tasked with growing a company could seize all of Earth’s resources to try to maximise profit as much as possible. If you think corporations are a problem now, imagine them being run by superintelligent machines devoid of any human empathy.*

Of course, we could try to anticipate every failure mode and then build safeguards for each. But again, it’s hard to make rules that something much smarter than you can’t work around. If we tell it not to kill humans, it could build a new AI to do the killing for it. If we tell it not to build a new AI, it could bribe or blackmail people into doing its dirty work. And so on.

The fact is that there are many mathematical, computer-science and philosophical problems in alignment research that just haven’t been solved. We haven’t figured out how to fully understand an AI system’s reasoning process. We haven’t figured out how to program an AI system so that it wouldn’t prevent itself from being shut off.

Various famous people, including Stephen Hawking, Bill Gates and even Alan Turing, one of the founding fathers of modern computing, have expressed worries about misaligned AI.

Toby Ord puts the odds of unaligned AI causing an existential catastrophe in the next century at around one in ten - that’s a lot. But many AI experts put the risk much, much higher, some closer to 50% or more. And when experts disagree about risks, it seems prudent to be cautious. While some think AI is unlikely to be an existential threat, others think catastrophe is almost inevitable. Maybe if we’re lucky we’ll spend time preparing for a disaster that ends up being prevented, like the Y2K bug. Or maybe we’ll find out that there was some previously unknown reason not to worry. But assuming everything will be fine on the basis of a small portion of expert opinion is foolish.

So when might this threat actually emerge?

Metaculus forecasters currently estimate that general AI will arrive around 2040*, and that a weakly general AI might already be here by 2027*. These dates have been shifting a lot lately because of recent developments like the ones by OpenAI and Google’s DeepMind, and over time they’ve been creeping eerily closer to the present. So we urgently need to figure out a way to align the goals of AIs with our own. Currently no proposed solution is viable, much less good enough to rely on.

Luckily, there are more and more people working on AI safety - for example at the Center for Human-Compatible AI at UC Berkeley and the Centre for the Study of Existential Risk at Cambridge University, among many others. Thankfully, OpenAI and Google’s DeepMind have their own AI safety teams. Unfortunately, other AI research groups, including IBM and Meta (formerly Facebook), do not have large AI safety teams. In any case, we need a lot more AI safety researchers - there are currently fewer than a thousand people working on this problem. A problem that could be one of humanity's greatest challenges.**

This might all sound like crazy talk to you. And to be honest, I don’t blame anyone for having a healthy dose of scepticism. A short video like this could never do the complexity of this subject justice, especially since so many different experts hold so many different opinions on it. So we’d suggest you read more into the topic. The book “The Alignment Problem” by Brian Christian is a great starting point, as is “Human Compatible” by Stuart Russell. I’d also recommend the YouTube channel Rob Miles if you’re interested in more rigorous versions of the arguments we presented. More resources will be in the comments and in the description.* And of course, in the future this channel and my personal channel, Leveller, will likely make more videos on this topic.

Great power war

At the time of writing this video, the prediction platform Metaculus puts the likelihood of a third world war before 2050 at 25%* - that number has increased dramatically since the invasion of Ukraine. 

A great power conflict - meaning a violent conflict between major powers like the US, China or Russia - would be devastating and could, downstream, increase the probability of other existential risks.

The researcher Stephen Clare describes a great power conflict as a risk factor for existential risks, because it “plausibly increases the chance that a host of other bad outcomes come to pass”.*

This could happen, for example, because worldwide cooperation is significantly damaged, because nuclear weapons are used, because new bioweapons are developed, or because an AI arms race leads to a heightened risk of misaligned AI takeover.*

So it’s in the interest of the whole world to decrease the odds of a Great Power War.

Conclusion/outro

When you combine the threats from climate change, nuclear war, pandemics, AI, great power war and more, Toby Ord estimates a 1 in 6 chance of an existential catastrophe happening in the next century. That sounds scary. Even if you think that’s way off - even if you think it’s just 1 in 100 - that’s still way too high. Would you get on a plane with 1-in-100 odds of crashing? I sure wouldn’t.
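As a back-of-the-envelope illustration (using the per-century figures quoted earlier in this video and the simplifying, not-quite-correct assumption that the risks are independent), here’s how individual risks compound:

```python
# Rough back-of-the-envelope combination of the per-century estimates quoted in the video.
# Treats the risks as independent, which they aren't really - this only shows how
# several "small" probabilities compound into something uncomfortably large.

risks_per_century = {
    "natural risks (combined)": 1 / 10_000,
    "nuclear war":              1 / 1_000,
    "climate change":           1 / 1_000,
    "engineered pandemics":     1 / 30,
    "unaligned AI":             1 / 10,
}

p_none_happen = 1.0
for p in risks_per_century.values():
    p_none_happen *= (1 - p)

p_at_least_one = 1 - p_none_happen
print(f"Chance at least one occurs this century: {p_at_least_one:.1%}")  # roughly 13%, about 1 in 7.5
# Toby Ord's overall 1 in 6 is higher, partly because it also covers other risks
# and doesn't treat them as simply independent.
```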

For all the risks we’ve discussed in this episode, there are a few things you can do to help. You can become politically active and advocate for sensible long-term policies. You can work for an organisation trying to reduce existential risks, like some of the ones we mentioned during this video. For open positions, check out the 80,000 Hours job board! They list jobs at various organisations where you can have a big impact.

And for even more ideas, check out the description of this video!

I just want to give a big thanks to Yusuf and Chelsea for co-hosting this video. Check out Yusuf’s YouTube channel! I really liked his video on Doing Good Better, another book by Will MacAskill that shaped my thinking on charitable giving. Also check out Chelsea Parks’ TikTok or Instagram account, hooplahoma, where she showcases some incredible circus moves!

In the next video we’ll explore what would happen if 99% of the population vanished. So be sure to subscribe and ring that notification bell to stay in the loop!

We’ve tried our best to explain the topics in this video as accurately as possible. But since we’re human, there’s a good chance we’ve made mistakes. If you noticed a mistake or disagree with something, let us know in the comments down below. Thanks for watching!


 


