The main goal of my work these days is trying to reduce the chances of individuals or small groups causing large-scale harm through engineered pandemics, potentially civilizational collapse or extinction. One question in figuring out whether this is worth working on, or funding, is: how large is the risk?

One estimation approach would be to look at historical attacks, but while they've been terrible they haven't actually killed very many people. The deadliest one was the September 11 attacks, at ~3k deaths. This is much smaller in scale than the most severe instances of other disasters, like dam failure (25k-250k dead after 1975's Typhoon Nina) or pandemics (75M-200M dead in the Black Death). If you tighten your reference class even further, to include only historical biological attacks by individuals or small groups, the one with the most deaths is just five, in the 2001 anthrax attacks.

Put that way, I'm making a pretty strong claim: while the deadliest small-group bio attack ever only killed five people, we're on track for a future where one could kill everyone. Why do I think the future might be so unlike the past?

Short version: I expect a technological change which expands which actors would try to cause harm.

The technological change is the continuing decrease in the knowledge, talent, motivation, and resources necessary to create a globally catastrophic pandemic. Consider someone asking the open source de-censored equivalent of GPT-6 how to create a humanity-ending pandemic. I expect it would read virology papers, figure out what sort of engineered pathogen might be appropriate, walk you through all the steps in duping multiple biology-as-a-service organizations into creating it for you, and give you advice on how to release it for maximum harm. And even without LLMs, the number of graduate students who would be capable of doing this has been increasing quickly as technological progress and biological infrastructure decrease the difficulty.

The other component is a shift in which actors we're talking about. Instead of terrorists, using terror as a political tool, consider people who believe the planet would be better off without humans. This isn't a common belief, but it's also not that rare. Consider someone who cares deeply about animals, ecosystems, and the natural world, or is primarily focused on averting suffering: they could believe that while the deaths of all living people would be massively tragic, it would still give us a much better world on balance. Note that they probably wouldn't be interested in smaller-scale attacks: if it doesn't have a decent chance of wiping out humanity then they'd just be causing suffering and chaos without making progress towards their goals; they're not movie villains! Once a sufficiently motivated person or small group could potentially kill everyone, we have a new kind of risk from people who would have seen smaller-scale death as negative.

Now, these people are not common. There's a trope where, for example, opponents of environmentalism claim that human extinction is the goal, even though most radical environmentalists would see human extinction as a disaster. But what makes me seriously concerned is that as the bar for causing extinction continues to lower, the chances that someone with these views has the motivation and drive to succeed get dangerously high. And since these views are disproportionately common among serious engineering-minded folks, willing to trust the moral math, I think some will be the kind of highly capable and careful people who could work in secret for years, sustained by a clear conviction that they were doing the right thing.

Fortunately, I think this is a risk we can seriously lower. For example, we should:

  • Require biology-as-a-service companies to screen for pathogens and apply anti-money laundering-style customer screening ("KYC").

  • Ensure LLMs do not help people kill everyone.

  • Verify companies releasing open source LLMs have built them in a way where their safeguards can't be trivially removed.

  • Detect stealth pathogens, in a way that would give us warning while there is still time to do something about it (what I'm working on).

  • Develop much better and cheaper PPE, so that once we detect a pandemic we can keep the core functions of society running.

  • Improve our ability to evaluate new vaccines and other medicines much more quickly so we could potentially roll out a countermeasure in time to stop an in-progress pandemic.

If you want to read more in this direction I'd recommend Kevin Esvelt's 80,000 Hours podcast appearance (transcript) and his Delay, Detect, Defend paper.

Comments

Thanks for writing this, and mentioning my related post, Jeff!

The technological change is the continuing decrease in the knowledge, talent, motivation, and resources necessary to create a globally catastrophic pandemic.

I think this depends on how fast safety measures like the ones you mentioned are adopted, and on how the offense-defense balance evolves with technological progress. It would be great if Open Phil released the results of their efforts to quantify biorisk, one of whose aims was:

  • Enumerating possible ‘phase transitions’ that would cause a radical departure from relevant historical base rates, e.g. total collapse of the taboo on biological weapons, such that they become a normal part of military doctrine.

Update on December 3: there are plans to publish the results:

I worked on a project for Open Phil quantifying the likely number of terrorist groups pursuing bioweapons over the next 30 years, but didn't look specifically at attack magnitudes (I appreciate the push to get a public-facing version of the report published - I'm on it!).

One interesting implication of this theory is that the spread of strict utilitarian philosophies would be a contributing factor to existential risk. The more people are willing to bite utilitarian bullets, the more likely it is that one will bite the "kill everyone" bullet. 

This would make the EA movement potentially existentially dangerous. Even if we don't agree with the human extinction radicals, people might split off from the movement and end up supporting it. One interpretation of the FTX affair was that it was a case of seemingly EA aligned people splitting off to do unethical things justified by utilitarian math.  

One interesting implication of this theory is that the spread of strict utilitarian philosophies would be a contributing factor to existential risk. The more people are willing to bite utilitarian bullets, the more likely it is that one will bite the "kill everyone" bullet.

Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction; whereas average people with foggy worldviews tend to randomly flip in various directions depending on what hot takes they've recently read.

Even if we don't agree with the human extinction radicals, people might split off from the movement and end up supporting it.

Most human extinction radicals seem to emerge completely separate from the EA movement and never intersect with it, e.g. AI scientists who believe in human extinction. If people like Tomasik or hÉigeartaigh ever end up pro-extinction, it's probably because they recently did a calculation that flipped them to prioritize s-risk over x-risk, but sign uncertainty and error bars remain more than sufficiently wide to keep them in their network with their EV-focused friends (at minimum, due to the obvious possibility of doing another calculation that flips them right back).

One interpretation of the FTX affair was that it was a case of seemingly EA aligned people splitting off to do unethical things justified by utilitarian math.

Wasn't the default explanation that SBF/FTX had a purity spiral with no checks and balances, and that, combined with the high uncertainty of crypto trading, SBF became psychologically predisposed to betting all of EA on his career instead of betting his career on all of EA? Powerful people tend to become power seeking, and that's a pretty solid prior in most cases.

Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction; whereas average people with foggy worldviews tend to randomly flip in various directions depending on what hot takes they've recently read.

Foggy worldviews tend to flip people around based on raw emotions, tribalism, nationalism, etc. None of these are likely to get you to the position "I should implement a long term Machiavellian scheme to kill every human being on the planet". The obvious point being that "every human on the planet" includes one's family, friends, and country, so almost anyone operating on emotions will not pursue such a goal.

On the other hand, utilitarian math can get to "kill all humans" in several ways, just by messing around with different assumptions and factual beliefs. Of course, I don't agree with those calculations, but someone else might. If we convince everyone on earth that the correct thing to do is "follow the math", or "shut up and calculate", then some subset of them will have the wrong assumptions, or incorrect beliefs, or just mess up the math, and conclude that they have a moral obligation to kill everyone. 

Upvoted. I'm really glad that people like you are thinking about this.

Something that people often miss with bioattacks is the economic dimension. After the 2008 financial crisis, economic failure/collapse became perhaps the #1 goalpost of the US-China conflict.

It's even debatable whether the 2008 financial crisis was the cause of the entire US-China conflict (e.g. lots of people in DC and Beijing would put the odds at >60% that >50% of the current US-China conflict was caused by the 2008 recession alone, in contrast to other variables like the emergence of unpredictable changes in cybersecurity).

Unlike conventional war (e.g. over Taiwan) and cyberattacks, economic downturns have massive and clear effects on the balance of power between the US and China, with very little risk of a pyrrhic victory (I don't currently know how this compares to things like cognitive warfare, which also yields high-stakes victories and defeats that are hard to distinguish from natural causes).

Notably, the imperative to cause massive economic damage, rather than destroy the country itself, allows attackers to ratchet down the lethality as far as they want, so long as it's enough to cause lockdowns which cause economic damage (maybe mass IQ reduction or other brain effects could achieve this instead). 

GOF research is filled with people who spent >5 years deeply immersed in a medical perspective e.g. virology, so it seems fairly likely to me that GOF researchers will think about the wider variety of capabilities of bioattacks, rather than inflexibly sticking to the bodycount-maximizing mindset of the Cold War.

I think that due to disorganization and compartmentalization within intelligence agencies, as well as unclear patterns in how competent groups emerge and decay, it's actually more likely that easier-access biological attacks would first be carried out by radicals with privileged access within state agencies or state-adjacent organizations (like Booz Allen Hamilton, or the Internet Research Agency, which was accused of interfering with the 2016 election on behalf of the Russian government).

These radicals might incorrectly (or even correctly) predict that their country is a sinking ship and that the only way out is to personally change the balance of power; theoretically, they could even correctly predict that they are the only ones left competent enough to do this before it's too late.
