Crossposted from my blog.

[Epistemic status: Quick discussion of a seemingly useful concept from a field I as yet know little about.]

I've recently started reading around the biosecurity literature, and one concept that seems to come up fairly frequently is the Web of Prevention (also variously called the Web of Deterrence, the Web of Protection, the Web of Reassurance...[1]). Basically, this is the idea that the distributed, ever-changing, and dual-use nature of potential biosecurity threats means that we can't rely on any single strategy (e.g. traditional arms control) to prevent them. Instead, we must rely on a network of different approaches, each somewhat failure-prone, that together can provide robust protection.

For example, the original formulation of the "web of deterrence" identified the key elements of such a web as

comprehensive, verifiable and global chemical and biological arms control; broad export monitoring and controls; effective defensive and protective measures; and a range of determined and effective national and international responses to the acquisition and/or use of chemical and biological weapons[2].

This later got expanded into a broader "web of protection" concept that included laboratory biosafety and biosecurity; biosecurity education and codes of conduct; and oversight of the life sciences. I'd probably break up the space of strategies somewhat differently, but I think the basic idea is clear enough.

The key concept here is that, though each component of the Web is a serious part of your security strategy, you don't expect any one component to be fully protective or rely on it too heavily. Rather than a simple radial web, a better metaphor might be a suit of layered armour: each layer catches some potential threats while inevitably letting others slip through. No single layer is perfect, but enough layers stacked on top of one another can together prove highly effective at blocking attacks[3].
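
To make that intuition concrete, here's a minimal sketch (with invented catch rates, purely for illustration) of how several individually leaky layers can combine, assuming each layer's failures are independent:

```python
# Toy model: probability that a threat slips through several protective layers.
# The catch rates below are invented for illustration, not real estimates.
layer_catch_rates = [0.7, 0.5, 0.6, 0.4]  # fraction of threats each layer stops

p_breach = 1.0
for catch in layer_catch_rates:
    p_breach *= 1 - catch  # the threat must slip past every layer in turn

print(f"P(slips through all layers) = {p_breach:.3f}")
# 0.3 * 0.5 * 0.4 * 0.6 = 0.036: under 4% of threats get through,
# even though the single best layer alone still misses 30% of them.
```

The multiplication is what does the work here, and it only goes through if the layers fail (roughly) independently of one another; see footnote 3.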

This makes sense. Short of a totally repressive surveillance state, it seems infeasible to eliminate all dangerous technologies, all bad actors, or all opportunities to do harm. But if we make means, motive and opportunity each rare enough, we can prevent their confluence and so prevent catastrophe.

Such is the Web of Prevention. In some ways it's a very obvious idea: don't put all your eggs in one basket, don't get tunnel vision, cover all the biosecurity bases. But there are a few reasons I think it's a useful concept to have explicitly in mind.

Firstly, I think the concept of the Web of Prevention is important because multilayer protective strategies like this are often quite illegible. One can easily focus too much on one strand of the web / one layer of the armour, and conclude that it's far too weak to achieve effective protection. But if that layer is part of a system of layers, each of which catches some decent proportion of potential threats, we may be safer than we'd realise if we only focused on one layer at a time.

Secondly, this idea helps explain why so much important biosecurity work consists of dull, incremental improvements. Moderately improving biosafety or biosecurity at an important institution, or tweaking your biocontainment unit protocols to better handle an emergency, or changing policy to make it easier to test out new therapies during an outbreak...none of these is likely to single-handedly make the difference between safety and catastrophe, but each can contribute to strengthening one layer of the system.

Thirdly, and more speculatively, the presence of a web of interlocking protective strategies might mean we don't always have to make each layer of protection maximally strong to keep ourselves safe. If you go overboard on surveillance of the life sciences, you'll alienate researchers and shut down a lot of highly valuable research. If you insist on BSL-4 conditions for any infectious pathogens, you'll burn a huge amount of resources (and goodwill, and researcher time) for not all that much benefit. And so on. Better to set the strength of each layer at a judicious level[4], and rely on the interlocking web of other measures to make up for any shortfall.

Of course, none of this is to say that we're actually well-prepared and can stop worrying. Not all strands of the web are equally important, and some may have obvious catastrophic flaws. And a web of prevention optimised for preventing traditional bioattacks may not be well-suited to coping with the biosecurity dangers posed by emerging technologies. Perhaps most importantly, a long-termist outlook may substantially change the Web's ideal composition and strength. But in the end, I expect something like the Web, and not a single ironclad mechanism, to be what protects us.


  1. Rappert, Brian, and Caitriona McLeish, eds. (2007) A web of prevention: biological weapons, life sciences and the governance of research. ↩︎

  2. Rappert & McLeish, p. 3 ↩︎

  3. To some extent, this metaphor depends on the layers in the armour being somewhat independent of each other, such that holes in one are unlikely to correspond to holes in another. Even better would be an arrangement such that the gaps in each layer are anticorrelated with those in the next layer. If weaknesses in one layer are correlated with weaknesses in the next, though, there's a much higher chance of an attack slipping through all of them. I don't know to what extent this is a useful insight in biosecurity; the toy simulation after these notes illustrates the difference. ↩︎

  4. Of course, in many cases the judicious level might be "extremely strong". We don't want to be relaxed about state bioweapons programs. And we especially don't want those responsible for safety at each layer to slack off because the other layers have it covered: whatever level of stringency each layer is set to, it's important to make sure that level of stringency actually applies. But still, if something isn't your sole line of defence, you can sometimes afford to weaken it slightly in exchange for other benefits. ↩︎
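
Following up on note 3, here's a toy Monte Carlo sketch (invented numbers again) of how much correlation between layers' gaps matters. Each scenario has three layers that each miss 30% of threats overall; the only difference is whether their misses are independent or perfectly correlated:

```python
import random

random.seed(0)
N = 100_000  # number of simulated threats

def breach_rate(correlated: bool) -> float:
    """Fraction of threats that slip through three layers, each missing 30% overall."""
    slipped = 0
    for _ in range(N):
        if correlated:
            # Perfectly correlated gaps: one shared draw decides all three layers,
            # e.g. a threat type that every layer happens to be blind to.
            got_through = random.random() < 0.3
        else:
            # Independent gaps: each layer misses its own 30% of threats.
            got_through = all(random.random() < 0.3 for _ in range(3))
        slipped += got_through
    return slipped / N

print(f"independent layers: {breach_rate(False):.1%}")  # ~0.3^3 = 2.7%
print(f"correlated layers:  {breach_rate(True):.1%}")   # ~30%: no better than one layer
```

Perfect correlation is the worst case; anticorrelated gaps (layers deliberately designed to cover each other's blind spots) would do even better than independence.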



Comments (7)

A related notion from computer security: defense in depth.

Another related concept I just stumbled upon is the "Swiss cheese model of accident causation". According to Wikipedia, this is:

a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of swiss cheese, stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses which are "layered" behind each other. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses also exist, to prevent a single point of failure.

[...] The Swiss cheese model of accident causation illustrates that, although many layers of defense lie between hazards and accidents, there are flaws in each layer that, if aligned, can allow the accident to occur.

This is referenced in a viral image about preventing covid-19 infections. 

Also related is the recent (very interesting) paper using that same term (linkpost).

(Interestingly, I don't recall the paper mentioning getting the term from computer security, and, skimming it again now, I indeed can't see them mention that. In fact, they only seem to say "defence in depth" once in the paper.

I wonder if they got the term from computer security and forgot they'd done so, if they got it from computer security but thought it wasn't worth mentioning, or if the term has now become fairly common outside of computer security, but with the same basic meaning, rather than the somewhat different military meaning. Not really an important question, though.)

I've noticed something similar around "security mindset": Eliezer and MIRI have used the phrase to talk about a specific version of it in relation to AI safety, but the term, as far as I know, originates with Bruce Schneier and computer security, although I can't recall MIRI publications mentioning that much, possibly because they didn't even realize that's where the term came from. Hard to know, and probably not very relevant other than to weirdos like us. ;-)

The initial post by Eliezer on security mindset explicitly cites Bruce Schneier as the source of the term, and quotes extensively from this piece by Schneier.

Another good idea from the biosecurity literature is "distancing": any bio threat increases the tendency of people to distance from each other via quarantine, masks, and less travel, and thus R0 will decline, hopefully below 1.
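
To put rough numbers on that comment (illustrative only; real transmission dynamics are much messier): the effective reproduction number scales with the fraction of transmission that remains, so distancing pushes R below 1 once it removes more than 1 − 1/R0 of transmission.

```python
# Back-of-the-envelope effective R under behavioural distancing.
# Both numbers are hypothetical, chosen only to illustrate the threshold.
R0 = 2.5          # basic reproduction number without any distancing
reduction = 0.65  # fraction of transmission removed by quarantine, masks, less travel

R_eff = R0 * (1 - reduction)
threshold = 1 - 1 / R0  # minimum reduction needed to push R_eff below 1

print(f"R_eff = {R_eff:.2f}")                 # 0.88 < 1: the outbreak declines
print(f"needed reduction > {threshold:.0%}")  # 60% for R0 = 2.5
```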
