
Hi all,

I thought people might be interested in some of the policy work the Global Priorities Project has been looking into. Below I'm cross-posting some notes on one policy idea. I've talked to several people with expertise in biosafety and had positive feedback, and am currently looking into how best to push further on this (it will involve talking to people in the insurance industry).

In general, quite a bit of policy is designed by technocrats and is already quite effective. Some other areas are governed by public opinion, which makes it very hard to gain any traction. When we've looked into policy, we've been interested in finding areas which navigate between these extremes -- and which also don't sound too outlandish, so that they have some reasonable chance of broad support.

I'd be interested in hearing feedback on this from EAs. Criticisms and suggestions also very much welcome!

---

Requiring liability insurance for dual-use research with potentially catastrophic consequences

These are notes on a policy proposal aimed at reducing catastrophic risk. They cover some of the advantages and disadvantages of the idea at a general level; they do not yet constitute a proposal for a specific version of the policy.

Research produces large benefits. In some cases it may also pose novel risks, for instance work on potential pandemic pathogens. There is widespread agreement that such 'dual use research of concern' poses challenges for regulation.

There is a convincing case that we should avoid research with large risks if we can obtain the benefits just as effectively with safer approaches. However, there do not currently exist natural mechanisms to enforce such decisions. Government analysis of the risk of different branches of research is a possible mechanism, but it must be performed anew for each risk area, and may be open to political distortion and accusations of bias.

We propose that all laboratories performing dual-use research with potentially catastrophic consequences should be required by law to hold insurance against damaging consequences of their research.

This market-based approach would force research institutions to internalise some of the externalities and thereby:

  • Encourage university departments and private laboratories to work on safer research, when the benefits are similar;
  • Incentivise the insurance industry to produce accurate assessments of the risks;
  • Incentivise scientists and engineers to devise effective safety protocols that could be adopted by research institutions to reduce their insurance premiums.

Current safety records do not always reflect an appropriate level of risk tolerance. For example, the economic damage caused by the escape of the foot and mouth virus from a BSL-3 or BSL-4 lab in Britain in 2007 was high (mostly through trade barriers) and could have been much higher (the previous outbreak in 2001 caused £8 billion of damage). If the lab had known it was liable for some of these costs, it might have taken even more stringent safety precautions. In the case of potential pandemic pathogen research, insurers might require that it take place at BSL-4, or that other technical safety improvements, such as “molecular biocontainment”, be implemented.
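
To illustrate the incentive mechanism with a toy calculation (all numbers below are hypothetical placeholders, not risk estimates):

```python
# Illustrative toy calculation: all numbers are hypothetical placeholders.

def fair_premium(p_escape_per_year, expected_damage, loading=0.3):
    """Actuarially fair annual premium, plus an insurer's loading factor."""
    return p_escape_per_year * expected_damage * (1 + loading)

# Suppose an escape would cause damage on the order of the 2001 outbreak (~£8bn),
# and the lab's baseline escape probability is 1 in 10,000 per year (hypothetical).
baseline = fair_premium(1e-4, 8e9)   # ≈ £1.0m per year
# A safety upgrade that cuts the escape probability fivefold cuts the premium too.
upgraded = fair_premium(2e-5, 8e9)   # ≈ £0.2m per year

print(f"baseline premium: £{baseline:,.0f}")
print(f"with upgrade:     £{upgraded:,.0f}")
```

The point is only that a premium priced on expected liability falls roughly in proportion to reductions in the escape probability, which is what rewards investment in safety.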

Possible criticisms and responses

  • The potential risks are too large, and nobody would be willing to insure against them.
    • We could avoid this by placing an appropriate limit on the amount of insurance that could be required. If it were a sufficiently large sum (perhaps in the billions of dollars), the effect should be more appropriate risk aversion, even if the tail risk for the insurer were not fully internalised.
  • The risks are too hard to model, and nobody would be willing to insure against them.
    • There are insurance markets for some risks that are arguably harder to model and have equally high potential losses, such as terrorism.
  • We already have demanding safety standards, so this wouldn’t reduce risk.
    • Many of the current safety standards focus on the occupational health and safety of lab workers rather than on risks to the general public.
    • The market-driven approach proposed would focus attention on whichever steps were rationally believed to have the largest effect on reducing risk, and reduce other bureaucratic hurdles.
    • Liability has been useful at improving behaviour in other domains, for example in industrial safety.
  • It is hard to draw a line around the harmful effects of research. Should we punish research which enables others to perform harmful acts?
    • This is a hard question, but we think we would get much of the benefit by using the simple rule that labs are liable only for direct consequences of their work. For example, the release – accidental or deliberate – of a pathogenic virus manufactured in that lab.
  • Research has positive externalities, and it is unfair if researchers have to internalise only the negative ones.
    • This is true, although research receives funding for precisely this reason.
    • If we don’t make an attempt to introduce liability then we are effectively subsidising unsafe research relative to safe research.
  • Why require insurance rather than just impose liability? Shouldn’t this be a decision for the individuals?
    • Some work may be sufficiently risky that the actors cannot afford to self-insure. In such circumstances it makes sense to require insurance (just as we require car insurance for drivers).
    • This will help to ensure that appropriate analysis of the risks is performed.
  • Which research should this apply to? How can we draw a line?
    • It would be too bureaucratically costly to impose this requirement on all research. We should adapt existing guidelines on which areas need extra oversight. To begin with, potential pandemic pathogen research is an obvious area which should be included.

 

---

Comments

Excellent idea! The rest of this comment is going to be negative, but my balance of opinion is not reflected in the balance of words.

One potential downside is that dangerous research would move to other countries. However, this effect would be reduced by the dominance of the anglosphere in many areas of research. Additionally, some research with the potential to cause local but not global disasters represents a national but not international externality, in which case other countries are also appropriately incentivised to adopt sensible precautions. So on net this does not seem to be a very big concern.

Another is the lack of any appreciation for public choice in this argument. Yes, I agree this policy would be good if implemented exactly as described. But policies that are actually implemented rarely bear much resemblance to the policies originally advocated by economists. Witness the huge gulf between the sort of financial reform that anyone actually advocates and Dodd-Frank, which as far as I'm aware satisfied basically no-one who was familiar with it. The relevant literature is the public choice literature. So here are some ways this could be misimplemented:

  • Politically unpopular research is crushed by being deemed dangerous. Obvious targets include research into nuclear power, racial differences, or GMOs.
  • Regulatory capture means that incumbents are allowed to get away with under-insuring, while new entrants are stifled. (In much the same way that financial regulation has benefited large banks at the expense of small ones).
  • The regulations are applied by risk-averse regulators who under-approve, resulting in too much deterrent to risky work, like with the FDA.
  • The regulators are not clever enough to work out what is actually risky, in the same way that financial regulators have proved themselves incapable of identifying systemic risks ahead of time, and central banks incapable of spotting asset bubbles. As such, the relationship between the level of liability researchers had to insure against and the true level of liability would be poor.

Why require insurance rather than just impose liability? Shouldn’t this be a decision for the individuals?

Some work may be sufficiently risky that the actors cannot afford to self-insure. In such circumstances it makes sense to require insurance (just as we require car insurance for drivers).

Drivers are generally individuals, whereas research is generally done by institutions. It seems plausible to me that creditworthy institutions/individuals should not have to take out car insurance. If Oxford faced a potential liability in the billions, I'm sure it would insure. I guess the main threat comes from small, limited liability institutions whose only purpose is to do this one kind of research, and are thus unconcerned with the downside. Or large institutions with poor internal governance.

It seems plausible to me that creditworthy institutions/individuals should not have to take out car insurance. If Oxford faced a potential liability in the billions, I'm sure it would insure. I guess the main threat comes from small, limited liability institutions whose only purpose is to do this one kind of research, and are thus unconcerned with the downside. Or large institutions with poor internal governance.

I agree that in general it's fine for creditworthy institutions to self-insure. The issue is that the scale of possible liability is large enough (billions of dollars, perhaps hundreds of billions of dollars) that even institutions which routinely self-insure against all manner of risks may not be creditworthy against the worst outcomes. In some cases they are explicitly or implicitly state-backed, but if nobody in the chain has considered the possible liability you don't get the proper incentive effects. If there were a market so that the risk of the research were priced, I'd expect better governance even at institutions which self-insured.

I agree.

I agree that there are some issues regarding the version of the policy that would actually be implemented. This is a large part of the motivation for requiring insurance rather than direct state regulation, and I think this offers a robustness which goes some way towards defusing your concerns.

For example:

Politically unpopular research is crushed by being deemed dangerous. Obvious targets include research into nuclear power, racial differences, or GMOs.

If there's just an insurance requirement, it's hard for extra costs to swell much above the true expected externalities (if it's safe, they should be able to find someone willing to insure it cheaply).

Yup, I agree again. Though there is still the risk that the political system might manufacture externalities to accuse the researchers of.

If Oxford faced a potential liability in the billions, I'm sure it would insure.

The managers of Harvard's endowment circa 2008 would beg to differ, I think. (It lost about $10 billion, nearly a third of its value.)

It seems like for some of these institutions, how long of a view they take is substantially determined by contingent factors like who's the university president at the time.

I'm sorry, I don't quite understand your point. There's a huge difference between investment risk, for which you are paid the equity risk premium, and the sorts of things people insure against.

I worry about the "that will never happen" effect. Mandating that researchers take out the insurance prevents it being dismissed on that front, but how do we make the insurance agencies take it seriously?

It seems all too plausible that the insurer will just say "this will never happen, and if it does it will be unusual enough that we can probably hold the whole thing up in court - just give them a random number". For a big enough risk, if it happens then the insurer might expect to cease to exist in the upheaval, which also doesn't give them much incentive to give a good estimate.

In general, I'm not sure whether insurers are quite robust enough institutions to be likely to have rational decision procedures over risks that are this big and unlikely.

This is why the reinsurance market exists.

I agree that the process isn't going to be perfect. But the relevant question is whether it's sufficiently better than the status quo.

For what it's worth, I think the insurers may be more likely to over-hedge and only offer insurance at unreasonably high prices. That might be less of a problem (or it might make this whole thing politically infeasible).

Really interesting idea.

Two questions:

  • Not knowing anything about the insurance industry, I'm wondering whether the market for this type of insurance would be big enough for insurers to be willing to offer it.
  • It seems to me that this kind of policy would risk decreasing the amount of research done on natural pandemics. If anything, this seems to be the kind of research there should be more rather than less of. It's true that the point of the insurance is to push people to safer ways of doing the same research, but the increased bureaucracy could make institutions shy away from the research entirely. However, maybe this could be counteracted by lobbying for more funding for research on natural pandemics.

It seems to me that this kind of policy would risk decreasing the amount of research done on natural pandemics. If anything, this seems to be the kind of research there should be more rather than less of.

Interesting. I'm not sure whether we should expect it to decrease or increase the safe research done on natural pandemics. I would guess increase it slightly. There is quite a lot of research in this area with essentially no risk. This paper does a good job of explaining alternatives.

Not knowing anything about the insurance industry, I'm wondering whether the market for this type of insurance would be big enough for insurers to be willing to offer it.

Yes, there's a possible issue here. Insurers already have models for the effects of natural pandemics; pricing insurance on the research would need additional models for the chance of accidental release. It might be possible to subsidise this modelling as a public good, if that were required to enable a market.
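
As a minimal sketch of how the two modelling components might fit together (the release probability and damage distribution below are made-up placeholders, not estimates):

```python
import random

# Hypothetical placeholders only; not actual estimates.
P_RELEASE_PER_LAB_YEAR = 5e-4   # chance of an accidental release (the new model needed)

def sample_outbreak_damage():
    """Stand-in for an insurer's existing pandemic damage model (damage in $)."""
    # Heavy-tailed spread: most releases are contained cheaply, a few are very costly.
    return random.lognormvariate(20, 2)   # median around $0.5bn

def expected_annual_liability(n_samples=100_000):
    avg_damage = sum(sample_outbreak_damage() for _ in range(n_samples)) / n_samples
    return P_RELEASE_PER_LAB_YEAR * avg_damage

print(f"rough expected annual liability: ${expected_annual_liability():,.0f}")
```

The second factor could reuse insurers' existing natural-pandemic damage models; only the first would need new modelling, which is the part that might be subsidised as a public good.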

Companies like Berkshire Hathaway are generally happy to do one-off policies for strange and unusual risks, so it seems there wouldn't be much trouble getting insurance companies interested in serving this market.

I think this is an excellent idea, but one thing I didn't understand: you said "catastrophic" risks and then mentioned foot and mouth disease, which doesn't seem very catastrophic to me.

Are you proposing this for what the EA community would call "existential" risks (e.g. unfriendly AI)? Or just things on the order of a few billion dollars of damage?

This is really aimed at things which could cause damages in perhaps the $100 million - $1 trillion range. I think this would have a broadly positive effect on larger risks through two routes:

First, some larger risks come with associated smaller-scale risks, and you'd do similar things to reduce each of them. I think this is the case with the potential pandemic pathogen research. Requiring liability insurance won't get people to fully internalise the externalities associated with the tail risk, but it should make them take substantial steps in the right direction.

Second, a society which takes seriously a wider class of unprecedented, low-probability, high-stakes risks will probably be better at responding to existential risks as well.

Now that it's been pointed out, this seems pretty obvious to me, especially for risky research. I think that the bureaucracy of requiring insurance calculations and paperwork would outweigh the benefits in most areas of academic research, but there could definitely be some areas for which it should be considered.

I really liked Larks' comment, but I'd like to add that this also incentivizes research teams to work in secret. Many AI projects (and some biotech) are currently privately funded rather than government funded, and so they could profit by not publicizing their efforts.

This is true, although I think the number of researchers who would be happy to work on something illegally would be quite a lot lower than those happy to work on something legally.

A similar effect I'm more worried about is pushing the research over to less safety-conscious regimes. But I'm not certain about the size of this effect; good regulation in one country is often copied, and this is an area where international agreements might be possible (and international law might provide some support, although it is untested: see pages 113-122 of this report in a geoengineering context).