The COVID-19 pandemic was likely due to a lab leak in Wuhan. The question is still up for public debate, but it will likely be settled when the US intelligence community reports on its attempts to gather information about what happened at the Wuhan Institute of Virology between September and December of 2019 and about suspicious activities there around that time.

However, even in the remote chance that this particular pandemic didn't happen as a downstream consequence of gain-of-function research, we had good reason to believe that the research was extremely dangerous.

Marc Lipsitch, who should be on the radar of the EA community given that he spoke at EA Boston in 2017, wrote in 2014 about the risks of gain-of-function research:

A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.

If we make the conservative assumption of 20 full-time people working on gain-of-function research and take his lower bound of risk, that gives us a 1% chance per year of gain-of-function research causing a pandemic. These are conservative numbers, and there's a good chance that the real figure is higher than that.
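As a sanity check, here is a minimal sketch of that arithmetic (the per-worker-year figures come from the Lipsitch excerpt above; the 20 full-time workers and the independence of worker-years are my own simplifying assumptions):

```python
# Back-of-the-envelope check of the gain-of-function risk estimate.
# Inputs from the quoted Lipsitch excerpt (NIAID data):
p_infection_per_worker_year = 0.01        # ~1% accidental lab infection per full-time worker-year
p_spread_low, p_spread_high = 0.05, 0.60  # 5%-60% chance an infection spreads globally

# Pandemic risk per full-time worker-year:
risk_low = p_infection_per_worker_year * p_spread_low    # 0.0005 -> 0.05%
risk_high = p_infection_per_worker_year * p_spread_high  # 0.0060 -> 0.6%

# Assumed headcount (the conservative scenario above), treating
# worker-years as independent:
workers = 20
annual_risk = 1 - (1 - risk_low) ** workers  # ~0.995%, i.e. roughly 1% per year

print(f"risk per worker-year: {risk_low:.2%} to {risk_high:.2%}")
print(f"annual risk with {workers} workers (lower bound): ~{annual_risk:.2%}")
```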

When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors to act against Global Catastrophic Risks were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.

Looking back, it seems like this was easy mode, given that a person in the EA community had done the math. Why didn't the big EA organizations listen more?

Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risks aren't also extremely flawed?

Comments

(Going entirely from Twitter etc and not having read the original papers or grant proposals myself) 

I don't think what the WIV did was central to "gain-of-function" research, at least according to Marc Lipsitch. My understanding is that Shi Zhengli (obviously not an unbiased source) from WIV claims that their work isn't gain-of-function because they were studying intermediate hosts, rather than deliberately trying to make pathogens more virulent or transmissible.*

My own opinion is that GoF has become ill-defined and quite political, especially these days, so we have to be really careful about precisely what we mean when we say "GoF".

I realize that this sounds like splitting hairs, but the definitional limits are important, because Lipsitch's 2014 paper(s) about the dangers of GoF were predicated on a narrow definition of GoF (the clearest-cut cases/worst offenders), while the claims about lab escape, if true, come from a broader model of GoF.

(Two caveats:

1) I want to be clear that I personally think that whether it's called GoF or not, studying transmission from intermediate hosts is likely a bad idea at current levels of lab safety.
2) I don't feel particularly qualified to judge this.)

*I wanted to find the source but couldn't after 3 minutes of digging. Sorry.

Is the claim here that EA orgs focusing on GCRs didn't think GoF research was a serious problem and consequently didn't do enough to prevent it, even though they easily could have if they had just tried harder?
 

My impression is that many organisations and individual EAs were both concerned about risks due to GoF research and were working on trying to prevent it. A postmortem about the strategies used seems plausibly useful, as does a retrospective on whether it should have been an even bigger focus, but the claim as stated above is, I think, false, and probably unhelpful.
 

Why didn't the big EA organizations listen more?

 

I realise the article excerpt you quoted was not meant as a precise estimate. Marc and Thomas also say:

The record of laboratory incidents and accidental infections in biosafety level 3 (BSL3) laboratories provides a starting point for quantifying risk. Concentrating on the generation of transmissible variants of avian influenza, we provide an illustrative calculation of the sort that would be performed in greater detail in a fuller risk analysis. Previous publications have suggested similar approaches to this problem

...

These numbers should be discussed, challenged, and modified to fit the particularities of specific types of PPP experiments.

So it looks like the calculation above was just an illustrative example, and EA did not have sufficient data to come to conclusions. Is there any other part of the article that leads you to believe the authors had strong faith in their numbers?

 

Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risks aren't also extremely flawed?

What did EA get wrong exactly? I guess they made rational decisions in a situation of extreme uncertainty.

Statistical estimation with little historical data is likely to be inaccurate. A virus leak had not turned into a pandemic before.

Furthermore, even accurate estimates will sometimes be followed by a bad outcome. If you throw 100 dice enough times, you will eventually get all 1s.
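For illustration, a toy calculation using the post's own ~1%/year figure (assuming, hypothetically, that the risk is constant and independent across years):

```python
# Toy illustration: even a small, correctly estimated annual risk
# becomes likely over enough years (assuming independence across years).
annual_risk = 0.01  # the post's ~1% per year estimate

for years in (10, 30, 70):
    p_at_least_once = 1 - (1 - annual_risk) ** years
    print(f"over {years} years: {p_at_least_once:.0%} chance of at least one pandemic")
```

So an estimate can be entirely correct and a pandemic can still occur on someone's watch.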

So it looks like the calculation above was just an illustrative example, and EA did not have sufficient data to come to conclusions. Is there any other part of the article that leads you to believe the authors had strong faith in their numbers?

Generally, if you don't have strong faith in the numbers, the way to deal with that is to study the question more. I was under the impression that understanding global catastrophic risks is the point of why we have organizations like FLI.

Even if they didn't accept the numbers, the task for an organization like FLI would be to make their own estimate.

To go a bit into history: the reason the moratorium existed in the first place was that, within the span of a few weeks in 2014, some 75 US scientists at the CDC were potentially exposed to anthrax and FDA employees found 16 forgotten vials, including vials of smallpox, in storage. Those incidents were what weakened the opposition to the moratorium enough to get it passed.

When the evidence for harm is so strong that it forces the hand of politicians, it seems to me a reasonable expectation that organizations whose mission is to think about global catastrophic risks analyse the harm and have a public position on what they think the risk is. If that's not what organizations like FLI are for, what are they for?

If that's not what organizations like FLI are for, what are they for?

 

They do their best to gather data, predict events on the basis of the data, and give recommendations. However, data is not perfect, models are not a perfect representation of reality, and recommendations are not necessarily unanimous. To err is human, and mistakes are possible, especially when the foundations of the applied processes contain errors.

Sometimes people just do not have enough information, and certainly nobody can gather information if the data does not exist. Still, a decision needs to be made, at least between action and inaction, and a data-supported expert guess is better than a random guess.

Given a choice, would you prefer nobody carried out the analysis, with no possibility of improvement? Or would you still let the experts do their job, with a reasonable expectation that most of the time, the problems are solved and the human condition improves?

What if their decision had only a 10% chance of being better than a decision taken without carrying out any analysis? Would you seek expert advice to improve the odds of success, if that was your only option?

the big EA organizations that promise their donors to act against Global Catastrophic Risks were asleep and didn't react. [...]

Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risks aren't also extremely flawed?

Although I can't say I follow GCR organizations' work closely, I'm unclear on whether or to what extent they or EAs in general "got this so wrong" or "were asleep and didn't react." I recognize it isn't always easy to demonstrate an absence/negative, but I really think you ought to provide some form of evidence or explanation for that claim, along with a reasonable standard for success vs. failure. To me, your post essentially can be summarized as "COVID-19 was due to a lab leak; GCR orgs/EAs are supposed to try to prevent pandemics, but a pandemic happened anyway, so how can we trust them?" Among other issues, this line of reasoning applies an excessively high standard for judging the efforts of GCR orgs/EAs: it's akin to saying "Doctors are supposed to help people, but some people in this hospital ended up dying anyway, so how can we trust doctors?"


When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors to act against Global Catastrophic Risks were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.

Looking back, it seems like this was easy mode, given that a person in the EA community had done the math. Why didn't the big EA organizations listen more?

Can you describe in greater detail what the world looks like in which big EA organizations did do the thing you wish they had done? And on what features, specifically, does our current world diverge from that one? What is your anticipation of experience?

The West got China to stop cloning humans by saying that it puts the Chinese scientific community in a negative light and the Chinese care about 'face'. 

With more attention on the issue, we likely would have found that the NIH continued to fund the gain-of-function research of Baric and Shi in violation of the moratorium, and that would have been the basis for making it a public scandal.

If the research had been associated with scandal, the Chinese government might have wanted to distance itself from gain-of-function research and stopped it from happening at the Wuhan lab. The Chinese government cares about its scientific community being respected and about that community helping with economic growth, and gain-of-function research did neither, so making it clear that the research leads to disrespect might have been enough to prevent this pandemic.

It's worth noting that this is the second pandemic within 50 years caused by a Chinese lab leak, and that a separate lab leak in China in 2019 infected more than 1,000 people.

1% per year seems to be on the conservative side. 
