
There is an argument that AI will have a Sputnik moment, an event that triggers a calamitous race towards dangerous AGI.

This Sputnik moment may have already happened. OpenAI's decision to develop and release various forms of GPT has triggered Google to take on more risk. Other actors may do the same.

There is the parallel argument that AI safety will also have a Sputnik moment. This event or crisis would lead to a massive investment in AI safety engineering, research and regulation.

Here is one candidate (or perhaps 40,000 candidates) for AI safety's Sputnik:

AI suggested 40,000 new possible chemical weapons in just six hours (The Verge)

It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. Presenting at a biological arms control conference, researchers put AI normally used to search for helpful drugs into a kind of “bad actor” mode to show how easily it could be abused.

All the researchers had to do was tweak their methodology to seek out, rather than weed out, toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence.

The paper is titled "Dual use of artificial-intelligence-powered drug discovery": https://doi.org/10.1038/s42256-022-00465-9

...we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century — a few salt-sized grains of VX (6–10 mg) is sufficient to kill a person. Other nerve agents with the same mechanism such as the Novichoks have also been in the headlines recently and used in poisonings in the UK and elsewhere.

In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents...

...this area is poorly regulated, with few if any checks to prevent the synthesis of new, extremely toxic agents that could potentially be used as chemical weapons. Importantly, we had a human in the loop with a firm moral and ethical ‘don’t-go-there’ voice to intervene. But what if the human were removed or replaced with a bad actor? With current breakthroughs and research into autonomous synthesis, a complete design–make–test cycle applicable to making not only drugs, but toxins, is within reach. Our proof of concept thus highlights how a nonhuman autonomous creator of a deadly chemical weapon is entirely feasible...

As responsible scientists, we need to ensure that misuse of AI is prevented, and that the tools and models we develop are used only for good.
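To make concrete what “tweaking the methodology” means here, below is a minimal, runnable sketch of the inverted-objective idea. Everything in it is an invented stand-in (the generator, the predictors, and the threshold are stubs returning random scores), not the authors' actual model or code.

```python
import random

# Hypothetical sketch of the inverted-objective idea: the same
# generate-and-score loop used in drug discovery, with the sign of the
# toxicity term flipped. All functions below are invented stubs, not
# the paper's actual pipeline.

def generate_candidates(n=10):
    """Stand-in for a generative model proposing candidate molecules."""
    return [f"mol_{random.randrange(10**6)}" for _ in range(n)]

def predict_bioactivity(molecule):
    """Stand-in for a model predicting therapeutic activity."""
    return random.random()

def predict_toxicity(molecule):
    """Stand-in for a learned toxicity model (e.g. trained on LD50 data)."""
    return random.random()

def score(molecule, toxicity_weight):
    # Normal drug discovery uses toxicity_weight < 0, so toxic
    # candidates are penalized and weeded out. Flipping the sign turns
    # the same machinery into a search *for* toxicity.
    return predict_bioactivity(molecule) + toxicity_weight * predict_toxicity(molecule)

def search(rounds=1000, threshold=1.5, toxicity_weight=+1.0):
    kept = []
    for _ in range(rounds):
        for mol in generate_candidates():
            if score(mol, toxicity_weight) >= threshold:
                kept.append(mol)  # retain high-scoring (here, high-toxicity) designs
    return kept

if __name__ == "__main__":
    print(len(search()), "candidates above threshold")
```

In this toy framing, the only change between the benign and the malign search is the sign of a single weight, which is what makes the dual-use problem the authors describe so stark.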


Comments (2)

As someone who works in materials simulation, I find this result fairly unsurprising. It's the first step in a high-throughput materials discovery process, which works roughly like the sketch below. If you want a material with desired properties (bandgap, conductivity, etc.), you use a rough simulation model to sift through hundreds of thousands of candidate materials on one metric, giving you 40,000 candidates; then sift through those on another metric, giving 1,000 candidates; and so on, until you have a few candidates to test empirically, many of which will not actually work because of the approximations made in the search.
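As a rough illustration of that funnel, here is a sketch with invented properties, stage names, and thresholds, chosen only to reproduce the 200,000 → 40,000 → 1,000 → handful shape, not taken from any real screening pipeline:

```python
import random

def screening_funnel(candidates, stages):
    """Apply progressively stricter filter stages, reporting attrition."""
    for name, keep in stages:
        candidates = [c for c in candidates if keep(c)]
        print(f"{name}: {len(candidates)} candidates remain")
    return candidates

if __name__ == "__main__":
    random.seed(0)
    # 200,000 fake candidates with two made-up property scores each.
    pool = [{"bandgap": random.random(), "stability": random.random()}
            for _ in range(200_000)]
    survivors = screening_funnel(pool, [
        # Each stage is a cheap, approximate screen; thresholds are
        # invented to give the 40,000 -> 1,000 -> handful attrition.
        ("rough bandgap screen", lambda c: c["bandgap"] > 0.80),     # ~40,000 left
        ("stability screen",     lambda c: c["stability"] > 0.975),  # ~1,000 left
        ("joint refinement",     lambda c: c["bandgap"] * c["stability"] > 0.995),
    ])
    # The few survivors still need empirical testing, and many will
    # fail there because every screen used approximate models.
```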

The point being: having 40,000 “potentially lethal” molecules is not the same as having 40,000 new chemical weapons. If pursued, it might lead to discovering a handful of chemicals deadlier than the current best, after an extensive research project. That's not to say this isn't a bad and concerning use of AI!

As for a “Sputnik” moment: we already have highly deadly chemical weapons like VX, with well-established manufacturing methods, and VX has only actually been used in assassinations and in one possible attack on insurgents. I'm unclear how a version of VX that is 2x or even 10x as deadly would add much military value when it's competing against things like bombs and guns.

Fears over AI are based on a fallacy: that whatever an AI system outputs, humans will use it regardless. This is obviously not true; humans are not slaves to AI masters and can choose to use, ignore, or question the outputs. Scientists are not solely responsible for how their discoveries are used; we all are. Scientists should be open and explain what they are doing and discovering, but we should still boldly go, or is that go boldly, into the future.

The post is concerned about deadly chemical weapons. Well, if you speak to a toxicologist, I'm sure they will tell you that anything can kill you; it just depends on the dose. The world is already full of toxins.
