Foundations of Evil: The Fear of Weapons of Mass Destruction Designed by AI


A growing number of AI models can be put to purposes their designers didn’t envision. Does that include heinous deeds?

The fear: Foundation models have proven adept at interpreting human language. They’ve also proven their worth in deciphering the structural languages of biology and chemistry. It’s only a matter of time before someone uses them to produce weapons of mass destruction.

Horror stories: Researchers demonstrated how an existing AI system can be repurposed to design chemical weapons.

  • In March, researchers from Collaborations Pharmaceuticals fine-tuned a drug-discovery model on a dataset of toxic molecules.
  • The original model ranked pharmaceutical candidates by predicted toxicity to humans. The researchers reversed the ranking to prioritize the deadliest chemical agents.
  • In six hours, the model designed 40,000 toxic molecules, including known chemical weapons that were not in its training set.
  • The researchers believe that their process would be easy to replicate using open-source models and toxicity data.

Gas masks: In an interview, one of the researchers suggested that developers of general-purpose models, such as the one they used to generate toxic chemicals, should restrict access. He added that the machine learning community should institute standards for instruction in chemistry that inform budding scientists about the dangers of misusing research.

Facing the fear: It’s hard to avoid the conclusion that the safest course is to rigorously evaluate the potential for harm of all new models and restrict those deemed dangerous. Such a program is likely to meet resistance from scientists who value free inquiry and businesspeople who value free enterprise, and it might have limited impact on threats that weren’t identified when a model was created. Europe is taking a first step with its regulation of so-called general-purpose AI. However, without a broad international agreement on what counts as dangerous technology and how it should be controlled, people in other parts of the world will be free to ignore such rules. Considering the challenges, perhaps the best we can do is to work proactively and continually to identify potential misuses and ways to thwart them.
