Toward Managing AI Bio Risk: Over 150 scientists commit to ensuring AI safety in synthetic biology research.

Scientists pledged to control uses of AI that could produce potentially hazardous biological materials.

What’s new: More than 150 biologists in Asia, Europe, and North America signed a voluntary commitment to internal and external oversight of machine learning models that can be used to design proteins. 

How it works: The scientists made 10 voluntary commitments regarding synthetic biology research. They promised broadly to avoid research likely to enable harm and to promote research that responds to infectious disease outbreaks or similar emergencies.

  • The signatories committed to evaluating the risks of AI models that generate protein structures based on user-defined characteristics such as shape or length. They also promised to improve methods for evaluating and mitigating risks.
  • They vowed to acquire synthetic DNA — fabricated gene sequences that can instruct cells to produce proteins designed by AI — only from providers that rigorously screen the DNA for the potential to create hazardous molecules (a toy sketch of such screening follows this list). They agreed to support development of new screening methods.
  • They promised to disclose potential benefits, risks, and efforts to mitigate the risks of their research. They pledged to review the capabilities of synthetic biology at regular, secure meetings and report unethical or concerning research practices.
  • They also agreed to revise their commitments “as needed.”
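To make the screening commitment concrete: commercial DNA providers typically compare ordered sequences against databases of sequences of concern. The sketch below is a deliberately simplified, hypothetical illustration of that idea in Python; the HAZARD_DB entries, the k-mer size K, and the exact-match heuristic are stand-ins for the curated databases and alignment-based homology tools (such as BLAST) that real screening pipelines use.

```python
# Toy illustration of screening a synthetic-DNA order against sequences of
# concern. Real providers use homology search (e.g., BLAST) against curated
# databases; HAZARD_DB and K here are hypothetical placeholders.

# Placeholder hazard list; the entry is illustrative, not a real sequence of concern.
HAZARD_DB = {
    "toxin_fragment_example": "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA",
}

K = 12  # shared-subsequence window; an illustrative choice, not a standard


def kmers(seq: str, k: int) -> set:
    """Return all length-k substrings (k-mers) of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def screen_order(order_seq: str, k: int = K) -> list:
    """List hazard entries sharing at least one k-mer with the ordered sequence."""
    order_kmers = kmers(order_seq, k)
    return [name for name, ref in HAZARD_DB.items()
            if order_kmers & kmers(ref, k)]


if __name__ == "__main__":
    # This order embeds the example hazard fragment, so it gets flagged.
    hits = screen_order("CCATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATT")
    print("flagged for review:", hits)
```

In practice, exact k-mer matching is far too crude on its own: real pipelines rely on alignment-based homology search plus human review of borderline hits, which is why the pledge pairs provider screening with support for developing better screening methods.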

Behind the news: The potential role of AI in producing bioweapons is a major focus of research in AI safety. The current pledge arose from a University of Washington meeting on responsible AI and protein design held late last year. The AI Safety Summit, which took place at around the same time, also addressed the topic, and Helena, a think tank devoted to solving global problems, convened a similar meeting in mid-2023. 

Why it matters: DeepMind’s AlphaFold, which predicts the structures of proteins, has spawned models that enable users to design proteins with specific properties. Their output could help scientists cure diseases, boost agricultural production, and craft enzymes that aid industrial processes. However, their potential for misuse has led to scrutiny by national and international organizations. The biology community’s commitment to use such models safely may reassure the public and forestall onerous regulations.

We’re thinking: The commitments are long on general principles and relatively short on concrete actions. We’re glad they call for ongoing revision and action, and we hope they lead to the development of effective safeguards.
