New Clarity on Rules for Medical AI: FDA Guidance on AI Medical Devices for 2022

The United States paved the way for regulating AI systems in healthcare.

What's new: The U.S. Food and Drug Administration (FDA) interpreted existing rules that govern health-related software to include some machine learning algorithms.

What they said: The FDA requires that automated decision-making software meet the same standards as medical devices. The new guidance clarifies which AI systems fall under this designation. Manufacturers of medical devices must submit technical and performance data that demonstrate safety and effectiveness. Makers of medical devices that critically support or pose a potential risk to human life must submit laboratory and clinical trial results and gain explicit approval.

  • Systems to be regulated as medical devices include those used for time-sensitive decision-making, intended to replace a healthcare provider’s judgment, or designed to provide a specific directive for prevention, diagnosis, or treatment.
  • The guidance lists 34 examples of systems the FDA intends to regulate, including those that analyze medical images or signals from diagnostic devices, diagnose respiratory illness, forecast the risk of opioid addiction, estimate the severity of a heart attack, and estimate the best time for a Cesarean section.
  • The rules don’t cover systems that supply information without recommending care decisions. This includes systems that produce lists of diagnostic, follow-up, or treatment options; those that evaluate interactions among drugs and allergies; or those that generate patient discharge papers.
  • Developers who want their systems exempted from the medical-device requirements must provide regulators and users with plain-language descriptions of their algorithm’s logic and methods (including machine learning techniques), data (including collection sites, demographics, and practices), and results of clinical studies.

Behind the news: The guidance seeks to comply with the 21st Century Cures Act, a 2016 law that aimed to accelerate innovation in medical devices. The American Medical Informatics Association had petitioned regulators to clarify the law on several fronts.

  • The new guidance met some of the association’s requests, for example by explaining what should be included in plain-language descriptions and by providing examples of systems that would and wouldn’t fall under the law.
  • However, it left other requests unaddressed. For instance, it did not define the difference between software that “informs” clinical management and software that “drives” it.

Why it matters: Regulators have struggled to apply existing oversight frameworks to machine learning algorithms, whose behavior can change with ongoing training and whose output often can’t be clearly explained. The government’s new interpretation is a substantial step toward rules that protect patients without inhibiting innovation.

We're thinking: We welcome regulation of AI systems, particularly when they're involved in life-and-death decisions. However, clarity is paramount. To the extent that the line between software that “informs” clinical management and software that “drives” it remains vague, the new guidance highlights the need for caution. On the plus side, it will give many AI developers a clearer target to aim for.
