Cancer in the Crosshairs

An AI model beat humans in diagnosing breast cancer from X-rays, but critics found the study lacking.

Breast cancer screening

Computer vision has the potential to spot cancer earlier and more accurately than human experts can. A new system surpassed human accuracy in trials, but critics aren’t convinced.

What’s new: A computer vision model for diagnosing breast cancer outperformed radiologists in the U.S. and UK, according to a study published in Nature. The announcement, however, was met with skepticism from some experts.

How it works: Researchers at Google Health, DeepMind, and other organizations trained a model on 76,000 X-ray images from one U.S. clinic and 30,000 from two UK screening centers. Each image came with data from a follow-up visit at least a year later, when doctors either confirmed or ruled out a tumor. The researchers graded the model’s accuracy against average diagnostic accuracy in each country’s health care system, comparing false negatives and false positives; a rough sketch of that comparison appears after the list below.

  • A single radiologist had checked U.S. mammograms. Compared with the radiologist, the model produced 9.4 percent fewer false negatives and 5.7 percent fewer false positives.
  • In the UK, two human radiologists typically screened each mammogram. Compared with that more rigorous system, the model produced 2.7 percent fewer false negatives and 1.2 percent fewer false positives.
  • The researchers also recruited six U.S. radiologists to analyze 500 of the images. The model outperformed that panel, particularly with respect to more invasive cancers. But it also missed a tumor that all six radiologists found.
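
To make those comparisons concrete, here’s a minimal sketch (not the study’s code) of how false negative and false positive rates can be tallied when each mammogram flag is checked against a follow-up-confirmed label. The function name and the example arrays are placeholders.

```python
import numpy as np

def fn_fp_rates(y_true, y_pred):
    """Return (false negative rate, false positive rate).

    y_true: 1 if a follow-up visit confirmed cancer, else 0.
    y_pred: 1 if the reader (model or radiologist) flagged the mammogram, else 0.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    fn_rate = np.mean(~y_pred[y_true])   # cancers the reader missed
    fp_rate = np.mean(y_pred[~y_true])   # healthy cases the reader flagged
    return fn_rate, fp_rate

# Comparing two readers on the same cases (placeholder data):
labels      = np.array([1, 1, 0, 0, 1, 0])
model_flags = np.array([1, 0, 0, 0, 1, 1])
human_flags = np.array([1, 0, 0, 1, 0, 1])

model_fn, model_fp = fn_fp_rates(labels, model_flags)
human_fn, human_fp = fn_fp_rates(labels, human_flags)
print(f"False negatives: model {model_fn:.1%} vs. human {human_fn:.1%}")
print(f"False positives: model {model_fp:.1%} vs. human {human_fp:.1%}")
```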

Yes, but: The study faced criticism that the dataset, model, and procedural details were not available to researchers aiming to reproduce its results. Moreover, experts said the images used in the new study didn’t adequately represent the at-risk population, according to the Advisory Board, a healthcare consultancy. Incidence of breast cancer in the sample dataset was higher than average, and the images weren’t annotated with the patients’ genetic heritage — which could skew the results, because some ethnic groups are at greater risk of developing tumors.

Behind the news: Google’s study overshadowed earlier results from NYU, where researchers trained a similar model to detect cancer in mammograms. Their model scored highly on images that had been verified independently, and it matched the performance of a panel of 12 radiologists. The researchers also found that a hybrid model — which averaged a human radiologist’s decision with the model’s prediction — outperformed either one separately.
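
The hybrid approach amounts to a weighted average of two risk scores. Below is a minimal sketch under the assumption that the radiologist’s assessment and the model’s output are both expressed on a 0-to-1 scale; the function name and the equal weighting are illustrative, not NYU’s actual scheme.

```python
import numpy as np

def hybrid_score(radiologist_score, model_probability, human_weight=0.5):
    """Blend a human reader's suspicion score with the model's predicted
    malignancy probability. Assumes both lie on a common 0-1 scale."""
    return (human_weight * np.asarray(radiologist_score)
            + (1.0 - human_weight) * np.asarray(model_probability))

# Example: the reader is fairly suspicious (0.75) while the model is not (0.25).
# An equal-weight blend lands at 0.5, which a clinic could route to follow-up.
print(hybrid_score(0.75, 0.25))  # 0.5
```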

Why it matters: Worldwide, breast cancer accounts for 12 percent of all cancer cases. The disease has been on the rise since 2008, with confirmed cases increasing by 20 percent and mortality by 14 percent. Meanwhile, the UK suffers from a shortage of trained radiologists. Effective AI-driven detection could save countless lives.

We’re thinking: Google and NYU are both making strides in computer vision for medical diagnosis, though clearly Google has a much larger PR team. We urge reporters to cover a diverse range of AI projects.
