
A model designed to assess medical patients’ pain levels matched the patients’ own reports better than doctors’ estimates did — when the patients were Black.

What’s new: Black people who suffer from osteoarthritis, or loss of cartilage in the joints, tend to report higher levels of pain than White patients who have the same condition. To understand why, researchers at Microsoft, Stanford University, and other institutions trained a model to predict the severity of a patient’s pain from a knee x-ray. The model predicted self-reports by Black patients more accurately than a grading system commonly used by radiologists.

How it works: The researchers began with a ResNet-18 pretrained on ImageNet. They fine-tuned it to predict pain levels from x-rays using 25,049 images and corresponding pain reports from 2,877 patients. Sixteen percent of the patients were Black.

  • The researchers evaluated x-rays using their model and also asked radiologists to assign them a Kellgren-Lawrence grade, a system for visually assessing the severity of joint disease.
  • Compared with the Kellgren-Lawrence grades, the model’s output showed 43 percent less disparity between pain reported by Black and White patients.
  • The researchers couldn’t determine what features most influenced the model’s predictions.

Behind the news: The Kellgren-Lawrence grade is based on a 1957 study of a relatively small group of people, nearly all of whom were White. The system often underestimates pain levels reported by Black patients.

Why it matters: Chronic knee pain hobbles millions of Americans, but Black patients are less likely than White ones to receive knee replacement surgery. Studies have shown that systems like the Kellgren-Lawrence grade often play an outsize role in doctors’ decisions to recommend surgery. Deep learning offers a way to narrow this gap in care and could be adapted to address other healthcare disparities.

We’re thinking: Algorithms used in healthcare have come under scrutiny for exacerbating bias. It’s good to see one that diminishes it.
