Machine learning stumbles when diagnosing medical conditions from imagery if little labeled data is available. In small-data settings, supervised learning often can't train an accurate classifier. But a twist on such methods dramatically boosts accuracy, without additional hand-labeled data.
What’s new: Researchers at Stanford developed an improved diagnostic model for bicuspid aortic valve (BAV), a dangerous heart deformity, based on a database of MRI videos.
Key insight: Medical data typically is aggregated from many doctors and clinics, leading to inconsistent labels. Jason Fries and his teammates sidestepped that problem via weakly supervised learning, an emerging approach that doesn’t rely on hand-labeled data. The trick is to use a separate model to produce a preliminary label and confidence score for every training example.
How it works: The labeler predicts initial diagnoses along with confidence levels, creating imprecise, or noisy, labels. A neural network trained on these noisy labels then learns to predict a final diagnosis from the raw MRI.
- The labeler includes five algorithms that don’t learn. Rather, a data scientist sets them manually to compute the aorta’s area, perimeter length, resemblance to a circle, amount of blood flow, and ratio between area and perimeter. Using these values, each algorithm produces a prediction for every frame.
- A simple probabilistic model maps these per-frame predictions to a noisy label and a confidence score.
- The noisy labels are used for training, with the loss function weighting higher-confidence labels more heavily.
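The pipeline above can be sketched in miniature. This is an illustrative simplification, not the paper's implementation: the thresholds, per-heuristic accuracies, and BAV prevalence below are invented, and the naive-Bayes vote combination stands in for the learned probabilistic label model the researchers used.

```python
import math

# --- Step 1: five hand-set labeling heuristics (no learning). Each votes on
# one feature of the aortic valve per frame: 1 = BAV, 0 = normal, None = abstain.
# All thresholds are made up for illustration.
def lf_area(f):        return 1 if f["area"] < 1.5 else 0
def lf_perimeter(f):   return 1 if f["perimeter"] > 5.0 else 0
def lf_circularity(f): return 1 if f["circularity"] < 0.8 else 0
def lf_flow(f):        return 1 if f["flow"] < 0.4 else 0
def lf_ratio(f):       return 1 if f["area"] / f["perimeter"] < 0.3 else None

LFS = [lf_area, lf_perimeter, lf_circularity, lf_flow, lf_ratio]
ACC = [0.8, 0.7, 0.75, 0.65, 0.7]  # assumed heuristic accuracies
PRIOR_BAV = 0.02                   # assumed prevalence

# --- Step 2: combine the votes into a noisy probabilistic label.
# Naive Bayes over the votes; the posterior doubles as the confidence.
def noisy_label(frame):
    log_odds = math.log(PRIOR_BAV / (1 - PRIOR_BAV))
    for lf, acc in zip(LFS, ACC):
        vote = lf(frame)
        if vote is None:
            continue  # abstaining heuristics contribute no evidence
        p_given_bav = acc if vote == 1 else 1 - acc
        p_given_norm = 1 - acc if vote == 1 else acc
        log_odds += math.log(p_given_bav / p_given_norm)
    return 1 / (1 + math.exp(-log_odds))  # P(BAV | votes)

# --- Step 3: noise-aware loss for the downstream network. Expected
# cross-entropy under the soft label, so confident labels dominate training.
def noise_aware_loss(pred, p_bav):
    eps = 1e-9
    return -(p_bav * math.log(pred + eps)
             + (1 - p_bav) * math.log(1 - pred + eps))
```

A frame whose features trip all five heuristics gets a label well above the prior, while a clearly normal frame gets a near-zero probability; the network never needs a hand-assigned ground-truth label for either.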
Results: The weakly supervised diagnostic model diagnosed BAV correctly in 83 percent of patients. The previous fully supervised method achieved only 30 percent.
Why it matters: BAV is severely underdiagnosed (only one-sixth of sufferers had been labeled with the correct diagnosis in the training data set). Its day-to-day effects include fatigue, shortness of breath, and chest pain. Moreover, using long-term data on health outcomes, researchers discovered that patients whom their model diagnosed with BAV had twice the risk of a major cardiac event later in life. Having a correct diagnosis up-front clearly could be a life-saver.
Takeaway: General practitioners aren’t likely to detect rare, hidden conditions as accurately as specialists. But patients don’t schedule appointments with specialists unless they get a GP’s referral. Models like this one could help solve that chicken-or-egg problem by bringing powerful diagnostic tools to common checkups.