For people with neurological disorders like epilepsy, attaching sensors to the scalp to measure electrical currents within the brain is benign. But interpreting the resulting electroencephalogram (EEG) graphs can give doctors a headache. Deep learning could help diagnose such conditions.

What’s new: Led by Hubert Banville, researchers at Université Paris-Saclay, InteraXon Inc., University of Helsinki, and Max Planck Institute applied self-supervised learning to extract features from unlabeled EEGs.

Key insight: EEGs labeled to identify stages of sleep, abnormal brain activity, and the like are hard to come by, but unlabeled data is plentiful. The self-supervised technique known as contrastive learning has potential in this domain.

How it works: The authors extracted features from unlabeled EEGs using three contrastive learning techniques: contrastive predictive coding (CPC) and two methods of their own invention. They used data from the Physionet Challenge 2018 (PC18), which labels sleep stages, and TUHab, which labels various types of abnormal brain activity.

  • An EEG is a time series of sensor measurements. CPC extracts features from an unlabeled sequence by training a model to distinguish the measurements that actually follow a given stretch of the sequence from measurements drawn from elsewhere (the first sketch after this list shows the idea).
  • In the technique known as relative positioning, a model samples a single sensor measurement, called the anchor, and a random measurement from elsewhere in the sequence. It extracts features by learning to determine whether the random sample falls within a preset time window around the anchor (between 1 and 40 minutes for sleep stage classification; second sketch below).
  • The technique called temporal shuffling trains a model to recognize the order in which samples were collected. The model samples two endpoints of a time window and a third sample from anywhere in the sequence. It extracts features by learning to classify whether the third sample came between the first two (third sketch below).
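
Here's a minimal CPC-style sketch in PyTorch to make the first pretext task concrete. It's our illustration, not the authors' code: the encoder architecture, window sizes, and the use of other sequences in the batch as negatives are all stand-ins.

```python
# Minimal CPC-style sketch (illustrative, not the authors' implementation).
# An encoder embeds each EEG window, a GRU summarizes the past windows, and
# an InfoNCE loss makes the predicted next-window embedding pick out the
# true one among the other sequences in the batch (the negatives).
import torch
import torch.nn as nn
import torch.nn.functional as F

class WindowEncoder(nn.Module):
    """Embeds one EEG window (channels x samples) into a feature vector."""
    def __init__(self, n_channels=2, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, stride=5), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, stride=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, x):                    # x: (batch, channels, samples)
        return self.net(x)                   # -> (batch, dim)

def info_nce(pred, target):
    """Each predicted vector must match its own target (the diagonal)
    rather than the targets of the other sequences in the batch."""
    logits = pred @ target.t()               # (batch, batch) similarities
    return F.cross_entropy(logits, torch.arange(len(pred)))

enc = WindowEncoder()
gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
head = nn.Linear(64, 64)                     # predicts the next embedding

seq = torch.randn(8, 10, 2, 3000)            # fake unlabeled EEG sequences
emb = torch.stack([enc(seq[:, t]) for t in range(10)], dim=1)  # (8, 10, 64)
ctx, _ = gru(emb[:, :-1])                    # context from the first 9 windows
loss = info_nce(head(ctx[:, -1]), emb[:, -1])  # score the 10th window
loss.backward()
```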
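
Relative positioning boils down to a pair-sampling rule plus a binary classifier over two embeddings. The sketch below runs under our own assumptions: offsets are in samples rather than the paper's 1-to-40-minute windows, and the encoder is a toy stand-in.

```python
# Relative-positioning sketch (illustrative names and sizes, not the
# authors' code). An anchor window and a second window share an encoder;
# a logistic head predicts whether they were recorded close together.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_rp_pair(recording, win=3000, tau_pos=6000, tau_neg=24000):
    """recording: (channels, T). Returns (anchor, other, label): label is 1
    when the other window starts within tau_pos of the anchor, 0 when it
    is farther than tau_neg (offsets in samples; the paper uses minutes)."""
    T = recording.shape[1]
    i = torch.randint(0, T - win, (1,)).item()
    if torch.rand(1).item() < 0.5:                 # positive: nearby window
        lo, hi = max(0, i - tau_pos), min(T - win - 1, i + tau_pos)
        j, label = torch.randint(lo, hi + 1, (1,)).item(), 1.0
    else:                                          # negative: distant window
        j = i
        while abs(j - i) < tau_neg:
            j = torch.randint(0, T - win, (1,)).item()
        label = 0.0
    return recording[:, i:i + win], recording[:, j:j + win], label

encoder = nn.Sequential(                           # toy window embedder
    nn.Flatten(), nn.Linear(2 * 3000, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 1)                            # near/far classifier

rec = torch.randn(2, 100_000)                      # fake unlabeled recording
anchor, other, y = sample_rp_pair(rec)
ha = encoder(anchor.unsqueeze(0))
hb = encoder(other.unsqueeze(0))
loss = F.binary_cross_entropy_with_logits(         # contrast via |ha - hb|
    head(torch.abs(ha - hb)), torch.tensor([[y]]))
loss.backward()
```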
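
Temporal shuffling needs only a different sampler: draw two ordered endpoint windows plus a third, and classify whether the third falls between them. Again, the code below is our sketch, with illustrative names and sizes.

```python
# Temporal-shuffling sketch (illustrative, not the authors' code). Two
# endpoint windows x1 and x3 are drawn in order; a third window x2 either
# falls between them (label 1) or outside that span (label 0).
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_ts_triple(recording, win=3000, ctx=12000):
    """recording: (channels, T). Returns (x1, x2, x3, label): label is 1
    when x2 was recorded between the ordered endpoints x1 and x3."""
    T = recording.shape[1]
    i = torch.randint(0, T - ctx - win, (1,)).item()
    k = i + ctx                                    # the two ordered endpoints
    if torch.rand(1).item() < 0.5:                 # in-between sample
        j, label = torch.randint(i + 1, k, (1,)).item(), 1.0
    else:                                          # out-of-span sample
        j = i
        while i <= j <= k:
            j = torch.randint(0, T - win, (1,)).item()
        label = 0.0
    return (recording[:, i:i + win], recording[:, j:j + win],
            recording[:, k:k + win], label)

encoder = nn.Sequential(                           # toy window embedder
    nn.Flatten(), nn.Linear(2 * 3000, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(2 * 64, 1)                        # in-between classifier

rec = torch.randn(2, 100_000)                      # fake unlabeled recording
x1, x2, x3, y = sample_ts_triple(rec)
h1, h2, h3 = [encoder(x.unsqueeze(0)) for x in (x1, x2, x3)]
feats = torch.cat([torch.abs(h1 - h2), torch.abs(h2 - h3)], dim=1)
loss = F.binary_cross_entropy_with_logits(head(feats), torch.tensor([[y]]))
loss.backward()
```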

Results: The authors built simple models based on the extracted features and trained them to classify sleep stages and abnormal brain activity using limited numbers of labeled examples. The three techniques performed equally well. Using 10 percent of the labeled examples, the models achieved a top accuracy of 72.3 percent on PC18 and 79.4 percent on TUHab.
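
The downstream step might look like the sketch below: a linear classifier fit on frozen features. The encoder here is a stand-in for one pretrained on a pretext task above, and the data and sizes are fake.

```python
# Sketch of the downstream step: a linear classifier on frozen features.
# The encoder is a stand-in for a pretrained one; the five-class setup
# mirrors sleep staging, but everything here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                 # pretend this was pretrained above
    nn.Flatten(), nn.Linear(2 * 3000, 64), nn.ReLU(), nn.Linear(64, 64))

windows = torch.randn(100, 2, 3000)      # small labeled subset (e.g., 10%)
labels = torch.randint(0, 5, (100,))     # five sleep stages

with torch.no_grad():                    # the pretrained features stay frozen
    feats = encoder(windows)

clf = nn.Linear(feats.shape[1], 5)       # the "simple model"
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(200):                     # fit only the linear classifier
    opt.zero_grad()
    F.cross_entropy(clf(feats), labels).backward()
    opt.step()
```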

Why it matters: The potential upside of using AI to interpret medical tests like EEGs, where the expertise required to read them is relatively rare and expensive, is driving progress in learning approaches that don’t require so many labels. This work demonstrates progress in reading EEGs, but it comes with a caveat: Features clustered not only around stages of sleep but also the dates when the recordings were made, which suggests that the algorithms recognized the products of particular technicians or equipment. Work remains to either make the AI more robust or eliminate the noise — likely both.

We’re thinking: If you think understanding artificial neural networks is difficult, you should talk with people who study biological neural networks!
