Less Labels, More Learning

Improved small-data performance with combined techniques

In small data settings where labels are scarce, semi-supervised learning can train models by using a small number of labeled examples and a larger set of unlabeled examples. A new method outperforms earlier techniques.

What’s new: Kihyuk Sohn, David Berthelot, and colleagues at Google Research introduced FixMatch, which marries two semi-supervised techniques.

Key insight: The technique known as pseudo labeling uses a trained model’s most confident predictions on unlabeled examples for subsequent supervised training. Consistency regularization penalizes a model if its predictions on two versions of the same data point — say, distorted variations on the same image — are dissimilar. Using these techniques in sequence enables a model to generalize insights gained from unlabeled data.
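
Concretely, the paper rolls both ideas into a single loss on unlabeled examples. This is our transcription of the paper's notation, so treat the details as a sketch: α and A denote weak and strong augmentation, τ the confidence threshold, B the labeled batch size, and μ the ratio of unlabeled to labeled examples per batch.

```latex
\ell_u = \frac{1}{\mu B} \sum_{b=1}^{\mu B}
  \mathbb{1}\!\left[\max(q_b) \ge \tau\right]\,
  \mathrm{H}\!\left(\hat{q}_b,\; p_m\!\left(y \mid \mathcal{A}(u_b)\right)\right),
\quad \text{where } q_b = p_m\!\left(y \mid \alpha(u_b)\right),\ \hat{q}_b = \arg\max q_b
```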

How it works: FixMatch learns from labeled and unlabeled data simultaneously. It learns from a small set of labeled images in typical supervised fashion. It learns from unlabeled images as follows (a code sketch appears after the list):

  • FixMatch modifies unlabeled examples with a simple horizontal or vertical shift, a horizontal flip, or another basic transformation. The model classifies these weakly augmented images. If its confidence exceeds a user-defined threshold, the predicted class becomes a pseudo label.
  • FixMatch generates strongly augmented versions of the pseudo-labeled images by applying either RandAugment (which samples image augmentations randomly from a predefined set) or CTAugment (which learns an augmentation strategy as the model trains). Then it applies Cutout, which blanks out randomly placed square regions of the image.
  • The model then learns to classify each strongly augmented image as the pseudo label assigned to the weakly augmented image it’s based on.
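
Here’s a minimal PyTorch sketch of that unlabeled-data objective — not the authors’ code. `model` is any classifier that returns logits, and `weak_augment` and `strong_augment` are caller-supplied stand-ins for flip-and-shift and RandAugment/CTAugment plus Cutout.

```python
# Minimal sketch of FixMatch's unlabeled-data loss, not the authors'
# implementation. `weak_augment` and `strong_augment` are placeholders
# for the augmentation pipelines described above.
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, images, weak_augment, strong_augment,
                            threshold=0.95):
    # Step 1: pseudo-label weakly augmented images. No gradients flow
    # here, so the predictions act as fixed targets.
    with torch.no_grad():
        weak_logits = model(weak_augment(images))
        probs = F.softmax(weak_logits, dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        # Keep only predictions above the confidence threshold.
        mask = (confidence >= threshold).float()

    # Step 2: classify strongly augmented versions of the same images.
    strong_logits = model(strong_augment(images))

    # Step 3: cross-entropy against the pseudo labels, masked so that
    # low-confidence examples contribute nothing.
    per_example = F.cross_entropy(strong_logits, pseudo_labels,
                                  reduction="none")
    return (per_example * mask).mean()
```

In the paper, this term is simply added to the ordinary supervised cross-entropy on the labeled batch; the fixed confidence threshold (0.95 in the paper’s main experiments) keeps noisy pseudo labels from dominating training early on.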

Results: FixMatch achieved state-of-the-art performance for semi-supervised learning on several benchmarks devised by the researchers. (They removed labels from popular image datasets to create training sets with between 4 and 400 labels per class.) An alternative semi-supervised approach performed slightly better on some benchmarks, though it’s not obvious under what circumstances it would be the better choice.

Why it matters: Google Research has been pushing the envelope of semi-supervised learning for image classification with a series of better and better algorithms. FixMatch outperforms its predecessors in the majority of comparisons, and its simplicity is appealing.

We’re thinking: Small data techniques promise to open the door to many new applications of AI, and we welcome any progress in this area.
