Neuroscientists once thought they could train rats to navigate mazes by color. It turns out that rats don’t perceive colors at all. Instead, they rely on the distinct odors of different colors of paint. New work argues that neural networks are especially prone to this sort of mismatch between what training rewards and what we intend a model to learn.

What’s new: Robert Geirhos, Jörn-Henrik Jacobsen, and Claudio Michaelis led a study of neural network hiccups conducted at the University of Tübingen, the International Max Planck Research School for Intelligent Systems, and the University of Toronto. They argue that many of deep learning’s shortcomings are symptoms of a single underlying problem: shortcut learning.

Key insight: Shortcuts are problem-solving strategies that yield good performance on standard benchmarks but don’t require genuine understanding of the problem, so they transfer poorly to real-world situations.

How it works: The authors identify apparent causes of shortcut learning in neural networks, circumstances that tend to encourage it, and techniques available to discourage it.

  • Dataset bias can cause models to focus on spurious correlations rather than valid relationships. For instance, cows often stand in pastures, so black, white, and green textures can indicate their presence — but a lawn is not an identifying mark of cattle. Models have a hard time learning true bovine characteristics when their training data offers this simpler approach.
  • Training data may be free of spurious correlations and still fail to represent the task at hand. For example, cats have fur while elephants have wrinkled skin, so an animal classifier may wind up as a texture detector instead.
  • To address such issues, the authors propose training and testing on out-of-distribution, augmented, and adversarial examples. If a model misclassifies a test sample that has been altered to change, say, the color of grass from green to brown, it likely relied on a shortcut (as sketched in the example after this list).
  • In the animal classification tasks described above, domain experts can make sure the training set depicts animals in a variety of scenes and includes breeds, such as hairless cats, that exhibit a range of textures.
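
Here is a minimal sketch of that kind of altered-example test, assuming a PyTorch image classifier. The model, data, and specific hue shift are illustrative placeholders, not the authors’ actual evaluation code.

```python
import torch
import torchvision.transforms.functional as TF

def hue_shifted(images, hue_factor=-0.3):
    # Shift the hue of each image in a batch of float tensors in [0, 1],
    # e.g., pushing greens toward brown to perturb background color cues.
    return torch.stack([TF.adjust_hue(img, hue_factor) for img in images])

@torch.no_grad()
def accuracy(model, loader, transform=None):
    # Top-1 accuracy, optionally computed on transformed copies of each batch.
    model.eval()
    correct = total = 0
    for images, labels in loader:
        if transform is not None:
            images = transform(images)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Placeholder model and data so the sketch runs end to end; swap in a real
# classifier and test set in practice.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
images = torch.rand(8, 3, 32, 32)            # float images in [0, 1]
labels = torch.randint(0, 10, (8,))
test_loader = [(images, labels)]

clean_acc = accuracy(model, test_loader)
shifted_acc = accuracy(model, test_loader, transform=hue_shifted)
print(f"clean: {clean_acc:.2f}  hue-shifted: {shifted_acc:.2f}")
```

A large gap between the clean and hue-shifted scores suggests the model leans on color cues (a shortcut) rather than on features that actually define the object.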

Why it matters: The authors shed light on an issue that has troubled machine learning engineers for decades and highlight the lack of robustness of current algorithms. Addressing these issues will be key to scaling up practical neural network deployments.

We’re thinking: Humans also use shortcuts; we’ve all memorized formulas by rote instead of fully understanding them. Our misbehaving models may be more like us than we’d like to admit.
