The Black Box Has Dark Corners

Will we ever understand what goes on inside the mind of a neural network?

The fear: When AI systems go wrong, no one will be able to explain the reasoning behind their decisions. Imperceptible changes to a model’s input will lead, inexplicably, to dramatically different outputs. Seemingly well-designed systems will produce biased results without warning. People will suffer harm without explanation or recourse.
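
To make the first of those worries concrete: a small, carefully chosen perturbation can flip a classifier’s output even though a person can’t see the difference. Below is a minimal illustrative sketch of the idea (it is not drawn from any system discussed in this piece); the PyTorch classifier `model`, input tensor `image`, and integer `label` are hypothetical placeholders.

```python
# Illustrative only: a fast-gradient-sign style perturbation.
# `model`, `image` (C x H x W float tensor), and `label` are hypothetical.
import torch
import torch.nn.functional as F

def imperceptible_perturbation(model, image, label, eps=0.01):
    """Nudge each pixel by at most eps in the direction that raises the loss."""
    model.eval()
    image = image.clone().requires_grad_(True)          # track gradients w.r.t. pixels
    logits = model(image.unsqueeze(0))                   # add a batch dimension
    loss = F.cross_entropy(logits, torch.tensor([label]))
    loss.backward()                                      # gradient of the loss w.r.t. each pixel
    return (image + eps * image.grad.sign()).detach()    # tiny per-pixel shift, bounded by eps
```

With eps kept small enough to be invisible to a human, the perturbed image can nonetheless change the model’s prediction, and the model offers no account of why.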

Behind the worries: Decisions made by neural networks are notoriously difficult to explain. In the real world, they have profoundly affected people’s lives. In the lab, researchers have found it hard to trust their outputs even when they appeared highly accurate.

  • Models deployed by U.S. state governments slashed healthcare benefits for thousands of people living in Arkansas and Idaho. The people affected typically couldn’t figure out why their care was cut. The process for appealing the decision wasn’t clear either.
  • A study of six neural networks designed to enhance low-resolution medical images found that they often altered the images in ways that made them unreliable as diagnostic tools. The authors concluded that deep learning systems provide no clues about the quality of input they require, so developers must tease out the limits experimentally.
  • A deep learning system accurately predicted the onset of psychiatric disorders like schizophrenia based on a patient’s medical record. However, the developers said the model wouldn’t be useful to doctors until there was a clearer understanding of how it made its predictions.

How scared should you be: The inability to explain AI-driven decisions is keeping people from using the technology more broadly. For instance, in a recent survey of UK information technology workers in the financial services industry, 89 percent said that lack of transparency was the primary impediment to using AI. Europe’s General Data Protection Regulation gives citizens the right to obtain information on automated systems that make decisions affecting their lives. AI makers that can’t provide these details about their technology can face steep fines or outright bans.

What to do: Research into explaining neural network outputs has made substantial strides, but much more work is needed. Meanwhile, it’s imperative to establish standard procedures to ensure that models are built and deployed responsibly.
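
As one illustration of that line of research, gradient-based saliency maps are among the simplest explanation techniques: they score each input feature by how strongly it influences the model’s output. The sketch below is a generic example, not a method advocated in this piece, and the pretrained classifier `model` and input tensor `image` are hypothetical placeholders.

```python
# A minimal sketch of a gradient-based saliency map.
# `model` and `image` (C x H x W float tensor) are hypothetical.
import torch

def saliency_map(model, image, target_class):
    """Score each pixel by how sensitive the target class's score is to it."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. pixels
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()                             # gradient of the class score
    return image.grad.abs().amax(dim=0)          # collapse channels into an H x W heatmap
```

Heatmaps like this hint at which parts of an input drove a decision, but they are known to be fragile, which is part of why much more work, and standard procedures for responsible deployment, are still needed.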
