Neural networks are famously bad at interpreting input that falls outside the training set’s distribution, so it’s not surprising that some models are certain that cat pictures show symptoms of Covid-19. A new approach won’t mistakenly condemn your feline to quarantine.

What’s new: Led by Ankur Mallick, researchers at Carnegie Mellon and Lawrence Livermore National Lab developed Probabilistic Neighborhood Components Analysis (PNCA) to help models estimate the confidence of their predictions.

Key insight: Neural networks often show high confidence in predictions that are clearly incorrect, a major issue in areas like healthcare and criminal justice. The problem can fade with enough training data, but it’s pervasive where training data is scarce. Overfitting to limited data contributes to overconfidence, so combining deep learning with probabilistic methods, which are less prone to overfitting, might alleviate it.

How it works: PNCA is a probabilistic version of Neighborhood Components Analysis (NCA), a supervised learning method that trains neural networks to extract features that cluster examples of the same class. NCA determines the class of a novel input by computing the distance between the input’s features and those of each training example. It takes a softmax over the negative distances to obtain the probability that each training example belongs to the same class as the novel input. Practically speaking, NCA is a classification network whose output layer’s weights are fixed by the distance function, though that layer’s size grows with the number of training examples.
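To make that concrete, here’s a minimal sketch of NCA’s classification step in numpy. It assumes features have already been extracted by a trained network; the function and variable names, and the choice of squared Euclidean distance, are our illustration rather than the paper’s exact code.

```python
import numpy as np

def nca_predict(query_feat, train_feats, train_labels, num_classes):
    """Classify a novel input by soft-neighbor voting over training features.

    query_feat: (d,) feature vector extracted from the novel input
    train_feats: (n, d) features extracted from the training set
    train_labels: (n,) integer class labels for the training set
    """
    # Squared Euclidean distance from the query to every training feature.
    dists = np.sum((train_feats - query_feat) ** 2, axis=1)
    # Softmax over negative distances: the probability that each training
    # example shares the query's class (closer feature = higher probability).
    logits = -dists
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Summing neighbor probabilities per class gives the class distribution.
    return np.bincount(train_labels, weights=weights, minlength=num_classes)
```

Note that only the feature extractor has trainable parameters here; the softmax voting step is fixed, which is why NCA behaves like a network whose output layer is pinned by the distance function.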

  • PNCA borrows ideas from Bayesian neural networks, which treat inputs, weights, extracted features, neighborhoods, and class predictions as samples from probability distributions. Working with distributions lets PNCA sharpen its confidence by computing the probability that a particular classification would arise from a given input.
  • The technique estimates the distribution of predicted classes by sampling weights from the weight distribution. Each sampled set of weights extracts a different feature from each training example, so running the usual NCA with each sample yields a classification that reflects the weight distribution.
  • PNCA represents the entire weight distribution by maintaining a set of weight samples. It trains those samples so that the predictions they generate match the training data, updating the weights to minimize the NCA loss (see the sketch after this list).
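Here’s a rough sketch of the prediction step under those assumptions, reusing nca_predict from the sketch above. The weight sampler and the extract function are hypothetical stand-ins for the paper’s components, not its actual API.

```python
import numpy as np

def pnca_predict(query, weight_samples, extract, train_x, train_labels,
                 num_classes):
    """Monte Carlo estimate of PNCA's predictive class distribution.

    weight_samples: list of network weights drawn from the learned distribution
    extract: hypothetical function mapping (weights, inputs) -> features
    """
    preds = []
    for w in weight_samples:
        # Each weight sample extracts different features, so each one
        # yields a different NCA classification of the same query.
        train_feats = extract(w, train_x)
        query_feat = extract(w, query)
        preds.append(nca_predict(query_feat, train_feats,
                                 train_labels, num_classes))
    preds = np.stack(preds)
    # The mean over samples is the prediction; the spread across samples
    # quantifies the model's uncertainty about it.
    return preds.mean(axis=0), preds.std(axis=0)
```

Averaging over samples yields the prediction, while disagreement among samples flags inputs, such as cat photos, that the model shouldn’t classify with high confidence.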

Results: The researchers trained PNCA on a Kaggle dataset of chest x-rays showing Covid-19, and tested it on Covid-V2 and a Cats and Dogs dataset. PNCA achieved accuracy similar to that of other deep learning approaches on Covid-V2 while incorrectly classifying 1,000 of 25,000 cats and dogs as Covid-19 with high confidence. That may seem like poor performance, but the same architecture with a standard supervised learning objective mistook around 2,500 cats and dogs for Covid-19 chest x-rays.

Why it matters: Deep learning’s overconfidence and data hunger limit its practical deployment. PNCA combines deep learning’s powerful feature extraction with probabilistic methods’ ability to quantify uncertainty.

We’re thinking: We’re waiting for a model that can tell us the condition of Schroedinger’s cat.
