Bayes-optimal algorithms always make the best decisions given their training and input, if certain assumptions hold true. New work shows that some neural networks can approach this kind of performance.

What’s new: DeepMind researchers led by Vladimir Mikulik showed that recurrent neural nets (RNNs) with meta-training, or training on several related tasks, behave like Bayes-optimal models.

Key insight: Theory suggests that memory-based models like RNNs become Bayes-optimal given sufficient meta-training. To test this hypothesis, the researchers compared the outputs and internal states of both types of model.

How it works: The researchers meta-trained 14 RNNs on various prediction and reinforcement learning tasks. For instance, to learn to predict the outcome of flipping a biased coin, the model observed flips of coins with various biases. Then they compared each RNN to a known Bayes-optimal solution.

  • Each RNN comprised a fully connected layer, an LSTM layer, and a final fully connected layer (sketched in code after this list). The authors trained each RNN in episodes of 20 time steps, resampling the task-specific variables (such as the bias of the flipped coin) between episodes, for a total of 10 million time steps. The corresponding Bayes-optimal models consisted of simple rules.
  • The authors fed the same input to the RNN and Bayes-optimal models and compared their outputs. For prediction tasks, they measured the KL divergence, a measure of the difference between two probability distributions, between the models’ predictions (see the second sketch after this list). For reinforcement learning tasks, they compared cumulative reward.
  • To compare the models’ internal representations, the authors recorded their hidden states and parameter values and used principal component analysis to reduce the dimensionality of the RNNs’ hidden states to match that of the Bayes-optimal models. Then they trained two fully connected models to map RNN states to Bayes-optimal states and vice versa, and measured the difference using mean-squared error (see the final sketch after this list).
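
The architecture in the first bullet can be sketched as follows. This is a minimal sketch, assuming PyTorch; the class name, hidden size, and the sigmoid output head (which suits the coin-flip prediction task) are illustrative choices, not details from the paper.

```python
import torch
from torch import nn

class MetaRNN(nn.Module):
    """Fully connected layer -> LSTM -> fully connected head, as described above.
    Layer sizes are placeholders, not the paper's."""
    def __init__(self, obs_dim: int, hidden_dim: int = 64, out_dim: int = 1):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, obs, state=None):
        # obs: (batch, time, obs_dim); state carries (h, c) across steps of an episode.
        x = torch.relu(self.encoder(obs))
        x, state = self.lstm(x, state)
        return torch.sigmoid(self.head(x)), state  # e.g. P(heads) at each step
```

Meta-training would roll a model like this out for 20-step episodes, resampling the coin’s bias between episodes.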
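
For the coin-flip task, the output comparison in the second bullet looks roughly like this. The sketch assumes a uniform prior over the coin’s bias, in which case the Bayes-optimal predictor is Laplace’s rule of succession, and uses the Bernoulli form of KL divergence; the function names and episode structure are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_optimal_prediction(flips):
    """Posterior predictive P(heads) under a uniform prior over the coin's bias
    (Laplace's rule of succession)."""
    return (sum(flips) + 1) / (len(flips) + 2)

def bernoulli_kl(p, q, eps=1e-12):
    """KL(p || q) between two Bernoulli distributions given as P(heads)."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def evaluate_episode(model_predict, num_steps=20):
    """Roll out one 20-step episode with a freshly sampled bias and report the
    mean KL divergence between Bayes-optimal and model predictions."""
    bias, flips, divergences = rng.uniform(), [], []
    for _ in range(num_steps):
        p_model = model_predict(flips)            # e.g. the meta-trained RNN
        p_bayes = bayes_optimal_prediction(flips)
        divergences.append(bernoulli_kl(p_bayes, p_model))
        flips.append(int(rng.uniform() < bias))   # observe the next flip
    return float(np.mean(divergences))
```

Passing bayes_optimal_prediction itself as model_predict yields a divergence of zero; a meta-trained RNN would be wrapped the same way, conditioning on the flips seen so far in the episode.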
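
The internal-state comparison in the third bullet can be sketched as below. For brevity, this substitutes a linear readout for the paper’s pair of fully connected mapping networks; rnn_states and bayes_states are placeholder arrays of hidden states recorded over matched time steps.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def compare_states(rnn_states: np.ndarray, bayes_states: np.ndarray):
    """rnn_states: (T, d_rnn) RNN hidden states; bayes_states: (T, d_bayes)
    states of the Bayes-optimal model, recorded on the same inputs."""
    # Reduce the RNN states to the Bayes-optimal model's dimensionality.
    reduced = PCA(n_components=bayes_states.shape[1]).fit_transform(rnn_states)

    # Map RNN states to Bayes-optimal states, and vice versa, then score with MSE.
    to_bayes = LinearRegression().fit(reduced, bayes_states)
    to_rnn = LinearRegression().fit(bayes_states, reduced)
    mse_to_bayes = mean_squared_error(bayes_states, to_bayes.predict(reduced))
    mse_to_rnn = mean_squared_error(reduced, to_rnn.predict(bayes_states))
    return mse_to_bayes, mse_to_rnn
```

Low error in both directions indicates that the two models carry the same task-relevant information in their internal states.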

Results: All RNNs converged to behave indistinguishably from their Bayes-optimal counterparts. For instance, the RNN that learned to predict biased coin flips achieved a KL divergence of 0.006, compared to 3.178 before meta-training. The internal states of the RNNs and Bayes-optimal models matched nearly perfectly, differing in most tasks by a mean-squared error of less than 0.05.

Why it matters: Bayesian models are reputed to be provably optimal and interpretable. Compared to neural nets, though, they often require more engineering and vastly more computational power. This work involved toy problems in which a Bayes-optimal model could be written by hand, but it’s encouraging to find that meta-trained RNNs performed optimally, too.

We’re thinking: Maybe RNNs will become more popular here in the San Francisco Bayes Area.
