Humans understand the world by abstraction: If you grasp the concept of grabbing a stick, then you’ll also comprehend grabbing a ball. New work explores deep learning agents’ ability to do the same thing — an important aspect of their ability to generalize.

What’s new: Psychologists call this kind of thinking systematic reasoning. Researchers at DeepMind, Stanford, and University College London studied this capability in deep reinforcement learning models trained to interact with an environment and complete a task.

Key insight: Felix Hill and colleagues trained a model to put object 1 on location 1, given an example of that action being performed. At test time, they asked the model to put object 2 on location 2. Since object 2 and location 2 weren’t in the training set, the model’s ability to execute the task would indicate a generalized understanding of putting.
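
To make the setup concrete, here’s a minimal sketch of how such a held-out evaluation might be assembled. The object and location names and the make_tasks helper are illustrative assumptions, not the researchers’ code.

```python
# Minimal sketch (not the researchers' code) of a held-out split for probing
# systematic reasoning: "put" instructions are built from one pool of objects
# and locations for training and from a disjoint pool for testing.
import itertools

train_objects, train_locations = ["toothbrush", "cup"], ["bed", "table"]  # hypothetical names
test_objects, test_locations = ["hairbrush", "ball"], ["tray", "box"]     # never seen in training

def make_tasks(objects, locations):
    """Build 'put X on Y' instructions from every object-location pair."""
    return [f"put {o} on {l}" for o, l in itertools.product(objects, locations)]

train_tasks = make_tasks(train_objects, train_locations)  # used to train the agent
test_tasks = make_tasks(test_objects, test_locations)     # probes generalization

print(train_tasks[0])  # put toothbrush on bed
print(test_tasks[0])   # put hairbrush on tray
```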

How it works: The model receives a view of the environment along with a task description (an instruction to put or find a given object). The model processes these elements separately, then combines its understanding of each to determine a series of actions to complete the task.

  • The model comprises three components, the usual choices for image processing, text understanding, and sequential decisions: a CNN processes the environment view, an LSTM interprets the task description, and their outputs merge in a hidden LSTM layer that tracks progress toward completing the task (see the sketch after this list).
  • The model learns to associate various objects with their names by executing put [object] or find [object] tasks.
  • The researchers separate objects into test and training sets. Then they train the model to put or lift objects in the training set.
  • To measure systematic reasoning, they ask it to lift or put objects in the test set.
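
The sketch below shows one way the three-component architecture might be wired up in PyTorch. The layer sizes, frame resolution, vocabulary size, and number of actions are illustrative assumptions, not the paper’s hyperparameters.

```python
# Minimal PyTorch sketch of the three-component agent described above.
# All sizes (frame resolution, vocabulary, hidden width, action count) are
# illustrative assumptions, not the paper's hyperparameters.
import torch
import torch.nn as nn

class PutFindAgent(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=32, hidden_dim=128, num_actions=8):
        super().__init__()
        # CNN encodes the agent's view of the environment (64x64 RGB frames).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 6 * 6, hidden_dim), nn.ReLU(),
        )
        # LSTM encodes the task description, e.g. "put toothbrush on bed".
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Core LSTM merges vision and language and tracks progress across time steps.
        self.core_lstm = nn.LSTMCell(2 * hidden_dim, hidden_dim)
        self.policy = nn.Linear(hidden_dim, num_actions)

    def forward(self, frame, instruction, core_state):
        vision = self.cnn(frame)                        # (batch, hidden_dim)
        _, (text, _) = self.text_lstm(self.embed(instruction))
        merged = torch.cat([vision, text[-1]], dim=-1)  # fuse vision and language
        h, c = self.core_lstm(merged, core_state)
        return self.policy(h), (h, c)                   # action logits + new memory

# One step with a dummy frame and a 5-token instruction.
agent = PutFindAgent()
frame = torch.randn(1, 3, 64, 64)
instruction = torch.randint(0, 100, (1, 5))
state = (torch.zeros(1, 128), torch.zeros(1, 128))
logits, state = agent(frame, instruction, state)
```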

Results: The researchers trained copies of the model in simulated 2D and 3D environments. In both, it lifted novel objects successfully more than 91 percent of the time. However, its success rate at putting novel objects dropped to about 50 percent in both environments.

Yes, but: Removing the task description and the LSTM component didn’t degrade performance much. That is, while words such as put and find may help humans understand how neural networks operate systematically, language apparently isn’t critical to the networks’ performance.

Why it matters: Neural networks are able to generalize, but our understanding of how they do it is incomplete. This research offers a way to evaluate the role of systematic reasoning. The results imply that models that reason systematically are more likely to generalize.

Takeaway: Recent pretrained language models acquire knowledge that enables them to perform a variety of tasks without retraining from scratch. Understanding systematic reasoning in neural networks could lead to better performance in domains outside of natural language.
