Geoffrey Hinton, Yoshua Bengio, and Yann LeCun presented their latest thinking about deep learning’s limitations and how to overcome them.

What’s new: The deep-learning pioneers discussed how to improve machine learning, perception, and reasoning at the Association for the Advancement of Artificial Intelligence (AAAI) conference in New York.

What they said: Deep learning needs better ways to understand the world, they said. Each is working toward that goal from a different angle:

  • Bengio, a professor at the Université de Montréal, observed that deep learning’s results are analogous to the type of human thought — which cognitive scientists call system 1 thinking — that is habitual and occurs below the surface of consciousness. He aims to develop systems capable of the more attentive system 2 thinking that enables people to respond effectively to novel situations. He described a consciousness prior, made up of high-level variables with a high probability of being true, which would enable AI agents to track abstract changes in the world. That way, understanding, say, whether a person is wearing glasses would be a matter of one bit rather than many pixels (a toy sketch of this idea follows the list).
  • Hinton, who divides his time between Google Brain and the University of Toronto, began by noting the limitations of convolutional neural networks when it comes to understanding three-dimensional objects. The latest version of his stacked-capsule autoencoder is designed to overcome that issue. The model learns to represent objects independently of their orientation in space, so it can recognize objects despite variations in point of view, lighting, or noise.
  • Facebook AI chief LeCun noted that, while supervised learning has accomplished amazing things, it requires tremendous amounts of labeled data. Self-supervised learning, in which a network learns by filling in blanks in input data, opens new vistas. This technique has had great success in language models, but it has yet to work well in visual tasks. To bridge the gap, LeCun is betting on energy-based models that measure compatibility between an observation (such as a segment of video) and the desired prediction (such as the next frame); a minimal sketch of this idea also appears after the list.
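
To make Bengio’s “one bit rather than many pixels” point concrete, here is a toy sketch in PyTorch. It is not the consciousness prior itself; it only illustrates the kind of abstraction he described, in which an encoder maps raw pixels to a handful of near-binary high-level attributes that an agent can track across time. The architecture, attribute count, and dimensions are illustrative assumptions.

```python
# Toy illustration (not Bengio's consciousness prior): compress pixels into a
# few high-level, near-binary attributes (e.g., "wearing glasses") so that an
# agent can track one bit per attribute instead of thousands of pixels.
import torch
import torch.nn as nn

class HighLevelEncoder(nn.Module):
    """Maps raw pixels to a few near-binary high-level attributes."""
    def __init__(self, num_pixels=64 * 64, num_attributes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_pixels, 256),
            nn.ReLU(),
            nn.Linear(256, num_attributes),  # one logit per abstract attribute
        )

    def forward(self, pixels):
        return torch.sigmoid(self.encoder(pixels))  # each output acts like one "bit"

# Track abstract change between frames by comparing bits, not pixels.
encoder = HighLevelEncoder()
frame_t = torch.rand(1, 64 * 64)         # stand-in for the current frame
frame_t_plus_1 = torch.rand(1, 64 * 64)  # stand-in for the next frame
bits_now = encoder(frame_t).round()
bits_next = encoder(frame_t_plus_1).round()
changed = (bits_now != bits_next).squeeze(0)  # which high-level facts flipped
```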
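
LeCun’s energy-based framing can be sketched just as briefly. The code below is a minimal, assumption-laden illustration rather than Facebook’s method: a small network assigns a scalar energy to an (observation, prediction) pair, and a simple contrastive step pushes energy down for observed pairs and up for mismatched ones. The feature dimensions, margin loss, and negative-sampling scheme are all illustrative choices.

```python
# Minimal sketch of an energy-based model: a scalar "energy" scores how
# compatible a prediction is with an observation (low energy = compatible).
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    """Assigns a scalar energy to an (observation, prediction) pair."""
    def __init__(self, obs_dim=64, pred_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + pred_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar energy
        )

    def forward(self, obs, pred):
        return self.net(torch.cat([obs, pred], dim=-1)).squeeze(-1)

def contrastive_step(model, optimizer, obs, next_frame, margin=1.0):
    """One training step: lower energy for true (obs, next_frame) pairs,
    raise it (up to a margin) for shuffled, mismatched pairs."""
    negatives = next_frame[torch.randperm(next_frame.size(0))]
    e_pos = model(obs, next_frame)   # should become small
    e_neg = model(obs, negatives)    # should become large
    loss = (e_pos + torch.relu(margin - e_neg)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with made-up feature vectors standing in for video-frame embeddings.
model = EnergyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(32, 64)         # embeddings of observed video segments
next_frame = torch.randn(32, 64)  # embeddings of the frames that followed
loss = contrastive_step(model, optimizer, obs, next_frame)
```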

Behind the news: The Association for Computing Machinery awarded Bengio, Hinton, and LeCun the 2018 A. M. Turing Award for their work. The association credits the trio’s accomplishments, including breakthroughs in backpropagation, computer vision, and natural language processing, with reinvigorating AI.

Words of wisdom: Asked by the moderator what students of machine learning should read, Hinton offered the counterintuitive observation that “reading rots the mind.” He recommended that practitioners figure out how they would solve a given problem, and only then read about how others solved it.

We’re thinking: We apologize for rotting your mind.
