Self-Supervised Learning

8 Posts


Alexei Efros: Learning from the ground up.

Things are really starting to get going in the field of AI. After many years (decades?!) of focusing on algorithms, the AI community is finally ready to accept the central role of data and the high-capacity models that are capable of taking advantage of this data.

A Deeper Look at Graphs: Graph Neural Networks Work Better With More Layers

New research shows that drastically increasing the number of layers in a graph neural network improves its performance on large datasets.

Unlabeled Brainwaves Spill Secrets

For people with neurological disorders like epilepsy, attaching sensors to the scalp to measure electrical currents within the brain is benign. But interpreting the resulting electroencephalogram (EEG) graphs can give doctors a headache.

Fewer Labels, More Learning

Large models pretrained in an unsupervised fashion and then fine-tuned on a smaller corpus of labeled data have achieved spectacular results in natural language processing. New research pushes forward with a similar approach to computer vision.
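The pretraining step in contrastive methods like SimCLR optimizes an objective of this general shape: pull two augmented views of the same image together while pushing apart views of different images. A minimal numpy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss, illustrative only and not the paper's implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss, the SimCLR pretraining objective.
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2])                      # (2N, D)
    sim = z @ z.T / temperature                       # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                    # a view is not its own positive
    n = len(z1)
    # Sample i's positive is the other augmented view of the same image.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
views = rng.normal(size=(4, 8))
# Identical views (perfect positives) should score a lower loss
# than views paired with unrelated embeddings.
aligned = nt_xent_loss(views, views.copy())
mismatched = nt_xent_loss(views, rng.normal(size=(4, 8)))
```

The `temperature` of 0.5 here is illustrative; in practice it is tuned, and the embeddings come from an encoder plus projection head rather than random arrays.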

Self-Supervised Simplicity

A simple linear classifier paired with a self-supervised feature extractor outperformed a supervised deep learning model on ImageNet, according to new research.
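The evaluation protocol described here is often called a linear probe: freeze the self-supervised feature extractor and train only a linear classifier on its outputs. A minimal sketch, with synthetic features standing in for a frozen encoder's embeddings (all names and data here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for features from a frozen self-supervised encoder:
# each sample's feature at index equal to its class label is boosted,
# so the probe has a linear signal to find.
n, d, classes = 200, 16, 3
y = rng.integers(0, classes, size=n)
features = rng.normal(size=(n, d))
features[np.arange(n), y] += 3.0

# Linear probe: multinomial logistic regression trained by gradient descent.
W = np.zeros((d, classes))
onehot = np.eye(classes)[y]
for _ in range(500):
    logits = features @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                 # softmax probabilities
    W -= 0.1 * features.T @ (p - onehot) / n          # cross-entropy gradient step

accuracy = ((features @ W).argmax(axis=1) == y).mean()
```

The probe's accuracy is the standard proxy for feature quality: if a linear model separates the classes well, the self-supervised features already encode them.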

Meeting of the Minds

Geoffrey Hinton, Yoshua Bengio, and Yann LeCun presented their latest thinking about deep learning’s limitations and how to overcome them.

Better Than Backprop

End-to-end backpropagation and labeled data are the peanut butter and chocolate of deep learning. However, recent work suggests that neither is necessary to train effective neural networks to represent complex data.

Yann LeCun: Learning From Observation

How is it that many people learn to drive a car fairly safely in 20 hours of practice, while current imitation learning algorithms take hundreds of thousands of hours, and reinforcement learning algorithms take millions of hours? Clearly we’re missing something big.
