Simple Contrastive Learning (SimCLR)

4 Posts


Unsupervised Prejudice

Social biases are well documented in decisions made by supervised models trained on ImageNet's labels. But they have also crept into the output of unsupervised models pretrained on the same dataset.

Fewer Labels, More Learning

Large models pretrained in an unsupervised fashion and then fine-tuned on a smaller corpus of labeled data have achieved spectacular results in natural language processing. New research pushes forward with a similar approach to computer vision.

RL and Feature Extraction Combined

Which comes first, training a reinforcement learning model or extracting high-quality features? New work avoids this chicken-or-egg dilemma by doing both simultaneously.

Self-Supervised Simplicity

A simple linear classifier paired with a self-supervised feature extractor outperformed a supervised deep learning model on ImageNet, according to new research.
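At the heart of the self-supervised feature extractor described above is SimCLR's contrastive objective, known as NT-Xent: embeddings of two augmented views of the same image are pulled together while all other images in the batch are pushed away. The following is a minimal NumPy sketch of that loss, not the authors' implementation; the function name, argument layout (positive pairs at rows i and i+N), and default temperature are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    the contrastive objective behind SimCLR -- illustrative sketch.

    z: array of shape (2N, d) holding two augmented views per image,
       arranged so that rows i and i+N form a positive pair.
    """
    # L2-normalize so dot products become cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = z.shape[0] // 2
    # A row must not treat itself as a candidate: mask the diagonal.
    np.fill_diagonal(sim, -np.inf)

    # Each row's positive partner: i pairs with i+n, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Cross-entropy of picking the positive among all other rows.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Once an encoder is trained with this objective, the "simple linear classifier" in the result above is just a linear probe fit on the frozen embeddings.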
