Unsupervised Learning

Unsupervised Prejudice

Social biases are well documented in decisions made by supervised models trained on ImageNet’s labels. But they have also crept into the output of unsupervised models pretrained on the same dataset.

Fewer Labels, More Learning

Large models pretrained in an unsupervised fashion and then fine-tuned on a smaller corpus of labeled data have achieved spectacular results in natural language processing. New research pushes forward with a similar approach to computer vision.

Choosing Words Carefully: NLP Picks the Right Synonym for Translation

The words “big” and “large” have similar meanings, but they aren’t always interchangeable: You wouldn’t refer to an older male sibling as your “large brother” (unless you meant to be cheeky). Choosing among words with similar meanings is critical in language tasks like translation.

Bigger is Better

Natural language processing has lately come to resemble an arms race, as big AI companies build models with ever larger numbers of parameters. Microsoft recently held the record, but not for long.

Underwater Atlas

The ocean contains distinct ecosystems, but they’re much harder to see than terrestrial forests or savannas. A new model helps scientists better understand patterns of undersea life, which is threatened by pollution, invasive species, and warming temperatures.

Roads to Recovery

Deep learning promises to help emergency responders find their way through disaster zones. MIT researchers developed a tool that maps where hurricanes and other calamities have wiped out roads, showing aid workers the fastest routes to people in need.

Better Than Backprop

End-to-end backpropagation and labeled data are the peanut butter and chocolate of deep learning. However, recent work suggests that neither is necessary to train effective neural networks to represent complex data.

Anima Anandkumar: The Power of Simulation

We’ve had great success with supervised deep learning on labeled data. Now it’s time to explore other ways to learn: training on unlabeled data, lifelong learning, and especially letting models explore a simulated environment before transferring what they learn to the real world.