University of Washington

7 Posts

Animated graphs showing how an ensemble of fine-tuned models can provide better performance.

Ensemble Models Simplified: Averaging Weights Matches Ensemble Performance

A CLIP model whose weights were the mean of an ensemble of fine-tuned models performed as well as the ensemble and better than its best-performing constituent.
2 min read
Schematic of 8-bit optimizers via block-wise dynamic quantization

More Learning With Less Memory

Researchers developed a new way to reduce memory requirements when training large machine learning models. Tim Dettmers and colleagues at the University of Washington released 8-bit optimizers that store gradient statistics as 8-bit values while matching the accuracy of standard 32-bit optimizers.
2 min read
Animation showing how MERLOT is able to match contextualized captions with their corresponding video frames

Richer Video Representations: Pretraining Method Improves AI's Ability to Understand Video

To understand a movie scene, viewers often must remember or infer previous events and extrapolate potential consequences. New work improved a model’s ability to do the same.
2 min read
Diagram of the Oscar+ system at work

Sharper Eyes For Vision+Language

Models that interpret the interplay of words and images tend to be trained on richer bodies of text than images. Recent research worked toward giving such models a more balanced knowledge of the two domains.
2 min read
Oren Etzioni

Oren Etzioni: Tools For Equality

In 2020, I hope the AI community will grapple with issues of fairness in ways that tangibly and directly benefit disadvantaged populations.
1 min read
Collage with photos of people's faces

Public Access, Private Faces

One of the largest open datasets for training face recognition systems has its roots in a popular photo-sharing service. Companies that have used this data could find themselves liable for millions of dollars in damages.
2 min read
Bert and Ernie from Sesame Street

BERT Is Back

Less than a month after XLNet overtook BERT, the pole position in natural language understanding changed hands again. RoBERTa, an improved BERT pretraining recipe, beat its forebear to become the new state-of-the-art language model, at least for the moment.
2 min read
