Carnegie Mellon University

12 Posts

Flowcharts show how a new contrastive learning approach uses metadata to improve AI image classifiers

Learning From Metadata: Descriptive Text Improves Performance for AI Image Classification Systems

Images in the wild may not come with labels, but they often include metadata. A new training method takes advantage of this information to improve contrastive learning.
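The idea of treating metadata as a training signal can be sketched with a contrastive objective: pull each image toward the embedding of its own metadata text and push it away from the other texts in the batch. This is a schematic numpy version of an InfoNCE-style loss, not the paper's exact formulation; the embeddings here are stand-ins for real encoder outputs.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.1):
    """Contrastive (InfoNCE-style) loss pairing each image with the
    embedding of its own metadata text; other texts in the batch act
    as negatives. Illustrative sketch only."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature                    # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # matched pairs sit on the diagonal
```

Minimizing this loss drives matched image/metadata pairs to be more similar than mismatched ones, which is the core mechanism such methods exploit.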
A series of graphs show the carbon emissions associated with training AI models.

Cutting the Carbon Cost of Training: A New Tool Helps NLP Models Lower Their Greenhouse Gas Emissions

You can reduce your model’s carbon emissions by being choosy about when and where you train it.
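The "when and where" lever comes down to grid carbon intensity: the same training run emits less CO2 in a region (or at a time) where electricity is cleaner. A back-of-envelope comparison, with entirely hypothetical energy and intensity figures:

```python
# Emissions estimate: energy consumed * grid carbon intensity.
# All numbers below are illustrative placeholders, not real measurements.
ENERGY_KWH = 120.0  # hypothetical energy for one training run

intensity = {        # grams CO2e per kWh, hypothetical regions
    "region_a": 450,
    "region_b": 210,
    "region_c": 30,
}

emissions = {r: ENERGY_KWH * g / 1000 for r, g in intensity.items()}  # kg CO2e
best = min(emissions, key=emissions.get)
print(best, emissions[best])  # lowest-carbon choice for this run
```

The same arithmetic applies over time: scheduling a run for hours when the local grid is cleanest lowers the estimate without changing the model at all.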
Animation showing probability of children who may benefit from intervention

Child-Welfare Agency Drops AI: Oregon and Pennsylvania Halt Use of AI Tool for At-Risk Kids

Officials in charge of protecting children stopped using machine learning models designed to help them make decisions in difficult cases. Oregon halted use of an algorithm intended to identify children who may benefit from intervention.
Illustration of how different data split strategies partition the labelled data

Fine-Tune Your Fine-Tuning: A New Method Optimizes Training for Few-Shot NLP Models

Let’s say you have a pretrained language model and a small amount of data to fine-tune it to answer yes-or-no questions. Should you fine-tune it to classify yes/no or to fill in missing words — both viable approaches that are likely to yield different results?
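The two framings differ only in how the same example is cast for the model. A minimal sketch of the data formats, using a hypothetical example and no particular library (the `[MASK]` token and label mapping are illustrative assumptions):

```python
# Two ways to cast a yes/no question for fine-tuning (schematic).
example = {"question": "Is the sky blue?", "answer": "yes"}

# Option 1: classification format. A task-specific output head maps
# the raw text to a label index.
cls_input = example["question"]
cls_label = {"no": 0, "yes": 1}[example["answer"]]

# Option 2: cloze (fill-in-the-blank) format. The task is recast as
# predicting a masked word, reusing the pretrained masked-LM head.
cloze_input = f'{example["question"]} Answer: [MASK].'
cloze_target = example["answer"]

print(cls_input, cls_label)
print(cloze_input, cloze_target)
```

With only a handful of training examples, the choice between these formats (and how to split the few labeled examples for validation) can swing results substantially, which is the question the method addresses.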
A four-legged robot walking over difficult and changing terrain

Walking the Dog

A reinforcement learning system enabled a four-legged robot to amble over unfamiliar, rapidly changing terrain.
Neural networks generating novel views of a 3D scene based on existing pictures

3D Scene Synthesis for the Real World

Researchers have used neural networks to generate novel views of a 3D scene based on existing pictures plus the positions and angles of the cameras that took them. In practice, though, you may not know the precise camera positions and angles.
Collage of self portraits

Unsupervised Prejudice

Social biases are well documented in decisions made by supervised models trained on ImageNet’s labels. But they also crept into the output of unsupervised models pretrained on the same dataset.
Data related to Covid-19 symptoms prediction

Cats Cured of Covid

Neural networks are famously bad at interpreting input that falls outside the training set’s distribution, so it’s not surprising that some models are certain that cat pictures show symptoms of Covid-19. A new approach won’t mistakenly condemn your feline to a quarantine.
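A common baseline for this problem is to abstain whenever the classifier's confidence falls below a threshold. The sketch below shows that baseline in plain numpy; note that raw softmax confidence is itself known to be overconfident on out-of-distribution input, which is exactly why more robust scoring methods (like the one in the article) are needed. This is an illustration of the problem setting, not the article's approach.

```python
import numpy as np

def predict_or_abstain(logits, threshold=0.9):
    """Abstain-on-low-confidence baseline: return a class index only
    when the softmax confidence clears the threshold, else None.
    Illustrative only; softmax confidence alone is unreliable
    off-distribution."""
    z = logits - np.max(logits)            # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return int(np.argmax(p)) if p.max() >= threshold else None
```

A model that abstains on uncertain input would refuse to diagnose the cat picture rather than confidently labeling it with Covid-19 symptoms.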
Data and graphs related to teacher networks

Flexible Teachers, Smarter Students

Human teachers can teach more effectively by adjusting their methods in response to student feedback. It turns out that teacher networks can do the same.
Graph related to Mixture of Softmaxes (MoS)

Upgrading Softmax

Softmax commonly computes probabilities in a classifier’s output layer. But softmax isn’t always accurate in complex tasks — say, in a natural-language task, when the length of word vectors is much smaller than the number of words in the vocabulary.
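The limitation is a rank bottleneck: with hidden size d much smaller than vocabulary size V, the logits are a rank-d linear map, so a single softmax can't represent every distribution over words. Mixture of Softmaxes sidesteps this by mixing K softmaxes computed from K projected contexts. A schematic numpy sketch with toy dimensions (the projection and mixing details here are illustrative, not the paper's exact parameterization):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, V, K = 8, 100, 3              # hidden size d << vocab size V; K mixture components

h = rng.normal(size=d)           # context vector from the model
W = rng.normal(size=(V, d))      # output word-embedding matrix

# Standard softmax: logits W @ h form a rank-d map, limiting expressiveness.
p_single = softmax(W @ h)

# Mixture of Softmaxes: project h into K contexts, then mix their softmaxes
# with learned prior weights (random here, purely for illustration).
P = rng.normal(size=(K, d, d))       # per-component projections
prior = softmax(rng.normal(size=K))  # mixture weights, sum to 1
p_mos = sum(prior[k] * softmax(W @ np.tanh(P[k] @ h)) for k in range(K))
```

Because the mixture is taken after the nonlinear softmax, the resulting log-probability matrix over contexts is no longer rank-limited to d, which is the expressiveness gain.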
Illustration of a fireplace with "Happy holidays" cards in English, Spanish and French

Natural Language Processing Models Get Literate: Top NLP Advances in 2019

Earlier language models powered by Word2Vec and GloVe embeddings yielded confused chatbots, grammar tools with middle-school reading comprehension, and not-half-bad translations. The latest generation is so good, some people consider it dangerous.
Graph related to Noisy Student performance on ImageNet

Self-Training for Sharper Vision

The previous state-of-the-art image classifier was trained on the ImageNet dataset plus 3.5 billion supplemental images from a different database. A new method achieved higher accuracy with one-tenth as many supplemental examples — and they were unlabeled, to boot.
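The core loop behind such self-training methods: a teacher model pseudo-labels unlabeled images, confident predictions join the training set, and a student is trained on the enlarged set. A schematic round of that loop (Noisy Student additionally injects noise and trains a larger student; this sketch omits those details):

```python
import numpy as np

def self_train(teacher_predict, labeled, unlabeled, threshold=0.9):
    """One round of self-training: pseudo-label unlabeled examples with
    the teacher, keep only confident predictions, and return the
    enlarged training set. Schematic sketch, not the paper's pipeline."""
    X_l, y_l = labeled
    keep_X, keep_y = [], []
    for x in unlabeled:
        probs = teacher_predict(x)               # teacher's class probabilities
        if probs.max() >= threshold:             # keep only confident pseudo-labels
            keep_X.append(x)
            keep_y.append(int(probs.argmax()))
    if not keep_X:
        return X_l, y_l
    X_new = np.concatenate([X_l, np.array(keep_X)])
    y_new = np.concatenate([y_l, np.array(keep_y)])
    return X_new, y_new
```

Iterating the loop, with the trained student becoming the next round's teacher, is what lets unlabeled data substitute for billions of labeled supplemental examples.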
