Pain Points in Black and White

A model designed to assess medical patients’ pain levels matched the patients’ own reports better than doctors’ estimates did — when the patients were Black.
Examples of InstaHide scrambling images

A Privacy Threat Revealed

With access to a trained model, an attacker can use a reconstruction attack to approximate its training data. A method called InstaHide recently won acclaim for promising to make such examples unrecognizable to human eyes while retaining their utility for training.
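The core idea behind InstaHide is to mix each private image with other images and randomly flip pixel signs. A minimal toy sketch of that mixing scheme, assuming a hypothetical helper `instahide_encode` and simplified coefficient sampling:

```python
import numpy as np

def instahide_encode(private, publics, rng):
    """Toy InstaHide-style encoding: mix a private image with public
    images via random convex weights, then flip pixel signs randomly."""
    images = [private] + publics
    weights = rng.dirichlet(np.ones(len(images)))      # convex mixing weights
    mixed = sum(w * img for w, img in zip(weights, images))
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)   # random sign flips
    return mask * mixed

rng = np.random.default_rng(0)
private = rng.random((4, 4))                 # stand-in for a private image
publics = [rng.random((4, 4)) for _ in range(2)]
encoded = instahide_encode(private, publics, rng)
```

The attack in question showed that such encodings can still leak information about the private image.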
Graphs comparing SGD + Momentum, Adam and AdaBelief

Striding Toward the Minimum

When you’re training a deep learning model, it can take days for an optimization algorithm to minimize the loss function. A new approach could save time.
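The new optimizer, AdaBelief, modifies Adam's second-moment estimate: rather than tracking squared gradients, it tracks the squared deviation of each gradient from its running mean. A minimal single-parameter sketch (simplified, not the authors' implementation):

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief update: like Adam, but the second moment tracks the
    variance of gradients around their running mean (the "belief")."""
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2  # Adam would use grad**2
    m_hat = m / (1 - beta1 ** t)                   # bias correction
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s

# Usage: minimize f(theta) = theta**2 from theta = 1.0.
theta, m, s = 1.0, 0.0, 0.0
for t in range(1, 101):
    grad = 2 * theta
    theta, m, s = adabelief_step(theta, grad, m, s, t, lr=0.01)
```

When gradients are consistent, `s` stays small and AdaBelief takes larger, SGD-like steps; when they vary, it behaves more like Adam.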
Examples of contrastive learning

Learning From Words and Pictures

It’s expensive to pay doctors to label medical images, and the relative scarcity of high-quality training examples can make it hard for neural networks to learn features that make for accurate diagnoses.
Face recognition system working on a bear

Caught Bearfaced

Many people worry that face recognition is intrusive, but wild animals seem to find it bearable. Melanie Clapham of the University of Victoria and her teammates on the BearID Project developed a model that performs face recognition for brown bears.
Graphs showing how DeepRhythm detects deepfakes

Deepfakes Are Heartless

The incessant rhythm of a heartbeat could be the key to distinguishing real videos from deepfakes. DeepRhythm detects deepfakes using an approach inspired by the science of measuring minute changes in skin color caused by blood circulation.
Data and examples related to a new technique to detect portions of an image

The Telltale Artifact

Deepfakes have gone mainstream, allowing celebrities to star in commercials without setting foot in a film studio. A new method helps determine whether such endorsements — and other images produced by generative adversarial networks — are authentic.
Examples of Generative Adversarial Networks used for image to illustration translation

Style and Substance

GANs are adept at style transfer: mapping the artistic style of one picture onto the subject of another. However, applied to the fanciful illustrations in children’s books, some GANs prove better at preserving style, others better at preserving subject matter.
Graphs comparing SimCLR to SimCLRv2

Fewer Labels, More Learning

Large models pretrained in an unsupervised fashion and then fine-tuned on a smaller corpus of labeled data have achieved spectacular results in natural language processing. New research pushes a similar approach forward in computer vision.
Graphs and data related to semi-supervised learning

All Examples Are Not Equal

Semi-supervised learning — a set of training techniques that use a small number of labeled examples and a large number of unlabeled examples — typically treats all unlabeled examples the same way. But some examples are more useful for learning than others.
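One simple way to act on that observation is to weight each unlabeled example by the model's confidence in its pseudo-label, so uncertain examples contribute less to the loss. A minimal sketch of that idea (hypothetical helper `pseudo_label_weights`; the threshold value is illustrative):

```python
import numpy as np

def pseudo_label_weights(probs, threshold=0.8):
    """Assign pseudo-labels to unlabeled examples and weight them by
    prediction confidence; examples below the threshold get weight 0."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    weights = np.where(confidence >= threshold, confidence, 0.0)
    return labels, weights

# Predicted class probabilities for two unlabeled examples.
probs = np.array([[0.95, 0.05],   # confident: kept with high weight
                  [0.55, 0.45]])  # uncertain: weight 0
labels, weights = pseudo_label_weights(probs)
```

The weights would then multiply each example's term in the unlabeled portion of the training loss.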
Excerpt from study about models that learn to predict task-specific distance metrics

Misleading Metrics

A growing body of literature shows that some steps in AI’s forward march may actually move sideways. A new study questions advances in metric learning.
Data and graphs related to a method that synthesizes extracted features of underrepresented classes

Augmentation for Features

In any training dataset, some classes may have relatively few examples. A new technique improves a trained model’s performance on such underrepresented classes by synthesizing additional extracted features for them.
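One way to sketch feature-space augmentation of this kind: perturb a rare class's real feature vectors using the intra-class variation estimated from a well-represented class. This toy version (hypothetical helper `synthesize_features`) is a simplification, not the researchers' exact method:

```python
import numpy as np

def synthesize_features(rare_feats, common_feats, n_new, rng):
    """Generate new feature vectors for an underrepresented class by
    perturbing its real features with the per-dimension spread
    estimated from a well-represented class."""
    spread = common_feats.std(axis=0)        # variation to transfer
    base = rare_feats[rng.integers(len(rare_feats), size=n_new)]
    noise = rng.normal(0.0, spread, size=(n_new, rare_feats.shape[1]))
    return base + noise

rng = np.random.default_rng(0)
rare = rng.random((3, 5))      # 3 examples of a rare class, 5-dim features
common = rng.random((50, 5))   # 50 examples of a common class
new_feats = synthesize_features(rare, common, n_new=6, rng=rng)
```

The synthesized features can then be fed to the classifier head as extra training examples for the rare class.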
Data and graphs related to teacher networks

Flexible Teachers, Smarter Students

Human teachers can teach more effectively by adjusting their methods in response to student feedback. It turns out that teacher networks can do the same.
Image processing technique explained

Preserving Detail in Image Inputs

Given real-world constraints on memory and processing time, images are often downsampled before they’re fed into a neural network. But the process removes fine details, and that degrades accuracy. A new technique squeezes images with less compromise.
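The detail loss from naive downsampling is easy to demonstrate: average pooling a fine checkerboard pattern washes it out entirely. A minimal sketch (not the paper's method, just the baseline it improves on):

```python
import numpy as np

def downsample(image, factor):
    """Average-pool a 2D image by `factor`, the kind of naive resizing
    commonly applied before feeding images to a network."""
    h, w = image.shape
    trimmed = image[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))

# A fine checkerboard: the highest-frequency detail an image can hold.
checker = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
small = downsample(checker, 2)   # every 2x2 block averages to 0.5
```

The pattern vanishes into a uniform gray, which is exactly the kind of fine detail a smarter compression scheme tries to preserve.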
Data and graphs related to batch normalization

Outside the Norm

Batch normalization is a technique that normalizes layer outputs to accelerate neural network training. But new research shows that it has other effects that may be more important.
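For reference, the mechanics of batch normalization itself are compact: normalize each feature over the batch, then rescale with learned parameters. A minimal forward-pass sketch:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass: standardize each feature over
    the batch dimension, then apply a learned scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.random((8, 3))                    # batch of 8, 3 features
out = batch_norm(x, gamma=2.0, beta=1.0)  # mean ~1, std ~2 per feature
```

The research in question examines what this operation does to training beyond its stated purpose of stabilizing layer outputs.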
