Examples of InstaHide scrambling images
ResNet

A Privacy Threat Revealed

With access to a trained model, an attacker can use a reconstruction attack to approximate its training data. A method called InstaHide recently won acclaim for promising to make such examples unrecognizable to human eyes while retaining their utility for training.
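As background, InstaHide's encoding combines mixup-style blending with random per-pixel sign flips. The sketch below is illustrative only: the function name, the Dirichlet draw for mixing weights, and the assumption that images are normalized to [-1, 1] are our choices, not the authors' exact recipe.

```python
import numpy as np

def instahide_encode(private_img, public_imgs, k=4, rng=None):
    """Sketch of InstaHide-style encoding: blend the private image with
    randomly chosen public images (mixup), then flip each pixel's sign
    at random so the mixture is harder to invert."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.choice(len(public_imgs), size=k - 1, replace=False)
    coeffs = rng.dirichlet(np.ones(k))          # random convex mixing weights
    mixed = coeffs[0] * private_img
    for c, i in zip(coeffs[1:], idx):
        mixed = mixed + c * public_imgs[i]
    sign_mask = rng.choice([-1.0, 1.0], size=private_img.shape)
    return sign_mask * mixed

rng = np.random.default_rng(0)
private = rng.uniform(-1, 1, size=(8, 8))       # toy image in [-1, 1]
public = rng.uniform(-1, 1, size=(10, 8, 8))    # toy public dataset
encoded = instahide_encode(private, public, rng=rng)
```

Because the mix is a convex combination of images in [-1, 1] followed by sign flips, the encoded pixels stay in that range while the original content is obscured.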
Graphs comparing SGD + Momentum, Adam and AdaBelief

Striding Toward the Minimum

When you’re training a deep learning model, it can take days for an optimization algorithm to minimize the loss function. A new approach could save time.
Examples of contrastive learning

Learning From Words and Pictures

It’s expensive to pay doctors to label medical images, and the relative scarcity of high-quality training examples can make it hard for neural networks to learn features that make for accurate diagnoses.
Face recognition system working on a bear

Caught Bearfaced

Many people worry that face recognition is intrusive, but wild animals seem to find it bearable. Melanie Clapham at the University of Victoria and her teammates on the BearID Project developed a model that performs face recognition for brown bears.
Graphs showing how DeepRhythm detects deepfakes

Deepfakes Are Heartless

The incessant rhythm of a heartbeat could be the key to distinguishing real videos from deepfakes. DeepRhythm detects deepfakes using an approach inspired by the science of measuring minute changes on the skin’s surface due to blood circulation.
Data and examples related to a new technique to detect portions of an image

The Telltale Artifact

Deepfakes have gone mainstream, allowing celebrities to star in commercials without setting foot in a film studio. A new method helps determine whether such endorsements — and other images produced by generative adversarial networks — are authentic.
Examples of Generative Adversarial Networks used for image to illustration translation

Style and Substance

GANs are adept at style transfer: mapping the artistic style of one picture onto the subject of another. However, applied to the fanciful illustrations in children’s books, some GANs prove better at preserving style, others better at preserving subject matter.
Graphs comparing SimCLR to SimCLRv2

Fewer Labels, More Learning

Large models pretrained in an unsupervised fashion and then fine-tuned on a smaller corpus of labeled data have achieved spectacular results in natural language processing. New research pushes forward with a similar approach to computer vision.
Graphs and data related to semi-supervised learning

All Examples Are Not Equal

Semi-supervised learning — a set of training techniques that use a small number of labeled examples and a large number of unlabeled examples — typically treats all unlabeled examples the same way. But some examples are more useful for learning than others.
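For context, one common way to treat unlabeled examples unequally is to weight each one by the model's confidence in its pseudo-label, as FixMatch-style methods do. This is a generic heuristic sketched for illustration, not the method studied in the article.

```python
import numpy as np

def pseudo_label_weights(probs, threshold=0.9):
    """Assign each unlabeled example a hard pseudo-label, and a weight of
    1.0 only if the model's top predicted probability clears a threshold
    (FixMatch-style confidence masking)."""
    confidence = probs.max(axis=1)          # top predicted probability
    pseudo_labels = probs.argmax(axis=1)    # hard pseudo-label per example
    weights = (confidence >= threshold).astype(float)
    return pseudo_labels, weights

probs = np.array([[0.95, 0.05],   # confident: contributes to the loss
                  [0.60, 0.40],   # uncertain: masked out
                  [0.10, 0.90]])  # confident: contributes to the loss
pl, w = pseudo_label_weights(probs)
```

The weights would then multiply each unlabeled example's loss term, so low-confidence examples contribute nothing to training.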
Excerpt from study about models that learn to predict task-specific distance metrics

Misleading Metrics

A growing body of literature shows that some steps in AI’s forward march may actually move sideways. A new study questions advances in metric learning.
Data and graphs related to a method that synthesizes extracted features of underrepresented classes

Augmentation for Features

In any training dataset, some classes may have relatively few examples. A new technique improves a trained model’s performance on such underrepresented classes by synthesizing extracted features for them.
Data and graphs related to teacher networks

Flexible Teachers, Smarter Students

Human teachers can teach more effectively by adjusting their methods in response to student feedback. It turns out that teacher networks can do the same.
Image processing technique explained

Preserving Detail in Image Inputs

Given real-world constraints on memory and processing time, images are often downsampled before they’re fed into a neural network. But the process removes fine details, and that degrades accuracy. A new technique squeezes images with less compromise.
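The loss of detail is easy to see with plain 2x2 average pooling, a common downsampling step. This toy example illustrates the problem, not the new technique:

```python
import numpy as np

def downsample_2x(img):
    """Naive 2x2 average pooling over a single-channel image."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 4x4 image whose only content is a single-pixel "fine detail".
img = np.zeros((4, 4))
img[1, 2] = 1.0
small = downsample_2x(img)   # the detail is averaged down to 0.25
```

After downsampling, the sharp single-pixel feature survives only as a faint smear one quarter its original intensity, which is exactly the kind of degradation that hurts accuracy downstream.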
Data and graphs related to batch normalization

Outside the Norm

Batch normalization is a technique that normalizes layer outputs to accelerate neural network training. But new research shows that it has other effects that may be more important.
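As a refresher, the normalization step batch norm applies can be sketched in a few lines of numpy. This omits inference-time running statistics and the per-channel handling used for convolutional layers:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch axis, then scale and shift."""
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta              # learnable scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))  # a batch of layer outputs
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```

With gamma at 1 and beta at 0, the output batch has roughly zero mean and unit variance per feature, which is the standard account of why training accelerates; the research above asks which of batch norm's side effects matter beyond this.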
Graph related to Simple Contrastive Learning (SimCLR)

Self-Supervised Simplicity

A simple linear classifier paired with a self-supervised feature extractor outperformed a supervised deep learning model on ImageNet, according to new research.
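A minimal sketch of that linear-evaluation protocol: freeze a feature extractor and train only a linear classifier on top. Here a fixed random ReLU projection stands in for the pretrained self-supervised encoder, and the two-class Gaussian data is a toy substitute for ImageNet; both are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen self-supervised encoder: a fixed random projection.
W_frozen = rng.normal(size=(784, 64))
def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)     # frozen ReLU features, never trained

# Toy "images": two classes drawn from shifted Gaussians.
x = np.vstack([rng.normal(-0.5, 1.0, size=(200, 784)),
               rng.normal(+0.5, 1.0, size=(200, 784))])
labels = np.array([0] * 200 + [1] * 200)

# Linear probe: train only a logistic-regression layer on the frozen features.
feats = extract_features(x)
feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
w, b = np.zeros(feats.shape[1]), 0.0
for _ in range(300):                         # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - labels
    w -= 0.1 * feats.T @ grad / len(labels)
    b -= 0.1 * grad.mean()

accuracy = ((feats @ w + b > 0).astype(int) == labels).mean()
```

The point of the protocol is that all representational power comes from the frozen extractor; the linear layer alone is too weak to compensate for poor features, so its accuracy measures the quality of the self-supervised representation.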
