Graphs showing how DeepRhythm detects deepfakes
Machine Learning Research

Deepfakes Are Heartless: AI detects deepfaked videos by their lack of heartbeat.

The incessant rhythm of a heartbeat could be the key to distinguishing real videos from deepfakes. DeepRhythm detects deepfakes using an approach inspired by the science of measuring minute changes on the skin’s surface due to blood circulation.
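The underlying signal processing is simple enough to sketch: average the green-channel intensity of a facial region over time, then read the pulse off the dominant frequency. This is a minimal illustration of remote photoplethysmography, not DeepRhythm's actual network; the function name and synthetic trace are invented for the example.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (bpm) from the per-frame mean green-channel
    intensity of a face region, via the dominant FFT frequency."""
    signal = green_means - np.mean(green_means)      # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)  # plausible pulse range
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 72 bpm (1.2 Hz) pulse sampled at 30 fps for 10 seconds.
t = np.arange(300) / 30.0
trace = 100.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_heart_rate(trace, fps=30)))  # → 72
```

A deepfake generator that gets the face right but scrambles these subtle color fluctuations leaves no coherent peak in the pulse band.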
John Conway's Game of Life

Life Is Easier for Big Networks: Neural networks learn better with more parameters.

According to the lottery ticket hypothesis, the bigger the neural network, the more likely some of its weights are initialized to values that are well suited to learning to perform the task at hand. But just how big does it need to be?
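The hypothesis itself is easy to state in code: train a large network, keep only the largest-magnitude weights, and rewind the survivors to their initial values; that sparse subnetwork is the candidate "winning ticket." A toy sketch in which "training" is simulated by adding noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy layer: initial weights, and the same weights after "training"
# (here simulated by adding noise rather than running real updates).
init_w = rng.normal(size=(8, 8))
trained_w = init_w + rng.normal(scale=0.5, size=(8, 8))

def winning_ticket(init_w, trained_w, keep_frac=0.2):
    """Lottery-ticket pruning sketch: keep the largest-magnitude trained
    weights, then rewind the survivors to their *initial* values."""
    k = int(keep_frac * trained_w.size)
    threshold = np.sort(np.abs(trained_w).ravel())[-k]
    mask = np.abs(trained_w) >= threshold
    return init_w * mask, mask

ticket, mask = winning_ticket(init_w, trained_w)
print(mask.sum())   # number of surviving weights
```

The bigger the initial network, the more candidate subnetworks this procedure has to choose from, which is why scale helps.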
Graphs and examples of Network dissection technique

What One Neuron Knows: How convolutional neural network layers recognize objects.

How does a convolutional neural network recognize a photo of a ski resort? New research shows that it bases its classification on specific neurons that recognize snow, mountains, trees, and houses.
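Network dissection works by checking where individual units fire strongly and comparing those regions against labeled concepts such as snow or trees. A stripped-down sketch of the thresholding step, run on made-up activations:

```python
import numpy as np

def concept_mask(activations, unit, quantile=0.99):
    """Network-dissection-style probe: threshold one convolutional unit's
    activation map to find the spatial regions it responds to. The
    resulting mask can then be compared against segmentation labels."""
    fmap = activations[unit]                 # (H, W) map for one unit
    tau = np.quantile(fmap, quantile)        # high-activation cutoff
    return fmap > tau                        # binary concept mask

# Hypothetical activations: 16 units over an 8x8 feature map.
rng = np.random.default_rng(1)
acts = rng.random((16, 8, 8))
mask = concept_mask(acts, unit=3)
print(mask.shape, mask.sum())
```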
Examples of AI generated images

GANs for Smaller Data: Training GANs on small data without overfitting

Trained on a small dataset, generative adversarial networks (GANs) tend to generate either replicas of the training data or noisy output. A new method spurs them to produce satisfying variations.
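One family of fixes augments both real and generated images with the same random transforms before the discriminator sees them, so the discriminator can't simply memorize the small training set. A generic sketch of that idea; the brightness and flip transforms are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def augment_batch(batch, rng):
    """Apply the same kinds of random transforms to any batch (real or
    fake) on its way to the discriminator: a per-image brightness shift
    and a random horizontal flip. Batch layout is NCHW."""
    shift = rng.uniform(-0.2, 0.2, size=(batch.shape[0], 1, 1, 1))
    batch = batch + shift                    # copies; input is untouched
    flip = rng.random(batch.shape[0]) < 0.5
    batch[flip] = batch[flip, :, :, ::-1]    # mirror selected images
    return batch

rng = np.random.default_rng(0)
real = np.zeros((4, 3, 8, 8))    # placeholder batches of 8x8 RGB images
fake = np.ones((4, 3, 8, 8))
aug_real, aug_fake = augment_batch(real, rng), augment_batch(fake, rng)
print(aug_real.shape)
```

Because both streams pass through the same augmentation, the discriminator can't use the transforms themselves as a tell.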
Graphs and data related to RubiksShift

More Efficient Action Recognition: Using Active Shift Layer to analyze time series data

Recognizing actions performed in a video requires understanding each frame and the relationships between frames. Previous research devised an efficient way to analyze individual images, known as the Active Shift Layer (ASL). New research extends this technique to the steady march of video frames.
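Shift layers replace convolution arithmetic with free memory moves: slide some channels along an axis and let subsequent layers mix the result. A generic temporal-shift sketch in the spirit of this line of work, not the authors' exact learned-shift formulation:

```python
import numpy as np

def temporal_shift(x, shift_frac=0.25):
    """Shift a fraction of channels one frame forward in time and an
    equal fraction one frame backward, mixing information across frames
    at zero multiply-adds. x has shape (T, C, H, W)."""
    out = np.zeros_like(x)
    fold = int(x.shape[1] * shift_frac)
    out[1:, :fold] = x[:-1, :fold]                    # shift forward
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]    # shift backward
    out[:, 2 * fold:] = x[:, 2 * fold:]               # rest stays put
    return out

x = np.arange(4 * 8 * 2 * 2, dtype=float).reshape(4, 8, 2, 2)
y = temporal_shift(x)
print(y.shape)
```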
Data and examples related to IMLE-GAN

Making GANs More Inclusive: A technique to help GANs represent their datasets fairly

A typical GAN’s output doesn’t necessarily reflect the data distribution of its training set. Instead, GANs are prone to modeling the majority of the training distribution, sometimes ignoring rare attributes — say, faces that represent minority populations.
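The method builds on implicit maximum likelihood estimation (IMLE), which flips the usual GAN objective: instead of asking whether each generated sample looks real, it ensures that every real example, including rare ones, has a generated sample nearby. The matching step can be sketched on toy 2-D data (function name and data invented for illustration):

```python
import numpy as np

def imle_match(real, fake):
    """IMLE-style matching sketch: for *every* real example, find its
    nearest generated sample. Training then pulls each matched sample
    toward its real target, so rare examples are never ignored."""
    d = ((real[:, None, :] - fake[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)    # index of the nearest fake per real point

real = np.array([[0.0, 0.0], [10.0, 10.0]])   # second point: a rare mode
fake = np.array([[0.1, 0.1], [0.2, 0.0], [9.5, 10.2]])
print(imle_match(real, fake))   # each real point gets a partner
```

A standard GAN discriminator would be satisfied if all the fakes clustered around the majority mode; the matching step above is not.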
Data and examples related to a new technique to detect portions of an image

The Telltale Artifact: A technique for detecting GAN-generated deepfakes

Deepfakes have gone mainstream, allowing celebrities to star in commercials without setting foot in a film studio. A new method helps determine whether such endorsements — and other images produced by generative adversarial networks — are authentic.
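Many detectors in this vein look for frequency-domain fingerprints, since a GAN's upsampling layers tend to leave periodic high-frequency patterns. A generic spectral probe, not the paper's exact detector:

```python
import numpy as np

def high_freq_ratio(img, eps=1e-8):
    """Compare high- to low-frequency energy in an image's 2-D spectrum.
    Periodic upsampling artifacts push this ratio up relative to
    natural, mostly low-frequency images."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)   # radius from the DC bin
    cutoff = min(h, w) / 4
    return spec[r >= cutoff].sum() / (spec[r < cutoff].sum() + eps)

smooth = np.ones((16, 16))                   # artifact-free stand-in
yy, xx = np.indices((16, 16))
checker = ((yy + xx) % 2).astype(float)      # checkerboard-style artifact
print(high_freq_ratio(smooth) < high_freq_ratio(checker))
```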
Examples of Generative Adversarial Networks used for image to illustration translation

Style and Substance: An improved GAN technique for style transfer

GANs are adept at mapping the artistic style of one picture onto the subject of another, a task known as style transfer. However, applied to the fanciful illustrations in children’s books, some GANs prove better at preserving style, others better at preserving subject matter.
Graphs related to different attention mechanisms

More Efficient Transformers: BigBird is an efficient attention mechanism for transformers.

As transformer networks move to the fore in applications from language to vision, the time it takes them to crunch longer sequences becomes a more pressing issue. A new method lightens the computational load using sparse attention.
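Instead of letting every token attend to every other, which costs quadratic time, BigBird-style sparse attention restricts each token to a local window plus a few global and random tokens. A toy attention mask with invented sizes, much smaller than the published configuration:

```python
import numpy as np

def sparse_attention_mask(n, window=1, n_global=1, n_random=1, seed=0):
    """BigBird-style sparsity pattern: each of n tokens attends to a
    sliding window of neighbors, a handful of random tokens, and a few
    global tokens that attend to (and are attended by) everything."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True                          # sliding window
        mask[i, rng.integers(0, n, n_random)] = True   # random links
    mask[:, :n_global] = mask[:n_global, :] = True     # global tokens
    return mask

m = sparse_attention_mask(16)
print(m.sum(), 16 * 16)   # far fewer links than full attention
```

Attention scores are then computed only where the mask is true, so cost grows roughly linearly with sequence length.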
Example of Occupancy Anticipation, a navigation system that predicts unseen obstacles, working

Guess What Happens Next: Research teaches robots to predict unseen obstacles.

New research teaches robots to anticipate what’s coming rather than focusing on what’s right in front of them. Researchers developed Occupancy Anticipation (OA), a navigation system that predicts unseen obstacles in addition to observing those in its field of view.
Bert (muppet) and information related to BERT (transformer-based machine learning technique)

Do Muppets Have Common Sense?: The Bert NLP model scores high on a common sense test.

Two years after it pointed a new direction for language models, Bert still hovers near the top of several natural language processing leaderboards. A new study considers whether Bert simply excels at tracking word order or learns something closer to common sense.
Electroencephalogram (EEG) and data related to contrastive predictive coding (CPC)

Unlabeled Brainwaves Spill Secrets: Deep learning helps doctors interpret EEGs.

For people with neurological disorders like epilepsy, attaching sensors to the scalp to measure electrical currents within the brain is benign. But interpreting the resulting electroencephalogram (EEG) graphs can give doctors a headache.
Graphs comparing SimCLR to SimCLRv2

Fewer Labels, More Learning: How SimCLRv2 improves image recognition with fewer labels

Large models pretrained in an unsupervised fashion and then fine-tuned on a smaller corpus of labeled data have achieved spectacular results in natural language processing. New research pushes forward with a similar approach to computer vision.
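The recipe rests on a contrastive pretraining loss: embeddings of two augmented views of the same image should match, while views of different images should not. A numpy sketch of an NT-Xent-style loss, the family SimCLR uses; batch size, embedding dimension, and temperature here are arbitrary:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss sketch: z1[i] and z2[i] are embeddings
    of two augmented views of image i. Each view's positive is its
    partner; every other embedding in the batch is a negative."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 16))
loss_mismatched = nt_xent(z1, rng.normal(size=(4, 16)))
loss_matched = nt_xent(z1, z1 + 0.01 * rng.normal(size=(4, 16)))
print(loss_matched < loss_mismatched)   # aligned views score lower loss
```

After this unsupervised stage, only a small labeled set is needed to fine-tune a classifier head.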
Graphs related to a comparison and evaluation of 14 different optimizers

Optimizer Shootout: An evaluation of 14 deep learning optimizers

Everyone has a favorite optimization method, but it’s not always clear which one works best in a given situation. New research aims to establish a set of benchmarks. Researchers evaluated 14 popular optimizers using the Deep Optimization Benchmark Suite, which some of them introduced last year.
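The flavor of such a benchmark can be shown on a toy problem: run each optimizer's update rule on the same fixed objective and compare final losses. This stand-in uses a two-dimensional quadratic rather than the suite's real workloads, and the hyperparameters are arbitrary:

```python
import numpy as np

def benchmark(optimizer_step, steps=200):
    """Tiny stand-in for an optimizer benchmark: minimize a fixed
    ill-conditioned quadratic f(w) = w^T A w and report the final loss."""
    A = np.diag([1.0, 10.0])
    w, state = np.array([1.0, 1.0]), {}
    for t in range(1, steps + 1):
        grad = 2 * A @ w
        w = optimizer_step(w, grad, state, t)
    return w @ A @ w

def sgd(w, grad, state, t, lr=0.01):
    return w - lr * grad

def adam(w, grad, state, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    m = state["m"] = b1 * state.get("m", 0) + (1 - b1) * grad
    v = state["v"] = b2 * state.get("v", 0) + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

for name, opt in [("sgd", sgd), ("adam", adam)]:
    print(name, benchmark(opt))
```

As the study's breadth suggests, rankings on one objective rarely carry over to another, which is why a shared suite of tasks matters.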
Data and information related to dropout

Dropout With a Difference: Reduce neural net overfitting without impacting accuracy

The technique known as dropout discourages neural networks from overfitting by deterring them from reliance on particular features. A new approach reorganizes the process to run efficiently on the chips that typically run neural network calculations.
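For reference, standard "inverted" dropout zeroes a random fraction p of activations during training and scales the survivors by 1/(1-p), so expected activations are unchanged and inference needs no rescaling:

```python
import numpy as np

def dropout(x, p=0.5, rng=None, training=True):
    """Inverted dropout: during training, zero each activation with
    probability p and scale the rest by 1/(1-p); at inference, pass
    activations through unchanged."""
    if not training:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(1000)
y = dropout(x, p=0.5, rng=rng)
print(y.mean())   # close to 1.0 in expectation
```

The new approach reorganizes this pattern of random masking to map better onto accelerator hardware without changing the regularization effect.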
