Brains and neural signals reading
Machine Learning Research

Listening to the Brain: NLP researchers used RNNs to translate brain waves into text.

AI has added an unlikely language to its catalog of translation capabilities: brain waves. Joseph Makin led a group from the University of California San Francisco that rendered a person’s neural signals as English text while the person read a sentence aloud.
Examples of recognition of real and fake images

Fake Detector: Using a discriminator network to spot deepfakes

AI’s ability to produce synthetic pictures that fool humans into believing they’re real has spurred a race to build neural networks that can tell the difference. Recent research achieved encouraging results.
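At its core, a discriminator of this kind is a binary classifier trained to output 1 for real inputs and 0 for fakes. The sketch below substitutes a toy linear model and made-up feature clusters for an actual image network, purely to illustrate the binary cross-entropy objective and training loop; none of it comes from the research described above.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy: the discriminator's training objective."""
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

rng = np.random.default_rng(0)
# Toy stand-in for image statistics: real examples cluster around +1,
# fakes around -1 along each feature (an assumption for this demo).
real = rng.normal(loc=1.0, size=(100, 4))
fake = rng.normal(loc=-1.0, size=(100, 4))
x = np.vstack([real, fake])
y = np.concatenate([np.ones(100), np.zeros(100)])  # 1 = real, 0 = fake

# Train a linear discriminator by gradient descent on BCE.
w = np.zeros(4)
b = 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(x @ w + b)))
    w -= 0.1 * x.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

p = 1 / (1 + np.exp(-(x @ w + b)))
print(f"loss: {bce(p, y):.3f}, accuracy: {np.mean((p > 0.5) == y):.2f}")
```

Real deepfake detectors replace the linear model with a deep convolutional network, but the labeling scheme and loss are the same.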
Data and information related to Contrastive Unsupervised Representations for Reinforcement Learning (CURL)

RL and Feature Extraction Combined: CURL combines reinforcement learning with contrastive learning.

Which comes first, training a reinforcement learning model or extracting high-quality features? New work avoids this chicken-or-egg dilemma by doing both simultaneously.
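The contrastive half of that combination typically uses an InfoNCE-style loss: embeddings of two views of the same observation should score higher together than embeddings of different observations. The sketch below shows a generic InfoNCE loss with plain dot-product similarity; CURL's actual objective adds details such as a momentum-updated key encoder and bilinear similarity, which are omitted here.

```python
import numpy as np

def info_nce(queries, keys, temperature=0.1):
    """InfoNCE-style contrastive loss: the positive key for each query is
    the key with the same row index; all other keys act as negatives."""
    logits = queries @ keys.T / temperature        # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 32))
z /= np.linalg.norm(z, axis=1, keepdims=True)      # unit-normalize embeddings

aligned = info_nce(z, z)                           # matched query/key pairs
shuffled = info_nce(z, z[rng.permutation(16)])     # mismatched pairs
print(aligned < shuffled)                          # loss rewards matched pairs
```

Minimizing this loss shapes the encoder's features, while the RL objective trains the policy on those same features simultaneously.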
Data and information related to people detection in a room with a Wi-Fi router

Room With a View: AI detects humans from Wi-Fi disturbances.

Your body disturbs Wi-Fi signals as you move through them. New research takes advantage of the effect to recognize the presence of people. Yang Liu and colleagues at Syracuse University detected people in a room with a Wi-Fi router by analyzing the signal.
Graphs and data related to Scan2Plan, a model that segments 3D scans of empty indoor spaces into floor plans

Finding a Floor Plan: Scan2Plan helps vacuum robots create interior maps.

Robot vacuum cleaners are pretty good at navigating rooms, but they still get stuck in tight spaces. New work takes a step toward giving them the smarts they’ll need to escape the bathroom.
Image processing technique explained

Preserving Detail in Image Inputs: Better image compression for computer vision datasets

Given real-world constraints on memory and processing time, images are often downsampled before they’re fed into a neural network. But the process removes fine details, and that degrades accuracy. A new technique squeezes images with less compromise.
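The trade-off is easy to see with two baseline ways of shrinking an image: naive decimation simply discards pixels, while area averaging folds every input pixel into the output. Both helpers below are generic illustrations of that baseline trade-off, not the paper's technique.

```python
import numpy as np

def downsample_stride(img, k):
    """Naive decimation: keep every k-th pixel and discard the rest."""
    return img[::k, ::k]

def downsample_average(img, k):
    """Area averaging: each output pixel is the mean of a k x k block,
    so fine structure is blended into the output rather than dropped."""
    h, w = img.shape
    cropped = img[:h - h % k, :w - w % k]
    return cropped.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# A fine vertical stripe pattern that naive striding destroys.
img = np.tile(np.array([1.0, 0.0]), (8, 4))  # 8x8, alternating columns
print(downsample_stride(img, 2)[0])   # [1. 1. 1. 1.]: stripes aliased to solid
print(downsample_average(img, 2)[0])  # [0.5 0.5 0.5 0.5]: stripes blended in
```

With striding, the stripe pattern vanishes entirely; averaging at least preserves its mean intensity, which is why anti-aliased resizing is the usual default before feeding images to a network.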
Graphs related to double descent

Moderating the ML Roller Coaster: A technique to avoid double descent in AI

Wait a minute — we added training data, and our model’s performance got worse?! New research offers a way to avoid so-called double descent.
Information and data related to the meta-algorithm AutoML-Zero

Beyond Neural Architecture Search: AutoML-Zero is a meta-algorithm for classification.

Faced with a classification task, machine learning engineers typically browse a catalog of architectures in search of a good performer. Researchers are exploring ways to automate that search.
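AutoML-Zero goes beyond searching over architectures: it assembles whole algorithms from primitive math operations. The toy below conveys the flavor by replacing its evolutionary search with plain random search over tiny two-operation programs; the program shape, operation set, and task are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Primitive operations the search can combine.
OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

x = rng.normal(size=100)
target = x * x + x  # the "task": rediscover this from primitives alone

# Random search over fixed-shape programs: op2(op1(x, x), x).
best_prog, best_err = None, np.inf
for _ in range(200):
    op1, op2 = rng.choice(list(OPS)), rng.choice(list(OPS))
    pred = OPS[op2](OPS[op1](x, x), x)
    err = np.mean((pred - target) ** 2)
    if err < best_err:
        best_prog, best_err = (op1, op2), err

print(best_prog, best_err)  # ('mul', 'add') fits x*x + x exactly
```

AutoML-Zero's real search space is far larger (variable-length programs with setup, predict, and learn components) and is explored with evolution rather than uniform sampling, but the keep-the-best-program loop is the same idea.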
Data and graphs related to batch normalization

Outside the Norm: Batch normalization contributes to neural network accuracy.

Batch normalization is a technique that normalizes layer outputs to accelerate neural network training. But new research shows that it has other effects that may be more important.
Data and graphs related to equations that optimize some training parameters.

Optimize Your Training Parameters: Research on finding a neural net's optimal batch size

Last week we reported on a formula to determine model width and dataset size for optimal performance. A new paper contributes equations that optimize some training parameters.
Robotic hand identifying transparent objects

Seeing the See-Through: ClearGrasp allows robots to grab see-through objects.

Glass bottles and crystal bowls bend light in strange ways. Image processing networks often struggle to separate the boundaries of transparent objects from the background that shows through them. A new method sees such items more accurately.
Data related to model that predicts molecules that are structurally unrelated to known antibiotics

Deep Learning Finds New Antibiotic: Researchers used AI to identify a promising new antibiotic.

Chemists typically develop new antibiotics by testing close chemical relatives of tried-and-true compounds like penicillin. That approach becomes less effective, though, as dangerous bacteria evolve resistance to those very chemical structures. Instead, researchers enlisted neural networks.
Graphs related to ImageNet error landscape

Rightsizing Neural Nets: An equation for predicting optimal data and model size

How much data do we want? More! How large should the model be? Bigger! How much more and how much bigger? New research estimates the impact of dataset and model sizes on neural network performance.
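Estimates of this kind typically take the form of a power law in dataset size, err ≈ a · n^(−b), which can be fit by linear regression in log-log space and then extrapolated. The sketch below fits synthetic data; the coefficient 2.0 and exponent 0.5 are made-up values for illustration, not figures from the paper.

```python
import numpy as np

# Synthetic "test error vs. dataset size" following err = a * n**-b.
# a = 2.0 and b = 0.5 are illustrative assumptions, not measured values.
n = np.array([1e3, 1e4, 1e5, 1e6])
err = 2.0 * n ** -0.5

# Fit the power law as a straight line in log-log space.
slope, intercept = np.polyfit(np.log(n), np.log(err), 1)
a, b = np.exp(intercept), -slope
print(f"fitted err ~ {a:.2f} * n^-{b:.2f}")  # recovers a=2.00, b=0.50

# Extrapolate the fitted curve to a larger dataset.
predicted = a * 1e7 ** -b
```

Fitting on small, cheap runs and extrapolating is what makes such equations practical: they let researchers budget data and compute before committing to a large training run.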
Graph related to Simple Contrastive Learning (SimCLR)

Self-Supervised Simplicity: Image classification with simple contrastive learning (SimCLR)

A simple linear classifier paired with a self-supervised feature extractor outperformed a supervised deep learning model on ImageNet, according to new research.
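The evaluation protocol behind that claim is the linear probe: freeze the self-supervised encoder, then train only a linear classifier on its output features. In the sketch below, a fixed random projection stands in for the pretrained encoder, and the data is a toy 2-D problem; both are assumptions for illustration, not SimCLR's encoder or benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen self-supervised encoder: a fixed random
# projection followed by a nonlinearity (an assumption for this demo).
W_frozen = rng.normal(size=(2, 8))
def encode(x):
    return np.tanh(x @ W_frozen)

# Toy 2-class data, linearly separable in input space.
x = rng.normal(size=(200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(int)
feats = encode(x)  # encoder weights are never updated

# Linear probe: logistic regression trained on the frozen features.
w = np.zeros(8)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    w -= 0.5 * feats.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0) == y)
print(f"linear-probe accuracy: {acc:.2f}")
```

If features learned without labels support high linear-probe accuracy, the self-supervised pretraining did the heavy lifting; that is the sense in which SimCLR's features competed with supervised training.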
Info about radioactive data

X Marks the Dataset: Radioactive data helps trace a model's training corpus.

Which dataset was used to train a given model? A new method makes it possible to see traces of the training corpus in a model’s output.
