Machine Learning Research

[Image: Illustration of the multiresolution hash encoding in 2D]

Novel Views of 3D Scenes — Pronto: Using NeRF Algorithms to Quickly Generate New 3D Views

Given a number of images of the same scene, a neural network can synthesize images from novel vantage points, but it can take hours to train. A new approach cuts training time to a few minutes.
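
The speedup comes largely from the multiresolution hash encoding pictured above: instead of feeding raw coordinates to a large network, the method looks up trainable feature vectors in hash tables at several grid resolutions and feeds their concatenation to a small MLP. Below is a minimal 2D sketch of that lookup; the table size, resolutions, and hashing constants are illustrative rather than the published settings.

    import numpy as np

    # Minimal sketch of a 2D multiresolution hash encoding.
    # All sizes below are illustrative, not the published configuration.
    NUM_LEVELS = 4                       # grid resolutions, coarse to fine
    TABLE_SIZE = 2 ** 14                 # entries per hash table
    PRIMES = (1, 2654435761)             # spatial-hashing constants

    tables = [np.random.randn(TABLE_SIZE, 2) * 1e-4   # 2 features per entry
              for _ in range(NUM_LEVELS)]

    def hash_coords(ix, iy):
        # Hash integer grid coordinates into a table index.
        return (ix * PRIMES[0] ^ iy * PRIMES[1]) % TABLE_SIZE

    def encode(x, y):
        # Concatenate bilinearly interpolated features from every level.
        features = []
        for level, table in enumerate(tables):
            res = 16 * 2 ** level                    # resolution at this level
            gx, gy = x * res, y * res
            x0, y0 = int(gx), int(gy)
            tx, ty = gx - x0, gy - y0                # interpolation weights
            f = 0.0
            for dx, wx in ((0, 1 - tx), (1, tx)):
                for dy, wy in ((0, 1 - ty), (1, ty)):
                    f = f + wx * wy * table[hash_coords(x0 + dx, y0 + dy)]
            features.append(f)
        return np.concatenate(features)              # input to a small MLP
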
[Image: Charts showing benchmark results on medium-sized datasets]

When Trees Outdo Neural Networks: Decision Trees Perform Best on Most Tabular Data

While neural networks perform well on image, text, and audio datasets, they fall behind decision trees and their variations for tabular datasets. New research looked into why.
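
The gap is easy to see with stock scikit-learn. The snippet below pits gradient-boosted trees against a multilayer perceptron on a synthetic stand-in for a medium-sized tabular dataset; it illustrates the comparison, not the paper's benchmark protocol.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for a medium-sized tabular dataset.
    X, y = make_classification(n_samples=10_000, n_features=20,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    trees = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    mlp = make_pipeline(
        StandardScaler(),                 # neural nets need scaled inputs
        MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300,
                      random_state=0),
    ).fit(X_tr, y_tr)

    print("gradient-boosted trees:", trees.score(X_te, y_te))
    print("multilayer perceptron: ", mlp.score(X_te, y_te))
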
[Image: Network architecture of Reasoner]

What the Missing Frames Showed: Machine Learning Describes Masked Video Events

Neural networks can describe in words what’s happening in pictures and videos — but can they make sensible guesses about things that happened before or will happen afterward? Researchers probed this ability.
[Image: Dependency between compute budget and number of parameters]

Right-Sizing Models for the Dataset: Finding the Best Data-to-Parameter Ratio for NLP Models

The route to improving transformer-based language models like GPT-3 and Gopher, which are trained on immense quantities of text scraped from the web, has been to increase their size. But research shows that, given a processing budget, bigger doesn’t necessarily mean better.
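
The study's rule of thumb, roughly: training a transformer with N parameters on D tokens costs about 6·N·D FLOPs, and a fixed budget is best spent scaling N and D together, at around 20 tokens per parameter. A back-of-the-envelope sketch, treating both figures as the paper's approximations:

    import math

    def compute_optimal(flops_budget, tokens_per_param=20):
        # Cost is roughly 6 * N * D FLOPs; with D = k * N, solve for N.
        n_params = math.sqrt(flops_budget / (6 * tokens_per_param))
        return n_params, tokens_per_param * n_params

    # A budget of 5.76e23 FLOPs yields ~7e10 parameters and ~1.4e12 tokens,
    # in line with the 70-billion-parameter compute-optimal model the
    # researchers trained on 1.4 trillion tokens.
    n, d = compute_optimal(5.76e23)
    print(f"{n:.2e} parameters, {d:.2e} tokens")
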
[Image: Plot demonstrating the relative sizes of parallel and monolingual examples]

Massively Multilingual Translation: Machine Learning Model Trained to Translate 1,000 Languages

Recent work showed that models for multilingual machine translation can increase the number of languages they translate by scraping the web for pairs of equivalent sentences in different languages. A new study radically expanded the language repertoire through training on untranslated web text.
[Image: Technical components of No Language Left Behind and how they fit together]

Massively Multilingual Translation: NLP Model Translates 200 Different Languages

Sentence pairs that have equivalent meanings in different languages — typically used to train machine translation systems — have been available in sufficient quantities for only around 100 languages. New work doubled that number and produced a more capable model.
[Image: Example of a video produced from a story-like description]

Long-Form Videos from Text Stories: Google's Phenaki Generates Video from Story-Like Descriptions

Only a week ago, researchers unveiled a system that generates a few seconds of video based on a text prompt. New work enables a text-to-video system to produce an entire visual narrative from several sentences of text.
[Image: Illustration of the Dialogue Transformer Language Model (DLM)]

The Sound of Conversation: AI Learns to Mimic Conversational Pauses and Interruptions

In spoken conversation, people naturally take turns amid interjections and other patterns that aren’t strictly verbal. A new approach generated natural-sounding audio dialogs without training on text transcriptions that mark when one party should stop speaking and the other should chime in.
[Image: Panda on a swing]

Text to Video Without Text-Video Training Data: Make-A-Video, an AI System from Meta, Generates Video from Text

Text-to-image generators like DALL·E 2, Midjourney, and Stable Diffusion are winning art contests and worrying artists. A new approach brings the magic of text-to-image generation to video.
[Image: Animation showing 3 main types of data augmentation and random cropping of a picture]

Cookbook for Vision Transformers: A Formula for Training Vision Transformers

Vision Transformers (ViTs) are overtaking convolutional neural networks (CNNs) in many vision tasks, but procedures for training them are still tailored for CNNs. New research investigated how various training ingredients affect ViT performance.
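
One such ingredient is the augmentation shown above. The snippet below sketches a plausible ViT-style pipeline plus a mixup helper using torchvision; the particular transforms and settings are illustrative, not the exact recipe the researchers converged on.

    import torch
    from torchvision import transforms

    # Illustrative training-time augmentations for a ViT.
    train_transform = transforms.Compose([
        transforms.RandomResizedCrop(224),       # random cropping
        transforms.RandomHorizontalFlip(),
        transforms.RandAugment(),                # policy-based augmentation
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def mixup(images, labels, alpha=0.2):
        # Blend random pairs of examples; return both label sets and the
        # mixing weight so the loss can be blended the same way.
        lam = torch.distributions.Beta(alpha, alpha).sample()
        perm = torch.randperm(images.size(0))
        mixed = lam * images + (1 - lam) * images[perm]
        return mixed, labels, labels[perm], lam
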
[Image: Animated overview of PP-Matting]

Automating Mattes for Visual Effects: New ML Method Produces Image Mattes More Easily

Researchers at Baidu introduced PP-Matting, an architecture that, given an image, estimates the transparency of pixels surrounding foreground objects to create mattes without requiring additional input.
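
A matte is an alpha channel: a per-pixel transparency map that lets compositors place a foreground on a new background via I = α·F + (1 − α)·B. The snippet below applies that standard equation to a matte such as PP-Matting might predict; it is a generic compositing step, not Baidu's architecture.

    import numpy as np

    def composite(foreground, background, alpha):
        # Standard alpha compositing: I = alpha * F + (1 - alpha) * B.
        # foreground, background: HxWx3 arrays in [0, 1]; alpha: HxW matte.
        a = alpha[..., np.newaxis]
        return a * foreground + (1.0 - a) * background

    # Toy example with random images and a uniform half-transparent matte.
    fg = np.random.rand(64, 64, 3)
    bg = np.random.rand(64, 64, 3)
    out = composite(fg, bg, np.full((64, 64), 0.5))
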
[Image: Overview of Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC)]

Update Any Language Model: New Method to Update Pretrained Language Models

The ability to update language models is essential to incorporate new information and correct undesirable behaviors. Previous methods are unwieldy and often fail as the amount of new data increases. New work offers a workaround.
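
The method, SERAC, sidesteps retraining by keeping edits in an external memory: a learned scope classifier decides whether an incoming query falls under any stored edit, routing it to a counterfactual model if so and to the untouched base model otherwise. The toy sketch below shows only that routing; the word-overlap test and the model callables are stand-ins for the learned components.

    # Toy SERAC-style routing. edit_memory holds (prompt, new_answer) pairs.
    edit_memory = []

    def in_scope(query, edit, threshold=0.5):
        # Stand-in for the learned scope classifier: crude word overlap.
        q, e = set(query.lower().split()), set(edit[0].lower().split())
        return len(q & e) / max(len(q | e), 1) > threshold

    def answer(query, base_model, counterfactual_model):
        # Route in-scope queries to the counterfactual model; leave the
        # frozen base model untouched for everything else.
        for edit in edit_memory:
            if in_scope(query, edit):
                return counterfactual_model(query, edit)
        return base_model(query)
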
[Image: Different self-attention mechanisms used by transformer-based AI models]

Attention to Rows and Columns: Altering Transformers' Self-Attention Mechanism for Greater Efficiency

A new approach alters transformers' self-attention mechanism to balance computational efficiency with performance on vision tasks.
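
In generic form, the idea is to stop every position from attending to all H×W others (cost quadratic in H·W) and instead attend within each row, then within each column, for a cost of roughly H·W·(H+W). A minimal PyTorch sketch of such row-and-column attention follows; it illustrates the general scheme, not the paper's exact mechanism.

    import torch
    import torch.nn as nn

    class RowColumnAttention(nn.Module):
        # Self-attention within rows, then within columns of a feature map.
        def __init__(self, dim, heads=4):
            super().__init__()
            self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):                        # x: (B, H, W, C)
            b, h, w, c = x.shape
            rows = x.reshape(b * h, w, c)            # each row is a sequence
            rows, _ = self.row_attn(rows, rows, rows)
            x = rows.reshape(b, h, w, c)
            cols = x.transpose(1, 2).reshape(b * w, h, c)   # each column
            cols, _ = self.col_attn(cols, cols, cols)
            return cols.reshape(b, w, h, c).transpose(1, 2)

    x = torch.randn(2, 16, 16, 64)
    print(RowColumnAttention(64)(x).shape)           # (2, 16, 16, 64)
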

Object-Detection Transformers Simplified: New Research Improves Object Detection With Vision Transformers

ViTDet, a new system from Facebook, adds an object detector to a plain pretrained transformer.
[Image: Overall architecture of GEM]

What a Molecule’s Structure Reveals: Baidu Creates AI to Classify Molecular Properties

Researchers at Baidu trained a modified graph neural network (GNN) on a dataset of 18 million molecules to predict molecular properties.
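
For orientation, a graph neural network treats a molecule as a graph of atoms connected by bonds and updates each atom's features from its neighbors. The snippet below shows one vanilla message-passing step; GEM itself goes further, encoding 3D geometry such as bond lengths and angles.

    import numpy as np

    def message_passing(node_feats, edges, w_self, w_neigh):
        # One generic GNN step: each atom aggregates features from its
        # bonded neighbors (vanilla scheme, not GEM's geometry encoding).
        new_feats = node_feats @ w_self
        for src, dst in edges:
            new_feats[dst] += node_feats[src] @ w_neigh
        return np.tanh(new_feats)

    # Toy molecule: 3 atoms, 2 bonds (each undirected bond listed both ways).
    feats = np.random.rand(3, 8)
    bonds = [(0, 1), (1, 0), (1, 2), (2, 1)]
    w_s, w_n = np.random.rand(8, 8), np.random.rand(8, 8)
    h = message_passing(feats, bonds, w_s, w_n)
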
