Machine Learning Research


Interpreting Image Edit Instructions: Meta’s Emu Edit improves text-to-image generation with task classification.

The latest text-to-image generators can alter images in response to a text prompt, but their outputs often don’t accurately reflect the text. They do better if, in addition to a prompt, they’re told the general type of alteration they’re expected to make.
Brain-Controlled Robots Get More Versatile: NOIR, a system to control robots via electroencephalogram for everyday tasks

Brain-to-computer interfaces that enable users to control robots with their thoughts typically execute a single type of task such as reaching and grasping. Researchers designed a system that responds to a variety of intentions.
Streamlined Inference: Deja Vu, a method that boosts LLM speed by activating only essential neural parts

It’s not necessary to activate all parts of a large language model to process a given input. Using only the necessary parts saves processing.
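The idea can be sketched with a toy MLP layer: a predictor chooses the k most useful hidden neurons for the current input, and only their weights enter the computation. This is a minimal illustration of contextual sparsity, using an oracle that peeks at the pre-activations in place of Deja Vu's learned predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, k = 8, 32, 4  # keep only k of 32 hidden neurons

W_in = rng.normal(size=(d_hidden, d_model))
W_out = rng.normal(size=(d_model, d_hidden))
x = rng.normal(size=d_model)

# Dense pass: every hidden neuron is computed.
dense = W_out @ np.maximum(W_in @ x, 0.0)

# Sparse pass: an oracle predictor keeps the k largest pre-activations;
# only those rows of W_in and columns of W_out are touched.
scores = W_in @ x
idx = np.argsort(scores)[-k:]
sparse = W_out[:, idx] @ np.maximum(W_in[idx] @ x, 0.0)
```

With a ReLU, neurons whose pre-activation is negative contribute nothing, so if k covers all positive pre-activations the sparse pass is exact; smaller k trades accuracy for compute.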
Predicting Scientific Discoveries: AI predicts scientific breakthroughs using social graphs

A new AI method directs scientists toward promising avenues of inquiry.
A 3D Model From One 2D Image: Stable Video 3D (SV3D), a method to generate a 3D model from a single image

Video diffusion provides a new basis for generating 3D models.
Tuning LLMs for Better RAG: Meta’s RA-DIT boosts language model output by optimizing text retrieval

Retrieval-augmented generation (RAG) enables large language models to generate better output by retrieving documents that are relevant to a user’s prompt. Fine-tuning further improves RAG performance.
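The retrieval half of RAG can be sketched in a few lines: rank documents by similarity to the query and prepend the top hits to the prompt. The bag-of-words cosine below is a stand-in for the learned embeddings a real retriever would use:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k documents most similar to the query.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "Mamba is a state space model architecture.",
    "RAG retrieves documents relevant to a prompt.",
    "Robots can play football using reinforcement learning.",
]
context = retrieve("how does RAG retrieve relevant documents", docs, k=1)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: how does RAG work?"
```

RA-DIT's contribution is to fine-tune both the language model (to use retrieved context well) and the retriever (to fetch context the model benefits from), rather than treating retrieval as a fixed preprocessing step.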
A Transformer Alternative Emerges: Mamba, a new approach that may outperform transformers

An architectural innovation improves upon transformers, at least at sizes up to 2 billion parameters.
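At the heart of Mamba is a state space recurrence rather than attention. The sketch below shows a plain linear state space model, which processes each token in constant time regardless of sequence length; Mamba's key addition, input-dependent (selective) parameters, is omitted for brevity:

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Linear state space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.

    Unlike attention, the cost per step does not grow with sequence length:
    all history is compressed into the fixed-size state h."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:
        h = A @ h + B * x
        ys.append(C @ h)
    return np.array(ys)

A = np.array([[0.9, 0.0],
              [0.1, 0.5]])   # state transition
B = np.array([1.0, 0.0])     # input projection
C = np.array([0.0, 1.0])     # output projection
ys = ssm_scan(A, B, C, [1.0, 0.0, 0.0, 0.0])  # response to an impulse
```

In Mamba, A, B, and C vary with the input, letting the model decide what to remember or forget at each step.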
More Factual LLMs: FactTune, a method to fine-tune LLMs for factual accuracy without human feedback

Large language models sometimes generate false statements. New work makes them more likely to produce factual output.
Robo-Football From Simulation to Reality: Reinforcement learning powers humanoid robots to play football

Humanoid robots can play football (known as soccer in the United States) in the real world, thanks to reinforcement learning.
Cutting the Cost of Pretrained Models: FrugalGPT, a method to cut AI costs and maintain quality

Research aims to help users select large language models that minimize expenses while maintaining quality.
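One of FrugalGPT's strategies is an LLM cascade: query models from cheapest to most expensive and stop as soon as an answer scores as reliable. The models and scorer below are toy stand-ins, not real APIs:

```python
def cascade(prompt, models, scorer, threshold=0.8):
    """Query models cheapest-first; return the first answer the scorer
    deems reliable, falling back to the last (most capable) model."""
    name, answer = None, None
    for name, model in models:
        answer = model(prompt)
        if scorer(prompt, answer) >= threshold:
            break  # good enough; no need to pay for a bigger model
    return name, answer

# Toy stand-ins: a cheap model that knows one fact, and a costlier one.
cheap = lambda p: "Paris" if "France" in p else "unsure"
costly = lambda p: "Canberra" if "Australia" in p else "Paris"
score = lambda p, a: 0.0 if a == "unsure" else 0.9

used, ans = cascade("capital of Australia?",
                    [("cheap", cheap), ("costly", costly)], score)
```

Easy queries never reach the expensive model, so average cost drops while hard queries still get the strongest answer available.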
Toward Safer, More Helpful Models

The technique known as reinforcement learning from human feedback fine-tunes large language models to be helpful and avoid generating harmful responses such as suggesting illegal or dangerous activities. An alternative method streamlines this approach and achieves better results.
Learning Language by Exploration: Agent develops language skills through simulated exploration tasks

Machine learning models typically learn language by training on tasks like predicting the next word in a given text. Researchers trained a language model in a less focused, more human-like way.
Schooling Language Models in Math: GOAT (Good at Arithmetic Tasks), a method to boost large language models' arithmetic abilities

Large language models are not good at math. Tiedong Liu and Bryan Kian Hsiang Low at the National University of Singapore devised a method to fine-tune them for arithmetic tasks.
Human Feedback Without Reinforcement Learning: Direct Preference Optimization (DPO) fine-tunes pretrained large language models on human preferences without the cumbersome step of reinforcement learning.

Reinforcement learning from human feedback (RLHF) is widely used to fine-tune pretrained models to deliver outputs that align with human preferences. New work aligns pretrained models without the cumbersome step of reinforcement learning.
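DPO replaces the reward model and RL loop with a simple classification-style loss on preference pairs: push the policy's margin over a frozen reference model higher on the preferred response than on the rejected one. A minimal sketch; the log-probabilities and β value below are made up for illustration:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    logp_* are the policy's log-probs of the chosen/rejected responses;
    ref_* are the frozen reference model's log-probs of the same responses."""
    margin = beta * ((logp_chosen - ref_chosen)
                     - (logp_rejected - ref_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log(sigmoid(margin))

# A policy that favors the chosen response more than the reference does
# incurs a lower loss than one that treats both responses alike.
better = dpo_loss(logp_chosen=-4.0, logp_rejected=-9.0,
                  ref_chosen=-5.0, ref_rejected=-5.0)
worse = dpo_loss(logp_chosen=-5.0, logp_rejected=-5.0,
                 ref_chosen=-5.0, ref_rejected=-5.0)
```

Minimizing this loss over a preference dataset trains the policy directly, with no sampling loop or separate reward model.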
Swiss Army LLM

Language models equipped for retrieval-augmented generation can retrieve text from a database to improve their output. Further work extends this capability to retrieve information from any application that offers an API.