Benchmarks

17 Posts

Sample-Efficient Training for Robots: Reinforcement learning from human feedback to train robots

Training an agent that controls a robot arm to perform a task — say, opening a door — that involves a sequence of motions (reach, grasp, turn, pull, release) can take from tens of thousands to millions of examples...
When Trees Outdo Neural Networks: Decision Trees Perform Best on Most Tabular Data

While neural networks perform well on image, text, and audio datasets, they fall behind decision trees and their variations for tabular datasets. New research looked into why.
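The gap the researchers describe is easy to probe at small scale. Below is a minimal sketch (not the paper's experiment) that pits a gradient-boosted tree ensemble against a small neural network on synthetic tabular data; the dataset size and hyperparameters are illustrative assumptions.

```python
# Illustrative comparison: tree ensemble vs. neural network on tabular data.
# All settings here are assumptions for demonstration, not from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic tabular data: 20 numeric features, 8 of them informative.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

tree = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_train, y_train)

print("trees:", tree.score(X_test, y_test))
print("mlp:  ", mlp.score(X_test, y_test))
```

On data like this the outcome can go either way; the paper's point is that across many real tabular datasets, the tree-based models tend to win.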
Humanized Training for Robot Arms: New Research Improves Robot Performance and Adaptability

Robots trained via reinforcement learning usually study videos of robots performing the task at hand. A new approach used videos of humans to pre-train robotic arms.
Toward Next-Gen Language Models: New Benchmarks Test the Limits of Large Language Models

A new benchmark aims to raise the bar for large language models. Researchers at 132 institutions worldwide introduced the Beyond the Imitation Game benchmark (BIG-bench), which includes tasks that humans perform well but current state-of-the-art models don’t.
AI Progress Report: Stanford University's fifth annual AI Report for 2022

A new study showcases AI’s growing importance worldwide. What’s new: The fifth annual AI Index from Stanford University’s Institute for Human-Centered AI documents rises in funding, regulation, and performance.
Transformer Variants Head to Head: A benchmark for comparing different AI transformers.

The transformer architecture has inspired a plethora of variations. Yet researchers have used a patchwork of metrics to evaluate their performance, making them hard to compare. New work aims to level the playing field.
Computation as a National Resource: An effort to estimate computing capacity for 37 nations.

How much processing power do various nations have on hand to drive their AI strategy? An international trade group aims to find out. The Organisation for Economic Co-operation and Development (OECD) is launching an effort to measure the computing capacity available in countries around the world.
Prosperity of the Commons: Tools from MLCommons for improved model development

A new consortium of companies, schools, and research labs is building open tools for next-generation machine learning. MLCommons aims to foster innovation in machine learning by developing new benchmarks, datasets, and best practices.
Dynamic Benchmarks: A platform for fooling language models

Benchmarks provide a scientific basis for evaluating model performance, but they don’t necessarily map well to human cognitive abilities. Facebook aims to close the gap through a dynamic benchmarking method that keeps humans in the loop.
Do Muppets Have Common Sense?: The Bert NLP model scores high on a common-sense test.

Two years after it pointed a new direction for language models, Bert still hovers near the top of several natural language processing leaderboards. A new study considers whether Bert simply excels at tracking word order or learns something closer to common sense.
Optimizer Shootout: An evaluation of 14 deep learning optimizers

Everyone has a favorite optimization method, but it’s not always clear which one works best in a given situation. New research aims to establish a set of benchmarks. Researchers evaluated 14 popular optimizers using the Deep Optimization Benchmark Suite, which some of them introduced last year.
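The spirit of such a benchmark can be sketched in miniature: run different optimizers on one shared objective and compare final losses under matched step budgets. The toy quadratic, step counts, and learning rates below are illustrative assumptions, not the benchmark suite itself.

```python
# Miniature optimizer shootout: SGD vs. Adam on a shared toy objective.
# Objective, learning rates, and step counts are illustrative assumptions.
import numpy as np

def loss(w):   # ill-conditioned quadratic: curvature 100 vs. 1
    return 0.5 * (100 * w[0] ** 2 + w[1] ** 2)

def grad(w):
    return np.array([100 * w[0], w[1]])

def run_sgd(steps=200, lr=0.005):
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w -= lr * grad(w)
    return loss(w)

def run_adam(steps=200, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    w = np.array([1.0, 1.0])
    m, v = np.zeros(2), np.zeros(2)
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g ** 2     # second-moment estimate
        mhat = m / (1 - b1 ** t)           # bias correction
        vhat = v / (1 - b2 ** t)
        w -= lr * mhat / (np.sqrt(vhat) + eps)
    return loss(w)

print("SGD final loss: ", run_sgd())
print("Adam final loss:", run_adam())
```

A real benchmark suite extends this idea to many realistic training problems and tuning budgets, which is what makes its comparisons informative.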
Built for Speed: Nvidia topped MLPerf's training benchmarks in 2020.

Chips specially designed for AI are becoming much faster at training neural networks, judging from recent trials. MLPerf, an organization that’s developing standards for hardware performance in machine learning tasks, released results from its third benchmark competition.
Running Fast, Standing Still: Some state-of-the-art machine learning progress is illusory.

Machine learning researchers report better and better results, but some of that progress may be illusory. Some models that appear to set a new state of the art haven’t been compared properly to their predecessors, Science News reports based on several published surveys.
Toward Open-Domain Chatbots: Meena Scores High on a System for Grading NLP Chatbots

Progress in language models is spawning a new breed of chatbots and, unlike their narrow-domain forebears, they have the gift of gab. Recent research tests the limits of conversational AI.
What Language Models Know

Watson set a high bar for language understanding in 2011, when it famously whipped human competitors in the televised trivia game show Jeopardy! IBM’s special-purpose AI required around $1 billion to develop. Research suggests that today’s best language models can accomplish similar tasks right off the shelf.