Few-shot Learning

9 Posts

Toward Next-Gen Language Models: New Benchmarks Test the Limits of Large Language Models

A new benchmark aims to raise the bar for large language models. Researchers at 132 institutions worldwide introduced the Beyond the Imitation Game benchmark (BIG-bench), which includes tasks that humans perform well but current state-of-the-art models don’t.
Ask Me in a Different Way: Prompt Engineering Improves Few-Shot Learning Results

Pretrained language models like GPT-3 have shown notable proficiency in few-shot learning. Given a prompt that includes a few example questions and answers (the shots) plus an unanswered question (the task), such models can generate an accurate answer.
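The few-shot setup described above can be sketched in a few lines of code. This is a minimal, illustrative example of assembling a prompt from shots plus a task; the helper name and the Q/A examples are this sketch's own, not from the article.

```python
# Sketch of few-shot prompt construction: each (question, answer) pair is a
# "shot"; the final unanswered question is the task the model must complete.

def build_few_shot_prompt(shots, question):
    """Format example Q/A pairs plus a new question into one prompt string."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in shots]
    blocks.append(f"Q: {question}\nA:")  # left open for the model to answer
    return "\n\n".join(blocks)

shots = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_few_shot_prompt(shots, "What is the capital of Italy?")
print(prompt)
```

The resulting string would be sent as-is to a pretrained language model, which continues the text after the final "A:"; prompt engineering, the subject of this post, is largely about how such strings are worded and ordered.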
Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.
Pattern for Efficient Learning: FLUTE (Few-shot Learning with a Universal Template), a training method for few-shot learning in computer vision.

Getting high accuracy out of a classifier trained on a small number of examples is tricky. You might train the model on several large-scale datasets prior to few-shot training, but what if the few-shot dataset includes novel classes? A new method performs well even in that case.
Double Check for Defamation: CaliberAI uses NLP to scan for possible legal defamation.

A libel-detection system could help news outlets and social media companies stay out of legal hot water. CaliberAI, an Irish startup, scans text for statements that could be considered defamatory, Wired reported.
Better Language Through Vision: Study improves BERT performance using visual tokens (vokens).

For children, associating a word with a picture that illustrates it helps them learn the word’s meaning. Research aims to do something similar for machine learning models: researchers improved a BERT model’s performance on some language tasks by training it on a large dataset of image-word pairs.
Deepfakes for Good: Tencent on the commercial value of deepfakes

A strategy manifesto from one of China’s biggest tech companies declares, amid familiar visions of ubiquitous AI, that deepfakes are more boon than bane.
Small Data the Simple Way: A simple training technique that can outperform specialized few-shot methods

Few-shot learning seeks to build models that adapt to novel tasks based on small numbers of training examples. This sort of learning typically involves complicated techniques, but researchers achieved state-of-the-art results using a simpler approach.
Packing Robots Get a Grip: This robot arm can handle over 10,000 different objects.

Robots are moving into a job that traditionally required the human touch. A commercial warehouse that ships electrical supplies deployed AI-driven robotic arms from Covariant, a high-profile Silicon Valley robotics firm.
