BERT

34 Posts

Dataset FOLIO example based on the Wild Turkey Wikipedia page

Language Models Defy Logic: Large NLP models struggle with logical reasoning.

Who would disagree that, if all people are mortal and Socrates is a person, Socrates must be mortal? GPT-3, for one. Recent work shows that bigger language models are not necessarily better when it comes to logical reasoning.
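
In first-order logic, that syllogism has a standard textbook formalization (shown here for illustration; it is not drawn from the FOLIO dataset itself):

```latex
\forall x\,\big(\mathrm{Person}(x) \rightarrow \mathrm{Mortal}(x)\big),\quad
\mathrm{Person}(\mathrm{socrates})
\;\vdash\;
\mathrm{Mortal}(\mathrm{socrates})
```

One step of universal instantiation followed by modus ponens yields the conclusion — exactly the kind of inference at which, the study found, bigger models don't necessarily improve.
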
Animated chart shows how AI can avoid mistaking an image's subject for its context.

Taming Spurious Correlations: New Technique Helps AI Avoid Classification Mistakes

When a neural network learns image labels, it may confuse a background item for the labeled object. New research avoids such mistakes.
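
The teaser doesn't spell out the paper's method, but one common way to blunt background shortcuts is to optimize the worst-performing group of examples rather than the average, where groups pair each label with different contexts. A minimal PyTorch sketch of that group-reweighting idea (not necessarily this paper's technique):

```python
import torch

def worst_group_loss(losses: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    """Group-DRO-style objective over per-example losses.

    losses: shape (batch,); groups: integer ids, shape (batch,),
    e.g., hypothetical (label, background) combinations.
    """
    group_means = [losses[groups == g].mean() for g in groups.unique()]
    # Minimizing the worst group's loss penalizes shortcuts (such as
    # classifying by background) that fail on minority groups.
    return torch.stack(group_means).max()
```
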
A flowchart shows how a jury learning method reduces annotator bias in machine learning models.

Choose the Right Annotators: Jury-Learning Helps Remove Bias from NLP Models

A new machine learning method attempts to account for biases that may be held by certain subsets of labelers.
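
In rough terms, jury learning models each annotator individually, then aggregates the predicted labels of a chosen "jury" instead of a single majority vote. A sketch under that reading, with a hypothetical model(x, annotator_id) interface rather than the paper's API:

```python
import torch

def jury_predict(model, x, juror_ids: torch.Tensor) -> torch.Tensor:
    """Aggregate per-annotator predictions over a selected jury.

    model(x, j) is assumed to return how annotator j would label x,
    as a distribution over classes with shape (classes,).
    """
    votes = torch.stack([model(x, j) for j in juror_ids])  # (jurors, classes)
    # Choosing who sits on the jury determines whose judgments the
    # final label reflects -- the lever for controlling bias.
    return votes.mean(dim=0)
```
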
A series of graphs show the carbon emissions associated with training AI models.

Cutting the Carbon Cost of Training: A New Tool Helps NLP Models Lower Their Greenhouse Gas Emissions

You can reduce your model’s carbon emissions by being choosy about when and where you train it.
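
The arithmetic behind that choice: estimated emissions are roughly training energy times the local grid's carbon intensity at training time, so shifting a job across regions or hours changes its footprint. A toy illustration with made-up intensity figures:

```python
# Hypothetical carbon intensities in kg CO2 per kWh; real figures
# would come from grid data for each region and hour.
INTENSITY = {
    ("us-west", 3): 0.12,   # overnight, hydro-heavy grid
    ("us-west", 15): 0.25,
    ("us-east", 3): 0.38,
    ("us-east", 15): 0.45,
}

def best_slot(energy_kwh: float):
    """Choose the (region, hour) that minimizes estimated emissions."""
    slot = min(INTENSITY, key=INTENSITY.get)
    return slot, energy_kwh * INTENSITY[slot]

slot, kg_co2 = best_slot(energy_kwh=120.0)
print(slot, round(kg_co2, 1))  # ('us-west', 3) 14.4
```
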
Illustration of giant Christmas tree in a town plaza

Trillions of Parameters: Are AI models with trillions of parameters the new normal?

The trend toward ever-larger models crossed the threshold from immense to ginormous. Google kicked off 2021 with Switch Transformer, the first published work to exceed a trillion parameters, weighing in at 1.6 trillion.
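
Switch Transformer reaches that count via mixture-of-experts layers that route each token to a single expert, so parameters scale with the number of experts while per-token compute stays roughly flat. A condensed PyTorch sketch of top-1 routing (illustrative only; it omits the paper's load-balancing loss):

```python
import torch
import torch.nn as nn

class SwitchLayer(nn.Module):
    """Top-1 expert routing in the spirit of Switch Transformer."""
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))

    def forward(self, x):                       # x: (tokens, d_model)
        gates = self.router(x).softmax(dim=-1)  # routing probabilities
        top1 = gates.argmax(dim=-1)             # one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                # Scaling by the gate keeps routing differentiable.
                out[mask] = gates[mask, i:i+1] * expert(x[mask])
        return out
```
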
Animations that shows how the Google Search Algorithm works with Multimodal AI

Search Goes Multimodal: Google Upgrades its Search Algorithm with Multimodal AI

Google will upgrade its search engine with a new model that tracks the relationships between words, images, and, in time, videos — the first fruit of its latest research into multimodal machine learning and multilingual language modeling.
Animation showing gMLP, a simple architecture that performed some language and vision tasks as well as transformers

Perceptrons Are All You Need: Google Brain's Multi-Layer Perceptron Rivals Transformers

The paper that introduced the transformer famously declared, “Attention is all you need.” To the contrary, new work shows you may not need transformer-style attention at all. Hanxiao Liu and colleagues at Google Brain propose gMLP, an architecture built from multi-layer perceptrons that performed some language and vision tasks as well as transformers.
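
At its core, gMLP replaces attention with a Spatial Gating Unit: channel projections plus a single learned linear layer that mixes information across token positions and gates half the channels. A condensed PyTorch sketch of one block (dimensions are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """Split channels; mix one half across positions; gate the other."""
    def __init__(self, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        self.spatial = nn.Linear(seq_len, seq_len)  # token mixing, no attention

    def forward(self, x):                  # x: (batch, seq_len, d_ffn)
        u, v = x.chunk(2, dim=-1)
        v = self.spatial(self.norm(v).transpose(1, 2)).transpose(1, 2)
        return u * v                       # multiplicative gating

class GMLPBlock(nn.Module):
    def __init__(self, d_model: int = 256, d_ffn: int = 1024, seq_len: int = 128):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.in_proj = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.out_proj = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):                  # x: (batch, seq_len, d_model)
        y = F.gelu(self.in_proj(self.norm(x)))
        return x + self.out_proj(self.sgu(y))
```
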
Series of images showing some of the findings of the new study by researchers at Stanford’s Human AI Institute

Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.
Architecture of vision-language tasks

One Model for Vision-Language: A general-purpose AI for vision and language tasks.

Researchers have proposed task-agnostic architectures for image classification and for language processing. New work proposes a single architecture for vision-language tasks.
Libel-detection system from CaliberAI.

Double Check for Defamation: CaliberAI uses NLP to scan for possible legal defamation.

A libel-detection system could help news outlets and social media companies stay out of legal hot water. CaliberAI, an Irish startup, scans text for statements that could be considered defamatory, Wired reported.
CogView home website

Large Language Models for Chinese: A brief overview of the WuDao family

Researchers unveiled a challenger to the reigning large language model GPT-3. The Beijing Academy of Artificial Intelligence, a research collective funded by the Chinese government, described four models collectively called WuDao, Synced Review reported.
Tag-Retrieve-Compose-Synthesize (TReCS)

Pictures From Words and Gestures: AI model generates images as users describe a scene and mouse over a blank screen.

A new system combines verbal descriptions and crude mouse traces to visualize complex scenes. Google researchers led by Jing Yu Koh proposed Tag-Retrieve-Compose-Synthesize (TReCS), a system that generates photorealistic images as users describe what they want to see while mousing around a blank screen.
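
The system's name lays out its pipeline: tag the words of the description, retrieve matching segmentation masks, compose them along the mouse trace, and synthesize the final image. A purely structural sketch in which every helper is a hypothetical stub, not the paper's implementation:

```python
def tag_words(description):
    # 1. Tag: assign a semantic label to each word (stub: the words themselves).
    return description.split()

def retrieve_masks(tags):
    # 2. Retrieve: look up segmentation masks matching each tag.
    return [{"tag": t, "mask": None} for t in tags]

def compose(masks, trace):
    # 3. Compose: arrange the masks along the user's mouse trace.
    return list(zip(masks, trace))

def synthesize(layout):
    # 4. Synthesize: render a photorealistic image from the layout.
    return f"rendered scene with {len(layout)} elements"

trace = [(0.1 * i, 0.2 * i) for i in range(4)]
print(synthesize(compose(retrieve_masks(tag_words("turkey in a field")), trace)))
```
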
Margaret Mitchell, Marian Croak and Timnit Gebru pictured

Google Overhauls Ethical AI Team: What Google is doing after Timnit Gebru's departure.

Having dismissed two key researchers, Google restructured its efforts in AI ethics. Marian Croak, an accomplished software engineer and vice president of engineering at Google, will lead a new center of expertise in responsible AI, the company announced.
System Oscar+ working

Sharper Eyes For Vision+Language: AI research shows improved image and text matching.

Models that interpret the interplay of words and images tend to be trained on richer bodies of text than of images. Recent research worked toward giving such models a more balanced knowledge of the two domains.
Graphs and data related to visualized tokens (or vokens)

Better Language Through Vision: Study improved BERT performance using visual tokens.

Associating a word with a picture that illustrates it helps children learn the word’s meaning. New research aims to do something similar for machine learning models: researchers improved a BERT model’s performance on some language tasks by training it on a large dataset of image-word pairs.
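
Mechanically, one way to read this: alongside masked-language modeling, each token gets an auxiliary target — the id of its paired image (its visual token, or "voken") — and the two losses are summed. A sketch under that assumption, with illustrative sizes and layer names:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, word_vocab, n_vokens = 768, 30522, 50000   # illustrative sizes
mlm_head = nn.Linear(d_model, word_vocab)
voken_head = nn.Linear(d_model, n_vokens)

def joint_loss(hidden, word_targets, voken_targets, alpha=1.0):
    """hidden: (tokens, d_model) encoder outputs from a BERT model."""
    mlm = F.cross_entropy(mlm_head(hidden), word_targets)     # predict masked words
    vok = F.cross_entropy(voken_head(hidden), voken_targets)  # predict paired image ids
    return mlm + alpha * vok   # alpha balances the visual supervision
```
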