BERT

33 Posts

Taming Spurious Correlations: New Technique Helps AI Avoid Classification Mistakes

When a neural network learns image labels, it may confuse a background item for the labeled object. New research avoids such mistakes.

Choose the Right Annotators

A new machine learning method attempts to account for biases that may be held by certain subsets of labelers.

Cutting the Carbon Cost of Training: A New Tool Helps AI Developers Lower Their Greenhouse Gas Emissions

You can reduce your model’s carbon emissions by being choosy about when and where you train it.
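The idea of being choosy about when and where to train can be sketched as a simple scheduling check. This is a minimal illustration, not the tool's actual API: the grid-intensity figures and region names below are made up, and real numbers would come from a grid-carbon data provider.

```python
# Hedged sketch: pick the training window with the lowest average
# grid carbon intensity. All intensity values (gCO2/kWh) are invented
# for illustration; a real tool would query live grid data.

def pick_training_slot(intensity_by_region, hours_needed):
    """Return (region, start_hour, avg_intensity) for the cheapest window."""
    best = None
    for region, hourly in intensity_by_region.items():
        for start in range(len(hourly) - hours_needed + 1):
            window = hourly[start:start + hours_needed]
            avg = sum(window) / hours_needed
            if best is None or avg < best[2]:
                best = (region, start, avg)
    return best

# Hypothetical 6-hour forecasts for two hypothetical regions.
forecast = {
    "us-east": [450, 430, 410, 400, 420, 440],
    "eu-north": [60, 55, 50, 52, 58, 65],  # e.g., a hydro-heavy grid
}

region, start, avg = pick_training_slot(forecast, hours_needed=3)
print(region, start)  # the low-carbon region and starting hour win
```

The same comparison applies across time of day as well as geography: a fossil-heavy grid at peak demand can emit several times more per kilowatt-hour than a renewables-heavy grid off-peak.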

Trillions of Parameters: Are AI Models With Trillions of Parameters the New Normal?

The trend toward ever-larger models crossed the threshold from immense to ginormous. Google kicked off 2021 with Switch Transformer, the first published work to exceed a trillion parameters, weighing in at 1.6 trillion.

Search Goes Multimodal: Google Upgrades its Search Algorithm with Multimodal AI

Google will upgrade its search engine with a new model that tracks the relationships between words, images, and, in time, videos — the first fruit of its latest research into multimodal machine learning and multilingual language modeling.

Perceptrons Are All You Need: Google Brain's Multi-Layer Perceptron Rivals Transformers

The paper that introduced the transformer famously declared, “Attention is all you need.” To the contrary, new work shows you may not need transformer-style attention at all. What’s new: Hanxiao Liu and colleagues at Google Brain proposed gMLP, a simple architecture that performed some language and vision tasks as well as transformers.
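The attention-free approach can be sketched as a block that mixes information across tokens with a plain learned projection rather than attention. This is a rough numpy illustration of a gMLP-style block with spatial gating, under assumed shapes and initializations; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # Tanh approximation of the GeLU activation.
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def gmlp_block(x, W1, W2, W_spatial, b_spatial):
    """One attention-free block: channel MLP plus a spatial gating unit.
    x has shape (n_tokens, d_model); all shapes here are illustrative."""
    z = gelu(x @ W1)                  # channel projection -> (n, d_ff)
    z1, z2 = np.split(z, 2, axis=-1)  # split channels for gating
    # Spatial gating: mix z2 across the *token* dimension with a learned
    # linear map (this replaces attention), then gate z1 elementwise.
    gate = W_spatial @ z2 + b_spatial
    out = (z1 * gate) @ W2            # project back to d_model
    return x + out                    # residual connection

n, d, d_ff = 4, 8, 16
x = rng.normal(size=(n, d))
y = gmlp_block(
    x,
    W1=rng.normal(size=(d, d_ff)) * 0.1,
    W2=rng.normal(size=(d_ff // 2, d)) * 0.1,
    W_spatial=np.eye(n),              # near-identity token mixing at init
    b_spatial=np.ones((n, d_ff // 2)),
)
print(y.shape)  # (4, 8): output keeps the input shape, like a transformer layer
```

The key design point is that token mixing is a fixed-size learned matrix over positions instead of an input-dependent attention map.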

Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.

One Model for Vision-Language

Researchers have proposed task-agnostic architectures for image classification and for language tasks. New work proposes a single architecture for vision-language tasks.

Double Check for Defamation

A libel-detection system could help news outlets and social media companies stay out of legal hot water. CaliberAI, an Irish startup, scans text for statements that could be considered defamatory, Wired reported.

Large Language Models for Chinese

Researchers unveiled a competitor to the reigning large language model GPT-3. Beijing Academy of Artificial Intelligence, a research collective funded by the Chinese government, described four models collectively called Wu Dao, according to Synced Review.

Pictures From Words and Gestures

A new system combines verbal descriptions and crude mouse strokes to visualize complex scenes. Google researchers led by Jing Yu Koh proposed Tag-Retrieve-Compose-Synthesize (TReCS), which generates photorealistic images from a user’s description of what they want to see, entered while mousing around on a blank screen.

Google Overhauls Ethical AI Team

Having dismissed two key researchers, Google restructured its efforts in AI ethics. Marian Croak, an accomplished software engineer and vice president of engineering at Google, will lead a new center of expertise in responsible AI, the company announced.

Sharper Eyes For Vision+Language

Models that interpret the interplay of words and images tend to be trained on richer bodies of text than images. Recent research worked toward giving such models a more balanced knowledge of the two domains.

Better Language Through Vision

For children, associating a word with a picture that illustrates it helps them learn the word’s meaning. Research aims to do something similar for machine learning models. Researchers improved a BERT model’s performance on some language tasks by training it on a large dataset of image-word pairs.

Adversarial Helper

Models that learn relationships between images and words are gaining a higher profile. New research shows that adversarial learning, usually a way to make models robust to deliberately misleading inputs, can boost vision-and-language performance.
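Adversarial learning in its basic form means training on inputs perturbed in the direction that most increases the loss. Below is a toy FGSM-style sketch on a linear classifier — a generic illustration of the idea, not the vision-and-language method the research describes.

```python
import numpy as np

# Hedged sketch: FGSM-style adversarial training on a toy 2D classifier.
# At each step, each input is nudged in the sign of the loss gradient
# (a "deliberately misleading" version of itself), and the model trains
# on those perturbed inputs instead of the clean ones.

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    # Gradient of logistic loss w.r.t. the input is (p - y) * w.
    p = sigmoid(x @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200):
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        x_adv = fgsm_perturb(x, y, w, eps)        # worst-case inputs
        p = sigmoid(x_adv @ w)
        w -= lr * x_adv.T @ (p - y) / len(y)      # gradient step on them
    return w

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w = adversarial_train(x, y)
accuracy = ((sigmoid(x @ w) > 0.5) == y).mean()
print(accuracy)
```

The surprise in the new work is that this kind of training, long used for robustness, also improved accuracy on vision-and-language benchmarks.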