Graph showing Expire-Span, which enables attention to ignore tokens that aren’t useful to the task at hand
Transformer

Sharper Attention: An NLP transformer technique for more efficient token usage.

Self-attention enables transformer networks to track relationships between distant tokens — such as text characters — in long sequences, but the computational resources required grow quadratically with input size.
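As a rough illustration of that quadratic growth, here is a minimal NumPy sketch of scaled dot-product self-attention (the names are illustrative, not taken from any particular paper): the score matrix has one entry for every pair of tokens, so memory and compute scale with the square of the sequence length.

import numpy as np

def self_attention(x, w_q, w_k, w_v):
    # x holds n token embeddings of width d, shape (n, d)
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n): one score per pair of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v                               # every token mixes information from all others

n, d = 128, 64
x = np.random.randn(n, d)
w = np.random.randn(d, d) / np.sqrt(d)
out = self_attention(x, w, w, w)                     # doubling n quadruples the (n, n) score matrix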
Frozen Pretrained Transformer (FPT) explained
Transformer

Transformers Are Smarter Than You Think: Language transformers can do math, vision, and logic.

The transformer architecture has shown an uncanny ability to model not only language but also images and proteins. New research found that it can apply what it learns from the first domain to the others.
Image showing how object detectors work
Transformer

I Know It When I See It: Zero-shot detection for objects not in training data.

Object detectors typically detect only items that were labeled in their training data. A new method liberates them to locate and recognize a much wider variety of objects.
AI-generated videos and the VideoGPT training pipeline
Transformer

Synthetic Videos on the Double: VideoGPT is an efficient generative AI system for video.

Using a neural network to generate realistic videos takes a lot of computation. New work performs the task efficiently enough to run on a beefy personal computer.
Architecture for vision-language tasks
Transformer

One Model for Vision-Language: A general-purpose AI for vision and language tasks.

Researchers have proposed task-agnostic architectures for image classification tasks and language tasks. New work proposes a single architecture for vision-language tasks.
Protein structures
Transformer

What AI Knows About Proteins: NLP systems can be used to encode amino acids.

Transformer models trained on sequences of amino acids that form proteins have had success classifying and generating viable sequences. New research shows that they also capture information about protein structure.
Animation showing an AI's metaphorical transition to using green energy.
Transformer

Greener Machine Learning: Here's how AI models can shrink their carbon footprints.

A new study suggests tactics for machine learning engineers to cut their carbon emissions. Led by David Patterson, researchers at Google and UC Berkeley found that AI developers can shrink a model’s carbon footprint a thousand-fold by streamlining architecture...
A generative adversarial network (GAN)
Transformer

Image Generation Transformed: New research combines GANs with transformers.

A recent generative adversarial network (GAN) produced more coherent images using modified transformers that replaced fully connected layers with convolutional layers. A new GAN achieved a similar end using transformers in their original form.
The CogView homepage
Transformer

Large Language Models for Chinese: A brief overview of the Wu Dao NLP models.

Researchers unveiled a challenger to the reigning large language model GPT-3. According to Synced Review, the Beijing Academy of Artificial Intelligence, a research collective funded by the Chinese government, described four models collectively called Wu Dao.
Examples of image generators using GANsformer
Transformer

Attention for Image Generation: Combining GANs and transformers for more believable images.

Attention quantifies how each part of one input affects the various parts of another. Researchers added a step that reverses this comparison to produce more convincing images.
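A hedged sketch of the idea, not the paper's exact formulation: cross-attention is computed from a set of latent variables to image features, and a reversed pass then lets the latents update the features in turn. All names and shapes below are illustrative.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(queries, context):
    # Each query position is updated with a weighted mix of the context positions.
    scores = queries @ context.T / np.sqrt(queries.shape[-1])
    return softmax(scores) @ context

latents = np.random.randn(16, 32)    # small set of latent variables
features = np.random.randn(256, 32)  # image feature map flattened to 256 positions

latents = latents + cross_attention(latents, features)    # features inform the latents
features = features + cross_attention(features, latents)  # the reversed comparison: latents inform the features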
Commercial about The Trevor Lifeline
Transformer

Chatbots Against Depression: The Trevor Project used GPT-2 to train crisis counselors.

A language model is helping crisis-intervention volunteers practice their suicide-prevention skills. The Trevor Project, a nonprofit organization that operates a 24-hour hotline for LGBTQ youth, uses a “crisis contact simulator” to train its staff in how to talk with troubled teenagers.
Graph showing information about different transformer models
Transformer

Transformer Variants Head to Head: A benchmark for comparing different AI transformers.

The transformer architecture has inspired a plethora of variations. Yet researchers have used a patchwork of metrics to evaluate their performance, making them hard to compare. New work aims to level the playing field.
The Oscar+ system at work
Transformer

Sharper Eyes for Vision+Language: AI research shows improved image and text matching.

Models that interpret the interplay of words and images tend to be trained on richer bodies of text than images. Recent research worked toward giving such models a more balanced knowledge of the two domains.
Graphs showing Switch Transformer data
Transformer

Bigger, Faster Transformers: Increasing parameters without slowing down the network.

Performance in language tasks rises with the size of the model — yet, as a model’s parameter count rises, so does the time it takes to render output. New work pumps up the number of parameters without slowing down the network.
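The charts above show Switch Transformer data; a common way to grow parameters without growing per-token compute is sparse expert routing, sketched minimally below under that assumption (all names and sizes are made up). A learned router sends each token to a single expert feed-forward layer, so adding experts adds parameters while each token still passes through only one of them.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_tokens, d, n_experts = 8, 16, 4
tokens = np.random.randn(n_tokens, d)
router = np.random.randn(d, n_experts) / np.sqrt(d)                        # learned routing weights
experts = [np.random.randn(d, d) / np.sqrt(d) for _ in range(n_experts)]   # expert feed-forward layers

probs = softmax(tokens @ router)      # routing probabilities per token
choice = probs.argmax(axis=-1)        # top-1 routing: one expert per token

out = np.zeros_like(tokens)
for i in range(n_tokens):
    e = choice[i]
    out[i] = probs[i, e] * (tokens[i] @ experts[e])   # only the chosen expert runs for this token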
Series of images showing improvements in a multilingual language translator
Transformer

Better Zero-Shot Translations: A method for improving transformer-based machine translation.

Train a multilingual language translator to translate between Spanish and English and between English and German, and it may be able to translate directly between Spanish and German as well. New work proposes a simple path to better machine translation between languages.
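The zero-shot effect typically comes from the standard multilingual setup, in which every training example carries a tag naming the desired target language, so one model serves all directions. The toy data below illustrates that setup; it is not the new work's specific method, and the tags and sentences are made up.

# Training pairs cover Spanish->English and English->German only.
train_pairs = [
    ("<2en> el gato duerme", "the cat sleeps"),
    ("<2de> the cat sleeps", "die Katze schläft"),
]

# At test time, prepending the German tag to a Spanish sentence asks the same
# model for a direction it never saw during training: Spanish->German, zero-shot.
zero_shot_input = "<2de> el gato duerme"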
