Illustration showing a witch cooking a copy of the Mona Lisa wearing a witch hat

Artistry Is Obsolete: Is AI Making Human Artists Obsolete?

Is human creativity being replaced by the synthetic equivalent? The fear: AI is cranking out increasingly sophisticated visual, musical, and literary works. AI-generated media will flood the market, squeezing out human artists and depriving the world of their creativity.
Halloween family portrait showing the inheritance of some spooky characteristics

New Models Inherit Old Flaws: AI Models May Inherit Flaws From Previous Systems

Is AI becoming inbred? The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web.
Animated video showing an image generator imitating brush strokes

Different Strokes for Robot Folks: Transformer-Based Image Generator Imitates Painters

A neural network can make a photo resemble a painting via neural style transfer, but it can also learn to reproduce an image by applying brush strokes. A new method taught a system this painterly skill without any training data.
Animated image showing how the transformer architecture processes an image

Transformer Speed-Up Sped Up: How to Speed Up Image Transformers

The transformer architecture is notoriously inefficient when processing long sequences — a problem in processing images, which are essentially long sequences of pixels. One way around this is to break up input images and process the pieces...
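
The excerpt stops short of the details, but the patch idea common to vision transformers is easy to sketch. The patch size and image shape below are illustrative assumptions, not specifics from the article:

    import numpy as np

    def split_into_patches(image, patch_size=16):
        """Split an (H, W, C) image into non-overlapping square patches.

        Each patch is flattened into a vector so a transformer can treat
        the image as a short sequence of patch tokens instead of a very
        long sequence of individual pixels.
        """
        h, w, c = image.shape
        assert h % patch_size == 0 and w % patch_size == 0, "dims must divide evenly"
        patches = (
            image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, patch_size * patch_size * c)
        )
        return patches  # shape: (num_patches, patch_size * patch_size * c)

    # A 224x224 RGB image becomes a sequence of 196 tokens instead of 50,176 pixels.
    tokens = split_into_patches(np.zeros((224, 224, 3)), patch_size=16)
    print(tokens.shape)  # (196, 768)
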
Animation showing how the Google search algorithm works with multimodal AI

Search Goes Multimodal: Google Upgrades its Search Algorithm with Multimodal AI

Google will upgrade its search engine with a new model that tracks the relationships between words, images, and, in time, videos — the first fruit of its latest research into multimodal machine learning and multilingual language modeling.
Animation showing gMLP, a simple architecture that performed some language and vision tasks as well as transformers

Perceptrons Are All You Need: Google Brain's Multi-Layer Perceptron Rivals Transformers

The paper that introduced the transformer famously declared, “Attention is all you need.” To the contrary, new work shows you may not need transformer-style attention at all. What’s new: Hanxiao Liu and colleagues at Google...
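
For context, gMLP replaces attention with a spatial gating unit that mixes tokens using a learned linear map over positions. A stripped-down sketch of that gating step, omitting normalization and the surrounding projections (shapes and initialization here are illustrative, not the paper's exact settings):

    import numpy as np

    def spatial_gating_unit(x, seq_weight, seq_bias):
        # x: (seq_len, d_model) token activations.
        # seq_weight: (seq_len, seq_len) learned matrix that mixes positions.
        # seq_bias: (seq_len,) bias, typically initialized near 1.
        u, v = np.split(x, 2, axis=-1)           # split channels into two halves
        v = seq_weight @ v + seq_bias[:, None]   # mix information across positions
        return u * v                             # gate one half with the other

    # Toy shapes: 8 tokens, 64 channels -> two 32-channel halves.
    out = spatial_gating_unit(np.random.randn(8, 64),
                              np.random.randn(8, 8) * 0.01,
                              np.ones(8))
    print(out.shape)  # (8, 32)
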
Animation showing example questions and answers obtained by a pretrained language model

Ask Me in a Different Way: Prompt Engineering Improves Few-Shot Learning Results

Pretrained language models like GPT-3 have shown notable proficiency in few-shot learning. Given a prompt that includes a few example questions and answers (the shots) plus an unanswered question (the task), such models can generate an accurate answer.
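
A minimal sketch of how such a prompt might be assembled; the example questions are made up for illustration:

    def build_few_shot_prompt(examples, question):
        """Format (question, answer) example pairs plus a new question
        into a single prompt for a pretrained language model."""
        lines = []
        for q, a in examples:
            lines.append(f"Q: {q}\nA: {a}")
        lines.append(f"Q: {question}\nA:")   # the model completes the final answer
        return "\n\n".join(lines)

    shots = [
        ("What is the capital of France?", "Paris"),
        ("What is the capital of Japan?", "Tokyo"),
    ]
    prompt = build_few_shot_prompt(shots, "What is the capital of Italy?")
    print(prompt)
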
Series of images showing some of the findings of the new study by researchers at Stanford’s Human-Centered AI Institute

Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.
Graph showing Expire-Span, which enables attention to ignore tokens that aren’t useful to the task at hand

Sharper Attention: NLP Transformer Technique for More Efficient Token Usage

Self-attention enables transformer networks to track relationships between distant tokens — such as text characters — in long sequences, but the computational resources required grow quadratically with input size.
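
A bare-bones sketch shows where the quadratic cost comes from: the score matrix holds one entry per pair of tokens. This is generic scaled dot-product attention, not the Expire-Span method the article covers:

    import numpy as np

    def self_attention(q, k, v):
        """Plain single-head scaled dot-product self-attention.

        q, k, v: (seq_len, d) arrays. The score matrix is (seq_len, seq_len),
        so memory and compute grow quadratically with sequence length.
        """
        scores = q @ k.T / np.sqrt(q.shape[-1])          # (seq_len, seq_len)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        return weights @ v

    q = k = v = np.random.randn(8, 16)
    print(self_attention(q, k, v).shape)  # (8, 16)

    for n in (1_000, 10_000):
        print(f"{n} tokens -> {n * n:,} pairwise scores")  # 1,000,000 vs 100,000,000
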
Frozen Pretrained Transformer (FPT) explained

Transformers: Smarter Than You Think

The transformer architecture has shown an uncanny ability to model not only language but also images and proteins. New research found that it can apply what it learns from the first domain to the others.
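
The excerpt doesn't spell out the recipe, but the Frozen Pretrained Transformer work is generally reported as freezing the pretrained attention and feed-forward weights and fine-tuning only small input, normalization, and output layers on the new domain. A rough PyTorch-flavored sketch of that freezing step (the name matching is a heuristic assumption; real implementations vary):

    import torch.nn as nn

    def freeze_for_transfer(model: nn.Module) -> nn.Module:
        """Freeze pretrained weights, leaving only embedding, layer-norm,
        and output/head parameters trainable."""
        for name, param in model.named_parameters():
            param.requires_grad = any(key in name for key in ("embed", "norm", "ln", "head"))
        return model

    # Tiny demo on a single standard encoder layer: only the layer norms stay trainable.
    layer = freeze_for_transfer(nn.TransformerEncoderLayer(d_model=64, nhead=4))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"{trainable} of {total} parameters remain trainable")
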
Image showing how object detectors work

I Know It When I See It

Object detectors typically detect only items that were labeled in their training data. A new method liberates them to locate and recognize a much wider variety of objects.
AI-generated videos and the VideoGPT training pipeline

Synthetic Videos on the Double

Using a neural network to generate realistic videos takes a lot of computation. New work performs the task efficiently enough to run on a beefy personal computer.
Architecture for vision-language tasks

One Model for Vision-Language

Researchers have proposed task-agnostic architectures for image classification tasks and language tasks. New work proposes a single architecture for vision-language tasks.
Protein structures

What AI Knows About Proteins

Transformer models trained on sequences of amino acids that form proteins have had success classifying and generating viable sequences. New research shows that they also capture information about protein structure.
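
As an illustration of what "capturing structure" can mean in practice, one simple probe (a generic sketch, not necessarily the paper's method) checks whether the residue pairs that attend most strongly to each other line up with true structural contacts:

    import numpy as np

    def contact_precision_from_attention(attn, contacts, top_k=20):
        """Score how well the strongest attention weights between residue
        pairs match true contacts (precision at top_k)."""
        sym = (attn + attn.T) / 2                      # symmetrize the attention map
        np.fill_diagonal(sym, -np.inf)                 # ignore self-attention
        flat_order = np.argsort(sym, axis=None)[::-1]  # strongest pairs first
        rows, cols = np.unravel_index(flat_order[:top_k], sym.shape)
        return contacts[rows, cols].mean()

    # Toy data just to show the shapes: 50 residues, random attention and contacts.
    n = 50
    attn = np.random.rand(n, n)
    contacts = np.random.rand(n, n) < 0.05
    print(contact_precision_from_attention(attn, contacts))
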
Animation showing AI’s metaphorical transition to green energy

Greener Machine Learning: Techniques for Reducing the Carbon Footprint of NLP Models

A new study suggests tactics for machine learning engineers to cut their carbon emissions. Led by David Patterson, researchers at Google and UC Berkeley found that AI developers can shrink a model’s carbon footprint a thousand-fold by streamlining architecture...
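
The underlying accounting is simple: emissions scale with the energy a training run draws, the datacenter's overhead, and the carbon intensity of the local grid. A back-of-the-envelope calculator with made-up numbers (not figures from the study):

    def training_emissions_kg(gpu_hours, watts_per_gpu, pue, kg_co2e_per_kwh):
        """Rough CO2e estimate for a training run: accelerator energy,
        scaled by datacenter overhead (PUE) and grid carbon intensity."""
        kwh = gpu_hours * watts_per_gpu / 1000.0 * pue
        return kwh * kg_co2e_per_kwh

    # Same workload on a carbon-heavy grid vs. a cleaner one.
    print(training_emissions_kg(10_000, 300, 1.1, 0.5))   # ~1,650 kg CO2e
    print(training_emissions_kg(10_000, 300, 1.1, 0.05))  # ~165 kg CO2e
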
