Examples of DALL·E searches
Transformer

DALL·E 2’s Emergent Vocabulary: DALL·E 2 Invents Its Own Words and Concepts

OpenAI’s text-to-image generator DALL·E 2 produces pictures with uncanny creativity on demand. Has it invented its own language as well? Ask DALL·E 2 to generate an image that includes text, and often its output will include seemingly random characters.
Contentedge screen video capture
Transformer

Winning the Google Game: 14 Companies Using GPT-3 to Top SEO

AI startups are helping writers tailor articles that appear near the top of Google’s search results. At least 14 companies sell access to software that uses GPT-3, the language model from OpenAI, to generate headlines, product descriptions, blog posts, and video scripts.
Illustration of a robot with a captain costume
Transformer

Neural Networks: Find the Function — A Basic Introduction to Neural Networks

Let’s get this out of the way: A brain is not a cluster of graphics processing units, and if it were, it would run software far more complex than the typical artificial neural network. Yet neural networks were inspired by the brain’s architecture.
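To make the “find the function” idea concrete, here is a minimal, illustrative sketch in plain NumPy of a tiny network fitting the function behind noisy data; the architecture and hyperparameters are invented for this example and do not come from the article.

# Minimal illustration: a two-layer network "finds the function" behind noisy data.
# Sizes, learning rate, and step count are illustrative, not from the article.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(256, 1))             # inputs
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)    # noisy targets

# Parameters of a small multilayer perceptron: 1 -> 32 -> 1
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y                    # prediction error
    loss = np.mean(err ** 2)

    # Backpropagation: gradients of the mean squared error
    dpred = 2 * err / len(x)
    dW2 = h.T @ dpred;  db2 = dpred.sum(0)
    dh = dpred @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh;     db1 = dh.sum(0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training loss: {loss:.4f}")  # approaches the noise floor (about 0.01)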
Gato’s performance on simulated control tasks | Image captions generated by Gato
Transformer

One Model, Hundreds of Tasks: Multimodal Transformer Performs Over 600 Different Tasks

Researchers took a step toward a longstanding goal: one model that performs a wide variety of very different tasks. Scott Reed, Konrad Żołna, Emilio Parisotto, and a team at DeepMind announced Gato, a transformer trained to perform more than 600 distinct tasks.
Architecture of CXV
Transformer

Upgrade for Vision Transformers: Improved Efficiency for Vision Transformers

Vision Transformer and models like it use a lot of computation and memory when processing images. New work modifies these architectures to run more efficiently while adopting helpful properties from convolutions.
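As a rough illustration only (the specific architectures covered in the article may work differently), one common way to give a vision transformer convolution-like behavior is to replace its linear patch projection with a small strided-convolution stem, as in this PyTorch sketch:

# Generic sketch (PyTorch): a convolutional "stem" in place of ViT's linear patch
# projection. It illustrates borrowing locality from convolutions; it is not the
# specific architecture covered in the article.
import torch
import torch.nn as nn

class ConvStemEmbedding(nn.Module):
    def __init__(self, embed_dim=384):
        super().__init__()
        self.stem = nn.Sequential(                      # 224x224x3 -> 14x14xembed_dim
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(256, embed_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, images):                          # images: (B, 3, 224, 224)
        feats = self.stem(images)                       # (B, embed_dim, 14, 14)
        return feats.flatten(2).transpose(1, 2)         # (B, 196, embed_dim) tokens

tokens = ConvStemEmbedding()(torch.randn(2, 3, 224, 224))
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=384, nhead=6, batch_first=True), num_layers=4)
print(encoder(tokens).shape)  # torch.Size([2, 196, 384])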
Graph of parameters versus average accuracy across 14 NLP tasks
Transformer

GPT-Free: Meta Releases Open Source Large Language Models OPT

Itching to get your hands on a fully trained large language model? The wait is over. Meta introduced the OPT family of transformer-based language models with nearly unfettered access to source code and trained weights.
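For readers who want to experiment, one common route to the released weights is the Hugging Face transformers library. The snippet below is a sketch, not Meta’s official instructions; it assumes the library is installed and the facebook/opt-1.3b checkpoint is available.

# Sketch: generating text with a smaller OPT checkpoint via Hugging Face transformers.
# Assumes `pip install transformers torch` and that the facebook/opt-1.3b weights
# can be downloaded; larger OPT variants follow the same pattern but need more memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))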
Shifted Patch Tokenization (SPT) | Locality Self-Attention (LSA)
Transformer

Less Data for Vision Transformers: Boosting Vision Transformer Performance with Less Data

Vision Transformer (ViT) outperformed convolutional neural networks in image classification, but it required more training data. New work enabled ViT and its variants to outperform other architectures with less training data.
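The image caption above names the paper’s two techniques, Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA). The PyTorch sketch below is a loose reading of SPT only: concatenate the image with diagonally shifted copies before patchifying, so each token sees beyond its own patch. Details such as border handling are simplified and may differ from the authors’ code.

# Loose sketch (PyTorch) of Shifted Patch Tokenization (SPT). Shift amounts,
# normalization, and border handling are our reading, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftedPatchTokenization(nn.Module):
    def __init__(self, patch_size=16, embed_dim=384, channels=3):
        super().__init__()
        self.patch_size = patch_size
        in_dim = channels * 5 * patch_size * patch_size   # original + 4 shifted copies
        self.norm = nn.LayerNorm(in_dim)
        self.proj = nn.Linear(in_dim, embed_dim)

    def forward(self, x):                                  # x: (B, C, H, W)
        s = self.patch_size // 2
        diag = [(s, s), (s, -s), (-s, s), (-s, -s)]
        # torch.roll wraps pixels at the border; the paper zero-pads instead, but the
        # effect (tokens that see beyond their own patch) is the same in spirit.
        shifted = [torch.roll(x, shifts=(dy, dx), dims=(2, 3)) for dy, dx in diag]
        x = torch.cat([x] + shifted, dim=1)                # (B, 5C, H, W)
        patches = F.unfold(x, kernel_size=self.patch_size, stride=self.patch_size)
        patches = patches.transpose(1, 2)                  # (B, num_patches, 5C*P*P)
        return self.proj(self.norm(patches))               # (B, num_patches, embed_dim)

tokens = ShiftedPatchTokenization()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 196, 384])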
GLaM model architecture
Transformer

Efficiency Experts: Mixture of Experts Makes Language Models More Efficient

The emerging generation of trillion-parameter language models takes significant computation to train. Activating only a portion of the network at a time can cut the requirement dramatically and still achieve exceptional results.
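The core trick behind GLaM-style efficiency, sketched below in PyTorch with invented sizes: a learned router sends each token to only its top-scoring experts, so most parameters sit idle for any given token. This is a generic top-2 mixture-of-experts layer, not GLaM’s exact design.

# Generic top-2 mixture-of-experts layer (PyTorch). Sizes and routing details are
# illustrative; GLaM's actual design differs in specifics such as load balancing.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)      # scores each token per expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts))

    def forward(self, tokens):                              # tokens: (num_tokens, d_model)
        weights, chosen = self.router(tokens).softmax(-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):                      # only chosen experts run
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(tokens[mask])
        return out

x = torch.randn(10, 512)                                    # 10 tokens
print(MoELayer()(x).shape)                                  # torch.Size([10, 512])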
AI-generated images with different descriptions
Transformer

More Realistic Pictures From Text: How the GLIDE Diffusion Model Generates Images From Text

OpenAI’s DALL·E got an upgrade that takes in text descriptions and produces images in styles from hand-drawn to photorealistic. The new version is a rewrite from the ground up. It uses the earlier CLIP zero-shot image classifier to represent text descriptions.
Jurassic-X's software infrastructure
Transformer

Neural Nets + Rules = Truer Text: Jurassic-X NLP Can Solve Math, Check Facts, and More

A new approach aims to cure text generators of their tendency to produce nonsense. AI21 Labs launched Jurassic-X, a natural language processing system that combines neural networks and rule-based programs.
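Here is a toy illustration of the hybrid idea, not AI21’s implementation: a simple router hands arithmetic to an exact, rule-based calculator and everything else to a neural text generator (stubbed out below), so answers that rules can guarantee never depend on the network’s guesswork.

# Toy illustration of pairing a neural text generator with rule-based modules.
# This is a stand-in for the general idea, not Jurassic-X's actual architecture;
# `call_language_model` is a stub you would replace with a real model call.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression):
    """Rule-based module: evaluates arithmetic exactly instead of guessing."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

def call_language_model(prompt):
    """Stub for the neural component (e.g., a large language model API)."""
    return f"[model-generated answer to: {prompt}]"

def answer(prompt):
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)]+", prompt.strip()):
        return str(calculator(prompt))  # pure arithmetic goes to the rules
    return call_language_model(prompt)  # everything else goes to the network

print(answer("12 * (7 + 5)"))           # 144, computed exactly
print(answer("Who wrote Jurassic Park?"))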
Deep Symbolic Regression
Transformer

From Sequences to Symbols: Transformers Extend AI's Mathematical Capabilities

Given a sequence of numbers, neural networks have proven adept at discovering a mathematical expression that generates it. New work uses transformers to extend that success to a further class of expressions.
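The usual framing, sketched below with an invented vocabulary and encoding, is sequence-to-sequence: the source sequence encodes the numbers, and the target is the generating expression written as tokens (prefix notation is a common choice).

# Sketch of posing symbolic regression as a sequence-to-sequence task: the source
# is the numbers, the target is the expression as prefix-notation tokens.
# The vocabulary and numeric encoding are invented for illustration.

def expression_to_prefix_tokens():
    # Target for the sequence 3, 7, 13, 21, 31, ...: n^2 + n + 1
    return ["add", "add", "mul", "n", "n", "n", "1"]

def sequence_to_source_tokens(values):
    # One naive numeric encoding: sign and digits per term, separated by "|".
    tokens = []
    for v in values:
        tokens += ["+" if v >= 0 else "-"] + list(str(abs(v))) + ["|"]
    return tokens

values = [n * n + n + 1 for n in range(1, 6)]      # 3, 7, 13, 21, 31
print(sequence_to_source_tokens(values))
print(expression_to_prefix_tokens())
# A transformer trained on many (source, target) pairs learns to decode expression
# tokens from numeric tokens; a parser then rebuilds the formula.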
Grokking: A dramatic example of generalization far after overfitting on an algorithmic dataset
Transformer

Learning After Overfitting: Transformers Continue Learning After Overfitting Data

When a model trains too much, it can overfit, or memorize, the training data, which reduces its ability to analyze similar-but-different inputs. But what if training continues? New work found that overfitting isn’t the end of the line.
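A hedged sketch of a grokking-style experiment appears below: train a small network on a modular-arithmetic table (the original work used a small transformer), keep optimizing with weight decay long after training accuracy saturates, and track validation accuracy separately. All hyperparameters here are illustrative.

# Sketch of a grokking-style setup: memorization first, then (sometimes much later)
# generalization. Task: modular addition. Hyperparameters are illustrative and the
# model is a simple MLP rather than the paper's transformer.
import torch
import torch.nn as nn

P = 97                                             # work modulo a small prime
pairs = [(a, b) for a in range(P) for b in range(P)]
x = torch.tensor(pairs)
y = (x[:, 0] + x[:, 1]) % P
perm = torch.randperm(len(x))
split = len(x) // 2
train_idx, val_idx = perm[:split], perm[split:]

model = nn.Sequential(nn.Embedding(P, 64), nn.Flatten(),   # embed both operands
                      nn.Linear(128, 256), nn.ReLU(),
                      nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(1, 50001):                       # keep going long past overfitting
    opt.zero_grad()
    loss = loss_fn(model(x[train_idx]), y[train_idx])
    loss.backward()
    opt.step()
    if step % 5000 == 0:
        with torch.no_grad():
            train_acc = (model(x[train_idx]).argmax(1) == y[train_idx]).float().mean()
            val_acc = (model(x[val_idx]).argmax(1) == y[val_idx]).float().mean()
        print(f"step {step}: train acc {train_acc:.2f}, val acc {val_acc:.2f}")
        # In the grokking regime, training accuracy saturates early while
        # validation accuracy rises only much later.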
Nvidia Chip
Transformer

Transformer Accelerator: Nvidia’s H100 Is Designed to Train Transformers Faster

Is your colossal text generator bogged down in training? Nvidia announced a chip designed to accelerate the transformer architecture, the basis of large language models such as GPT-3.
Diagram with info about AlphaCode
Transformer

Competitive Coder: AI Code-Writing System Can Compete Alongside Humans

Programming is hard. Programming competitions are harder. Yet DeepMind’s AlphaCode, a transformer-based system, proved itself up to the task.
The performance of different downstream (DS) tasks
Transformer

The Limits of Pretraining: More Pretraining Doesn’t Guarantee a Better Fine-Tuned AI

The higher the accuracy of a pretrained model, the better its performance after fine-tuning, right? Not necessarily. Researchers conducted a meta-analysis of image-recognition experiments and performed some of their own.
