A flowchart shows how a jury learning method reduces annotator bias in machine learning models.
Transformer

Choose the Right Annotators: Jury Learning Helps Remove Bias from NLP Models

A new machine learning method attempts to account for biases that may be held by certain subsets of labelers.
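The core idea, roughly as the jury-learning work describes it, is to predict each individual annotator's label rather than a single majority label, then aggregate predictions over a "jury" of annotators sampled to reflect the groups you care about. Below is a minimal PyTorch sketch of that idea; the embedding sizes, the `annotator_ids` input, and aggregation by simple averaging are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class JuryLearningModel(nn.Module):
    """Sketch: predict each annotator's label, then aggregate over a jury."""
    def __init__(self, num_annotators, text_dim=768, annot_dim=32, num_classes=2):
        super().__init__()
        self.annotator_emb = nn.Embedding(num_annotators, annot_dim)
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + annot_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, text_features, annotator_ids):
        # text_features: (batch, text_dim) from any sentence encoder
        # annotator_ids: (batch,) id of the annotator whose label we predict
        a = self.annotator_emb(annotator_ids)
        return self.classifier(torch.cat([text_features, a], dim=-1))

    def jury_verdict(self, text_features, jury_ids):
        # Average predicted label distributions over a sampled jury of annotators.
        probs = []
        for aid in jury_ids:
            ids = torch.full((text_features.size(0),), aid, dtype=torch.long)
            probs.append(self.forward(text_features, ids).softmax(dim=-1))
        return torch.stack(probs).mean(dim=0)
```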
Humanized Training for Robot Arms
Transformer

Humanized Training for Robot Arms: New Research Improves Robot Performance and Adaptability

Robots trained via reinforcement learning usually study videos of robots performing the task at hand. A new approach used videos of humans to pre-train robotic arms.
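The broad recipe the teaser describes is to pre-train a visual encoder on human videos with a self-supervised objective, then reuse that encoder as the perception stack for a reinforcement-learning policy. The sketch below illustrates the pattern with a simple time-contrastive loss; the encoder, the loss, and the frozen-backbone policy are illustrative assumptions rather than the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Small CNN that maps video frames to normalized embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def time_contrastive_loss(encoder, frames_t, frames_t1, temperature=0.1):
    """Pull temporally adjacent human-video frames together, push others apart."""
    z1, z2 = encoder(frames_t), encoder(frames_t1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

class Policy(nn.Module):
    """RL policy that reuses the frozen, human-video-pretrained encoder."""
    def __init__(self, encoder, action_dim):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False             # keep pretrained features fixed
        self.head = nn.Linear(128, action_dim)

    def forward(self, obs):
        return self.head(self.encoder(obs))
```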
A series of graphs shows the carbon emissions associated with training AI models.
Transformer

Cutting the Carbon Cost of Training: A New Tool Helps NLP Models Lower Their Greenhouse Gas Emissions

You can reduce your model’s carbon emissions by being choosy about when and where you train it.
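The arithmetic behind "when and where" is simple: estimated emissions are roughly GPU power draw times training hours times datacenter overhead (PUE) times the grid's carbon intensity at that time and place. The numbers below are placeholder values for illustration, not measurements.

```python
# Back-of-the-envelope training-emissions estimate.
# All numbers below are illustrative placeholders, not measurements.

GPU_POWER_KW = 0.3          # average draw per GPU, in kilowatts
NUM_GPUS = 8
TRAIN_HOURS = 72
PUE = 1.2                   # datacenter overhead (power usage effectiveness)

# Hypothetical grid carbon intensity (kg CO2e per kWh) by region and time of day.
carbon_intensity = {
    ("region_a", "midday"): 0.12,   # e.g. lots of solar on the grid
    ("region_a", "night"): 0.35,
    ("region_b", "midday"): 0.50,
    ("region_b", "night"): 0.55,
}

def estimate_emissions_kg(region: str, time_of_day: str) -> float:
    energy_kwh = GPU_POWER_KW * NUM_GPUS * TRAIN_HOURS * PUE
    return energy_kwh * carbon_intensity[(region, time_of_day)]

for key in carbon_intensity:
    print(key, round(estimate_emissions_kg(*key), 1), "kg CO2e")
```

With the same job, the cleanest region and hour in this toy table emits a few times less than the dirtiest, which is the whole argument for scheduling training thoughtfully.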
Different images generated by DALL·E
Transformer

Text-to-Image Goes Viral: Inside Craiyon, Formerly Known as DALL-E Mini

A homebrew re-creation of OpenAI’s DALL·E model is the latest internet sensation. Craiyon has been generating around 50,000 user-prompted images daily, thanks to its ability to produce visual mashups like Darth Vader ice fishing and photorealistic Pokemon characters.
Graph Transformer with positional encoding
Transformer

A Transformer for Graphs: New Method for Processing Graph Data with Transformers

Transformers can learn a lot from sequential data like words in a book, but they’ve shown limited ability to learn from data in the form of a graph. A new transformer variant gives graphs due attention.
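One common way to give a transformer a sense of graph structure, and the flavor of positional encoding shown in the figure, is to use eigenvectors of the graph Laplacian as per-node positional encodings added to node features before standard self-attention. The sketch below shows that step with NumPy; the toy graph, feature dimensions, and the plain transformer encoder that would consume the result are assumptions for illustration.

```python
import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k non-trivial Laplacian eigenvectors as node positional encodings."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)                      # ascending eigenvalues
    return eigvecs[:, 1:k + 1]                                  # skip the trivial constant eigenvector

# Toy 4-node cycle graph: each node gets a k-dimensional "position" reflecting structure.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pos_enc = laplacian_positional_encoding(adj, k=2)
node_features = np.random.randn(4, 2)
transformer_input = node_features + pos_enc   # nodes can then attend to one another
```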
Graph from DeepNet showing number of transformer layers versus year
Transformer

Pile On the Layers! DeepNorm Allows Transformers to Accommodate More Layers

Adding layers to a neural network puts the “deep” in deep learning, but it also increases the chance that the network will get stuck during training. A new approach effectively trains transformers with an order of magnitude more layers than previous methods.
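DeepNorm's trick is small: scale the residual branch before layer normalization, x_{l+1} = LayerNorm(alpha * x_l + Sublayer(x_l)), with alpha growing with depth (plus a matching down-scaling applied to certain weight initializations, not shown here). The sketch below uses the alpha = (2N)^(1/4) value I recall for encoder-only models; treat the exact constants, and the minimal sublayer, as assumptions to check against the paper.

```python
import torch
import torch.nn as nn

class DeepNormBlock(nn.Module):
    """Post-LN residual block with DeepNorm-style scaling of the residual branch."""
    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        # Encoder-only scaling as I recall it from the DeepNet paper: alpha = (2N)^0.25.
        self.alpha = (2 * num_layers) ** 0.25
        self.sublayer = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # x_{l+1} = LayerNorm(alpha * x_l + Sublayer(x_l))
        return self.norm(self.alpha * x + self.sublayer(x))

# Stacking many such blocks is the point: deeper stacks stay trainable.
blocks = nn.Sequential(*[DeepNormBlock(dim=256, num_layers=200) for _ in range(200)])
out = blocks(torch.randn(8, 16, 256))
```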
Word cloud, chess positions given to the model as text, and chart of the percentage of suggested chess moves
Transformer

Toward Next-Gen Language Models: New Benchmarks Test the Limits of Large Language Models

A new benchmark aims to raise the bar for large language models. Researchers at 132 institutions worldwide introduced the Beyond the Imitation Game benchmark (BIG-bench), which includes tasks that humans perform well but current state-of-the-art models don’t.
Example of text generated by LaMDA
Transformer

LaMDA Comes Alive? Google Engineer Says LaMDA AI Is Sentient

A chatbot persuaded at least one person that it has feelings. A senior engineer at Google announced his belief that the company’s latest conversational language model is sentient.
Examples of DALL·E searches
Transformer

DALL·E 2’s Emergent Vocabulary: The Text-to-Image Generator DALL·E 2 Invents Its Own Words and Concepts

OpenAI’s text-to-image generator DALL·E 2 produces pictures with uncanny creativity on demand. Has it invented its own language as well? Ask DALL·E 2 to generate an image that includes text, and often its output will include seemingly random characters.
Contentedge screen video capture
Transformer

Winning the Google Game: 14 Companies Using GPT-3 to Top SEO

AI startups are helping writers tailor articles that appear near the top of Google’s search results. At least 14 companies sell access to software that uses GPT-3, the language model from OpenAI, to generate headlines, product descriptions, blog posts, and video scripts.
Illustration of a robot wearing a captain costume
Transformer

Neural Networks: Find the Function — A Basic Introduction to Neural Networks

Let’s get this out of the way: A brain is not a cluster of graphics processing units, and if it were, it would run software far more complex than the typical artificial neural network. Yet neural networks were inspired by the brain’s architecture.
Gato’s performance on simulated control tasks | Image captions generated by Gato
Transformer

One Model, Hundreds of Tasks: Multimodal Transformer Performs Over 600 Different Tasks

Researchers took a step toward achieving a longstanding goal: one model that performs a whole lot of very different tasks. Scott Reed, Konrad Żołna, Emilio Parisotto, and a team at DeepMind announced Gato, a single transformer that handles more than 600 tasks, including image captioning and simulated control.
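The unifying move in this line of work is to flatten every modality into one token sequence for a single decoder-only transformer: text becomes subword tokens, images become patch representations, and continuous values such as joint angles are discretized into a fixed number of bins. The sketch below illustrates that serialization step only; the vocabulary size, bin count, and offsets are illustrative assumptions, not Gato's exact scheme.

```python
import numpy as np

TEXT_VOCAB = 32_000          # assumed subword vocabulary size
NUM_BINS = 1024              # assumed number of bins for continuous values

def tokenize_continuous(values: np.ndarray) -> np.ndarray:
    """Map continuous values in [-1, 1] (e.g. joint angles) to discrete token ids."""
    clipped = np.clip(values, -1.0, 1.0)
    bins = ((clipped + 1.0) / 2.0 * (NUM_BINS - 1)).astype(int)
    return bins + TEXT_VOCAB                     # offset past the text vocabulary

def build_sequence(text_tokens, image_patch_tokens, action_values):
    """Concatenate modalities into one sequence for a single decoder-only transformer."""
    return np.concatenate([
        np.asarray(text_tokens),                 # instruction or caption tokens
        np.asarray(image_patch_tokens),          # e.g. indices of learned patch codes
        tokenize_continuous(np.asarray(action_values)),
    ])

seq = build_sequence([17, 942, 5], [40001, 40002], [0.25, -0.75])
```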
Architecture of CXV
Transformer

Upgrade for Vision Transformers: Improved Efficiency for Vision Transformers

Vision Transformer and models like it use a lot of computation and memory when processing images. New work modifies these architectures to run more efficiently while adopting helpful properties from convolutions.
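A typical way to blend convolutional properties into a vision transformer, and the general direction this kind of work takes, is to replace hard patch-flattening and plain linear projections with strided and depthwise convolutions so tokens carry local structure cheaply. The block below is a generic illustration of that idea, not the specific architecture in the figure; the kernel sizes, dimensions, and grid shape are assumptions.

```python
import torch
import torch.nn as nn

class ConvTokenEmbedding(nn.Module):
    """Overlapping convolutional patch embedding instead of hard, non-overlapping patches."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=7, stride=4, padding=3)

    def forward(self, x):
        x = self.proj(x)                               # (B, dim, H/4, W/4)
        return x.flatten(2).transpose(1, 2)            # (B, N, dim) token sequence

class ConvAttentionBlock(nn.Module):
    """Attention whose input is mixed by a depthwise conv, adding locality cheaply."""
    def __init__(self, dim=64, heads=4, grid=28):
        super().__init__()
        self.grid = grid
        self.dw = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        B, N, C = tokens.shape
        x = tokens.transpose(1, 2).reshape(B, C, self.grid, self.grid)
        x = self.dw(x).flatten(2).transpose(1, 2)      # locally mixed attention input
        out, _ = self.attn(x, x, x)
        return self.norm(tokens + out)

tokens = ConvTokenEmbedding()(torch.randn(2, 3, 112, 112))   # 112/4 -> a 28x28 token grid
out = ConvAttentionBlock()(tokens)
```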
Graph of average accuracy versus number of parameters, averaged across 14 NLP tasks
Transformer

GPT-Free: Meta Releases OPT, a Family of Open Source Large Language Models

Itching to get your hands on a fully trained large language model? The wait is over. Meta introduced the OPT family of transformer-based language models with nearly unfettered access to source code and trained weights.
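If you want to try one of the smaller OPT checkpoints, one convenient route is the Hugging Face `transformers` library, where the weights are mirrored under names like `facebook/opt-125m`; the snippet below is a minimal sampling sketch under that assumption, and the larger checkpoints follow the same pattern but need far more memory.

```python
# Minimal sketch: sample text from a small OPT checkpoint.
# Assumes the Hugging Face `transformers` library and the `facebook/opt-125m` weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```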
Shifted Patch Tokenization (SPT) | Locality Self-Attention (LSA)
Transformer

Less Data for Vision Transformers: Boosting Vision Transformer Performance with Less Data

Vision Transformer (ViT) outperformed convolutional neural networks in image classification, but it required more training data. New work enabled ViT and its variants to outperform other architectures with less training data.
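Of the two tweaks named in the figure, Locality Self-Attention is the easier one to show: sharpen the attention distribution with a learnable temperature and mask out each token's attention to itself so it must look at other tokens. The sketch below is a generic rendering of that idea as I understand it; the dimensions and defaults are illustrative, not the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalitySelfAttention(nn.Module):
    """Self-attention with a learnable temperature and diagonal (self-token) masking."""
    def __init__(self, dim=192, heads=3):
        super().__init__()
        self.heads, self.head_dim = heads, dim // heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.out = nn.Linear(dim, dim)
        # Learnable temperature replaces the fixed 1/sqrt(d) scaling.
        self.temperature = nn.Parameter(torch.tensor(self.head_dim ** 0.5))

    def forward(self, x):
        B, N, C = x.shape
        q, k, v = (self.qkv(x)
                   .reshape(B, N, 3, self.heads, self.head_dim)
                   .permute(2, 0, 3, 1, 4))               # each: (B, heads, N, head_dim)
        scores = (q @ k.transpose(-2, -1)) / self.temperature  # (B, heads, N, N)
        # Mask the diagonal so each token must attend to other tokens.
        eye = torch.eye(N, dtype=torch.bool, device=x.device)
        scores = scores.masked_fill(eye, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.out(out)

tokens = torch.randn(2, 65, 192)          # e.g. 64 patch tokens plus a class token
out = LocalitySelfAttention()(tokens)
```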
