Transformer

127 Posts

Dataset FOLIO example based on the Wild Turkey Wikipedia page

Language Models Defy Logic: Large NLP models struggle with logical reasoning.

Who would disagree that, if all people are mortal and Socrates is a person, Socrates must be mortal? GPT-3, for one. Recent work shows that bigger language models are not necessarily better when it comes to logical reasoning.
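
If you want to poke at this yourself, here is a minimal sketch of probing a model with a FOLIO-style entailment item. The `query_model` function is a hypothetical placeholder for whatever LM API you use; the item format (premises plus a conclusion labeled True/False/Unknown) follows the benchmark's setup.

```python
# Minimal sketch of probing a language model with a FOLIO-style item.
# `query_model` is a hypothetical stand-in for your LM API of choice.

def build_prompt(premises: list[str], conclusion: str) -> str:
    lines = ["Premises:"]
    lines += [f"- {p}" for p in premises]
    lines += ["Conclusion:", conclusion,
              "Is the conclusion True, False, or Unknown? Answer with one word."]
    return "\n".join(lines)

def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LM API here")  # hypothetical

item = {
    "premises": ["All people are mortal.", "Socrates is a person."],
    "conclusion": "Socrates is mortal.",
    "label": "True",
}

prompt = build_prompt(item["premises"], item["conclusion"])
print(prompt)
# answer = query_model(prompt)
# print("correct" if answer.strip() == item["label"] else "wrong")
```
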
Screen captures of the Sparrow Chatbot

Google’s Rule-Respecting Chatbot: Research helps AI chatbots be more truthful and less hateful.

Amid speculation about the threat posed by OpenAI’s ChatGPT chatbot to Google’s search business, a paper shows how the search giant might address the tendency of such models to produce offensive, incoherent, or untruthful dialog.
High-level overview of the STEGO architecture at train and prediction steps

Segmented Images, No Labeled Data: Improved unsupervised learning for semantic segmentation

Training a model to separate the objects in a picture typically requires labeled images for best results. Recent work upped the ante for training without labels.
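
The work in question, STEGO, distills features from a self-supervised backbone. As a rough illustration of the label-free recipe such methods build on, you can cluster per-patch features into pseudo-segments; in this toy sketch, random features stand in for real backbone outputs.

```python
# Toy sketch of the label-free segmentation baseline such methods build on:
# cluster per-patch features into pseudo-segments, no labels involved.
# Random features stand in for a real self-supervised backbone here.

import numpy as np
from sklearn.cluster import KMeans

H, W, D = 28, 28, 384                 # patch grid and feature dimension
rng = np.random.default_rng(0)
feats = rng.normal(size=(H * W, D))   # in practice: per-patch ViT features

k = 5                                 # number of pseudo-classes
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
segmentation = labels.reshape(H, W)   # a coarse, unsupervised segment map
print(segmentation.shape, np.unique(segmentation))
```
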
Outline of the text-embedding and inversion process.

Precision-Guided Image Generation: Better text-to-image results with latent diffusion

Typical text-to-image generators can generate pictures of a cat, but not your cat. That’s because it’s hard to describe precisely, in a text prompt, everything that distinguishes your pet from other members of its species.
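
One approach, textual inversion, freezes the generator and optimizes only a new token's embedding so the frozen model reproduces your subject. A schematic toy version, with a quadratic loss standing in for the real denoising objective:

```python
# Schematic sketch of textual inversion: add one new token and optimize
# only its embedding while the text-to-image model stays frozen. The
# loss below is a toy stand-in for the real diffusion denoising loss.

import torch

torch.manual_seed(0)
emb_dim = 768
new_token_emb = torch.randn(emb_dim, requires_grad=True)  # only trainable weights

frozen_target = torch.randn(emb_dim)  # toy proxy for "features of your cat"

def diffusion_loss(token_emb: torch.Tensor) -> torch.Tensor:
    # Real version: noise a photo of the subject, condition the frozen
    # denoiser on a prompt containing the new token, and score the
    # predicted noise. Here: a toy quadratic with the same role.
    return ((token_emb - frozen_target) ** 2).mean()

opt = torch.optim.Adam([new_token_emb], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = diffusion_loss(new_token_emb)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```
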
Illustration of a person shoveling snow with the help of a flamethrower

Language Models, Extended: Language models grew more reliable and less biased in 2022.

Researchers pushed the boundaries of language models to address persistent problems of trustworthiness, bias, and updatability.
Illustration of a snowman with a top hat and glasses

AI's Eyes Evolve: Vision transformer research exploded in 2022.

Researchers published well over 17,000 ViT papers in 2022. A major theme: combining self-attention and convolution.
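
As a concrete flavor of that theme, here's a generic PyTorch sketch (not any particular paper's design) of a block that mixes local neighborhoods with a depthwise convolution before applying global self-attention:

```python
# Generic sketch of combining convolution and self-attention in one block:
# a depthwise conv mixes local neighborhoods, then standard multi-head
# attention mixes all tokens globally. Illustrative, not a specific paper.

import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    def __init__(self, dim: int, heads: int, grid: int):
        super().__init__()
        self.grid = grid
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, dim)
        B, N, D = x.shape
        g = self.grid
        # Local mixing: reshape tokens back to a 2-D grid for the conv.
        grid_x = x.transpose(1, 2).reshape(B, D, g, g)
        x = x + self.local(grid_x).reshape(B, D, N).transpose(1, 2)
        # Global mixing: standard self-attention over all tokens.
        h = self.norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x

tokens = torch.randn(2, 14 * 14, 192)            # 196 patch tokens per image
out = ConvAttentionBlock(192, heads=3, grid=14)(tokens)
print(out.shape)                                 # torch.Size([2, 196, 192])
```
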
Diagram explaining Atlas, a retrieval-augmented language model that exhibits strong few-shot performance on knowledge tasks

Memorize Less, Retrieve More: How small language models can perform specialized tasks.

Large language models are trained only to predict the next word based on previous ones. Yet, given a modest fine-tuning set, they learn to perform tasks such as answering questions.
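
The retrieval-augmented pattern behind models like Atlas: fetch relevant text at inference time so the model needn't store every fact in its weights. A self-contained toy, with a simple word-overlap retriever standing in for a learned dense retriever:

```python
# Toy sketch of retrieval-augmented prompting: retrieve passages relevant
# to a question, then let the language model condition on them. A word-
# overlap cosine retriever stands in for a learned dense retriever.

from collections import Counter
import math

docs = [
    "The wild turkey is native to North America.",
    "Transformers use self-attention to relate tokens.",
    "Socrates was a classical Greek philosopher.",
]

def vec(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = vec(query)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

question = "Where is the wild turkey native to?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # feed this to the language model of your choice
```
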
Ground truth video of a road on the left and predicted video with MaskViT on the right

Seeing What Comes Next: Transformers predict future video frames.

If a robot can predict what it’s likely to see next, it may have a better basis for choosing an appropriate action — but it has to predict quickly. Transformers, for all their utility in computer vision, aren’t well suited to this because of their steep computational and memory requirements...
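
A quick back-of-the-envelope (illustrative numbers, not MaskViT's actual configuration) shows why: the token count grows with the number of frames, and full self-attention scales with its square.

```python
# Why full self-attention is costly on video, in round numbers: tokens
# scale with frames, and attention memory scales with tokens squared.

frames, height, width, patch = 16, 256, 256, 16
tokens_per_frame = (height // patch) * (width // patch)   # 256 patches
tokens = frames * tokens_per_frame                        # 4,096 tokens
attention_entries = tokens ** 2                           # one head, one layer
print(f"{tokens=} {attention_entries=:,}")                # 16,777,216 scores
```
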
Network architecture of Reasoner

What the Missing Frames Showed: Machine Learning Describes Masked Video Events

Neural networks can describe in words what’s happening in pictures and videos — but can they make sensible guesses about things that happened before or will happen afterward? Researchers probed this ability.
Dependency between compute budget and number of parameters

Right-Sizing Models for the Dataset: Finding the Best Data-to-Parameter Ratio for NLP Models

The route to improving transformer-based language models like GPT-3 and Gopher, which are trained on immense quantities of text scraped from the web, has been to increase their size. But research shows that, given a processing budget, bigger doesn’t necessarily mean better.
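
The arithmetic behind that finding, using the common approximation that training cost is about 6 × parameters × tokens and the roughly 20-tokens-per-parameter optimum the work arrived at (the exact budget below is illustrative):

```python
# Sketch of the compute-budget trade-off: training cost is commonly
# approximated as C ≈ 6 * N * D FLOPs for N parameters and D tokens,
# and the cited work found the optimum near D ≈ 20 * N. A fixed budget
# therefore favors a smaller model trained on more data.

def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens  # standard approximation

def compute_optimal(budget_flops: float) -> tuple[float, float]:
    # Solve C = 6 * N * (20 * N) for N, i.e. N = sqrt(C / 120).
    n = (budget_flops / 120) ** 0.5
    return n, 20 * n

n, d = compute_optimal(5.7e23)  # illustrative training budget
print(f"params ≈ {n:.2e}, tokens ≈ {d:.2e}")
print(f"check: {train_flops(n, d):.2e} FLOPs")
```
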
Plot demonstrating the relative sizes of parallel and monolingual examples

Massively Multilingual Translation: Machine Learning Model Trained to Translate 1,000 Languages

Recent work showed that models for multilingual machine translation can increase the number of languages they translate by scraping the web for pairs of equivalent sentences in different languages. A new study radically expanded the language repertoire through training on untranslated web text.
Technical components of No Language Left Behind and how they fit together

Massively Multilingual Translation: NLP Model Translates 200 Different Languages

Sentence pairs that have equivalent meanings in different languages — typically used to train machine translation systems — have been available in sufficient quantities for only around 100 languages. New work doubled that number and produced a more capable model.
Example of a video produced from a story-like description

Long-Form Videos from Text Stories: Google's Phenaki Generates Long-Form Video from Text

Only a week ago, researchers unveiled a system that generates a few seconds of video based on a text prompt. New work enables a text-to-video system to produce an entire visual narrative from several sentences of text.
Illustration of the Dialogue Transformer Language Model (DLM)

The Sound of Conversation: AI Learns to Mimic Conversational Pauses and Interruptions

In spoken conversation, people naturally take turns amid interjections and other patterns that aren’t strictly verbal. A new approach generated natural-sounding audio dialogs without training on text transcriptions that mark when one party should stop speaking and the other should chime in.
Panda on a swing

Text to Video Without Text-Video Training Data: Make-A-Video, an AI System from Meta, Generates Video from Text

Text-to-image generators like DALL·E 2, Midjourney, and Stable Diffusion are winning art contests and worrying artists. A new approach brings the magic of text-to-image generation to video.
