One person pouring a drink of poison in the company of another person

Follow the Curve: A Basic Introduction to Logistic Regression for Machine Learning

There was a time when logistic regression was used to classify just one thing: if you drink a vial of poison, are you likely to be labeled “living” or “deceased”? Times have changed.
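For readers who want to see the mechanics, here's a minimal sketch of binary classification with logistic regression: a linear score pushed through the sigmoid curve. The poison feature and the learned weights are invented for illustration, not taken from the article.

```python
import numpy as np

def sigmoid(z):
    # Squash any real-valued score into (0, 1) so it reads as a probability.
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x, w, b):
    # Logistic regression: a linear score followed by the sigmoid.
    return sigmoid(np.dot(x, w) + b)

# Hypothetical learned parameters for one feature: milliliters of poison.
w = np.array([0.9])
b = -4.5

for ml in (1.0, 5.0, 10.0):
    p = predict_proba(np.array([ml]), w, b)
    label = "deceased" if p >= 0.5 else "living"
    print(f"{ml:4.1f} ml -> P(deceased) = {p:.2f} -> {label}")
```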
Graph: parameters versus average accuracy across 14 NLP tasks

GPT-Free: Meta Releases OPT, a Family of Open Source Large Language Models

Itching to get your hands on a fully trained large language model? The wait is over. Meta introduced the OPT family of transformer-based language models with nearly unfettered access to source code and trained weights.
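For those itching to experiment, loading a checkpoint takes a few lines. A minimal sketch, assuming the Hugging Face transformers library and the small facebook/opt-125m checkpoint; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Generate a short continuation of a prompt.
inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```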
Shifted Patch Tokenization (SPT) | Locality Self-Attention (LSA)

Less Data for Vision Transformers: Better-Performing Vision Transformers with Less Data

Vision Transformer (ViT) outperformed convolutional neural networks in image classification, but it required more training data. New work enabled ViT and its variants to outperform other architectures with less training data.
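One of the paper's two tweaks, Locality Self-Attention, replaces the fixed sqrt(d) scale with a learnable temperature and masks each token's attention to itself. Here's a single-head PyTorch sketch under those assumptions; the multi-head plumbing and Shifted Patch Tokenization are omitted:

```python
import torch
import torch.nn as nn

class LocalitySelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        # Learnable temperature, initialized to the usual sqrt(d) scale.
        self.temperature = nn.Parameter(torch.tensor(dim ** 0.5))

    def forward(self, x):                         # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / self.temperature
        # Diagonal masking: a token may not attend to itself, which pushes
        # attention toward its neighbors.
        n = scores.size(-1)
        mask = torch.eye(n, dtype=torch.bool, device=x.device)
        scores = scores.masked_fill(mask, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

attn = LocalitySelfAttention(dim=64)
print(attn(torch.randn(2, 16, 64)).shape)         # torch.Size([2, 16, 64])
```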
GLaM model architecture

Efficiency Experts: Google's GLaM Model Uses MoE to Improve NLP Efficiency

The emerging generation of trillion-parameter language models takes significant computation to train. Activating only a portion of the network at a time can cut the requirement dramatically and still achieve exceptional results.
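The mechanism behind that claim is a mixture-of-experts layer: a router scores a bank of feed-forward experts for each token, and only the top two run. This PyTorch sketch is a simplification of the idea, not Google's implementation; the looped dispatch is written for readability, not speed:

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, dim, num_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.k = k

    def forward(self, x):                         # x: (tokens, dim)
        weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for t, (token_weights, chosen) in enumerate(zip(weights, idx)):
            # Only k of num_experts networks run for each token; the rest
            # stay inactive, saving computation.
            for w, e in zip(token_weights, chosen):
                out[t] += w * self.experts[int(e)](x[t])
        return out

layer = MoELayer(dim=32)
print(layer(torch.randn(10, 32)).shape)           # torch.Size([10, 32])
```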
Jurassic-X's software infrastructure

Neural Nets + Rules = Truer Text: NLP System Jurassic-X Fact-Checks Itself and Does Math

A new approach aims to cure text generators of their tendency to produce nonsense. AI21 Labs launched Jurassic-X, a natural language processing system that combines neural networks and rule-based programs.
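The article describes the pattern at a high level; this hypothetical sketch shows its shape. A rule routes arithmetic to a deterministic module and everything else to a neural generator. The routing rule and the generate() stub are assumptions for illustration, not AI21's design:

```python
import re

def generate(prompt: str) -> str:
    # Stand-in for a neural text generator.
    return f"[model answer to: {prompt}]"

def answer(question: str) -> str:
    # Rule-based module: arithmetic is computed exactly, never hallucinated.
    match = re.fullmatch(r"\s*(-?\d+)\s*([-+*/])\s*(-?\d+)\s*", question)
    if match:
        a, op, b = match.groups()
        ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
               "*": lambda x, y: x * y, "/": lambda x, y: x / y}
        return str(ops[op](int(a), int(b)))
    return generate(question)  # neural module handles open-ended text

print(answer("17 * 24"))                 # 408, computed by the rule
print(answer("Who founded AI21 Labs?"))  # falls through to the model
```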
Evolutionary Model of Variant Effect (EVE)

Spot the Bad Mutation: NLP System Spots DNA Mutations Associated With Disease

Every gene in the human genome exists in a variety of mutations, and some encode protein variants that cause cells to malfunction, resulting in illness. Yet which mutations are associated with disease is largely unknown.
Transformer Architecture

Transformers See in 3D: Generating 3D Images Using Transformers

Visual robots typically perceive the three-dimensional world through sequences of two-dimensional images, but they don’t always know what they’re looking at. For instance, Tesla’s self-driving system has been known to mistake a full moon for a traffic light.
Illustration: Board game pieces and puzzle pieces

How to Keep Up in a Changing Field

Machine learning changes fast. Take natural language processing. Word2vec, introduced in 2013, quickly replaced one-hot encoding with word embeddings. Transformers revolutionized the field in 2017 by parallelizing the previously sequential training process.
An illustration shows a cozy cabin where all the furniture is made out of coffee mugs.

Transformers Take Over: Transformers Applied to Vision, Language, Video, and More

In 2021, transformers were harnessed to discover drugs, recognize speech, and paint pictures — and much more.
A graph shows the cost in dollars of training large natural language processing models.

Who Can Afford to Train AI?: Cost of Training AI Is Beyond the Reach of Many Small Companies

The cost of training top-performing machine learning models has grown beyond the reach of smaller companies.
Animation showing GPT-3 in full action

GPT-3 for All: GPT-3 NLP Model Is Available for Select Azure Users

Microsoft is making GPT-3 available to selected customers through its Azure cloud service.
Animation showing how Google's search algorithm works with multimodal AI

Search Goes Multimodal: Google Upgrades Its Search Algorithm with Multimodal AI

Google will upgrade its search engine with a new model that tracks the relationships between words, images, and, in time, videos — the first fruit of its latest research into multimodal machine learning and multilingual language modeling.
Series of examples of accurate and inaccurate matches between images and text

Crawl the Web, Absorb the Bias: NLP Models Absorb Biases from Web Training Data

The emerging generation of trillion-parameter models needs datasets of billions of examples, but the most readily available source of examples on that scale — the web — is polluted with bias and antisocial expressions. A new study examines the issue.
Graph showing Expire-Span, which enables attention to ignore tokens that aren’t useful to the task at hand

Sharper Attention: NLP Transformer Technique for More Efficient Token Usage

Self-attention enables transformer networks to track relationships between distant tokens — such as text characters — in long sequences, but the computational resources required grow quadratically with input size.
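A short numpy sketch makes the quadratic term concrete: every token is compared with every other token, so the score matrix has shape (n, n), and doubling the sequence length quadruples it. Learned projections are omitted for brevity:

```python
import numpy as np

def self_attention(x):                    # x: (n_tokens, dim)
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)         # (n, n): the quadratic term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over rows
    return weights @ x

for n in (128, 256, 512):
    self_attention(np.random.randn(n, 64))
    print(f"n = {n:4d} -> score matrix holds {n * n:,} entries")
```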
Animated video showing a system that interprets electrical impulses from the brain as words

Listening to the Brain: NLP System Translates a Man's Brain Activity Into Words

Neural networks translated a paralyzed man’s brainwaves into conversational phrases. Researchers trained a system to interpret electrical impulses from the brain of a man who had lost the ability to speak 15 years earlier, and displayed the output as words on a video screen.
