
Price Prediction Turns Perilous: How Covid Broke Zillow's Pricing Algorithm

The real-estate website Zillow bought and sold homes based on prices estimated by an algorithm — until Covid-19 confounded the model’s predictive power. Zillow, whose core business is providing real-estate information for prospective buyers, shut down its house-flipping division after...

Who Can Afford to Train AI?: Training Costs Put AI Beyond the Reach of Many Small Companies

The cost of training top-performing machine learning models has grown beyond the reach of smaller companies.

This Chatbot Does Its Research: Facebook Chatbot Uses the Internet to Inform Its Answers

Chatbots often respond to human input with incorrect or nonsensical answers. Why not enable them to search for helpful information?

GPT-3 for All: GPT-3 NLP Model Is Available for Select Azure Users

Microsoft is making GPT-3 available to selected customers through its Azure cloud service.

Competition Heats Up in Mobile AI: Google Designed Its Own Tensor AI Chip for Smartphones

Google designed its own AI chip for its new smartphone — a snub to Qualcomm, the dominant chip vendor in Android phones. Google debuted the Tensor chip last week...
Richer Video Representations: Pretraining Method Improves AI's Ability to Understand Video

To understand a movie scene, viewers often must remember or infer previous events and extrapolate potential consequences. MERLOT, a new pretraining method that matches contextualized captions with their corresponding video frames, improved a model’s ability to do the same.

Artistry Is Obsolete: Is AI Making Human Artists Obsolete?

Is human creativity being replaced by the synthetic equivalent? The fear: AI is cranking out increasingly sophisticated visual, musical, and literary works. AI-generated media will flood the market, squeezing out human artists and depriving the world of their creativity.

New Models Inherit Old Flaws: AI Models May Inherit Flaws From Previous Systems

Is AI becoming inbred? The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web.

Don’t Be Evil: What if AI Enables Corporations to Become Truly Evil?

Tech companies generally try to be (or to appear to be) socially responsible. Would some rather let AI’s negative impacts slide?

Crawl the Web, Absorb the Bias: NLP Models Absorb Biases from Web Training Data

The emerging generation of trillion-parameter models needs datasets of billions of examples, but the most readily available source of examples on that scale — the web — is polluted with bias and antisocial expressions. A new study examines the issue.

Roll Over, Beethoven: AI Completes Beethoven's 10th Symphony

Ludwig van Beethoven died before he completed what would have been his tenth and final symphony. A team of computer scientists and music scholars approximated the music that might have been.
Perceptrons Are All You Need: Google Brain's Multi-Layer Perceptron Rivals Transformers

The paper that introduced the transformer famously declared, “Attention is all you need.” To the contrary, new work shows you may not need transformer-style attention at all. Hanxiao Liu and colleagues at Google Brain introduced gMLP, a simple architecture that performed some language and vision tasks as well as transformers.
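The token-mixing idea behind this line of work can be sketched in a few lines of NumPy. This is a toy illustration under my own variable names, not gMLP's actual implementation; normalization and the channel projections around the gating unit are omitted. Instead of attention, one half of the channels gates the other half after a learned linear mix across token positions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_gating(x, w_spatial, b_spatial):
    # Simplified core of a spatial gating unit: split the channels,
    # mix one half across the *sequence* axis, and use the result
    # to gate the other half elementwise.
    u, v = np.split(x, 2, axis=-1)   # each half: (seq_len, d // 2)
    v = w_spatial @ v + b_spatial    # token mixing replaces attention
    return u * v                     # elementwise gate

seq_len, d = 6, 8
x = rng.normal(size=(seq_len, d))
w = rng.normal(size=(seq_len, seq_len)) * 0.1  # learned per-position mixing weights
b = np.ones((seq_len, 1))                      # gate starts near a pass-through
out = spatial_gating(x, w, b)
assert out.shape == (seq_len, d // 2)
```

One trade-off is visible in the sketch: because the token-mixing matrix `w_spatial` has a fixed size, the block is tied to a maximum sequence length, whereas attention adapts to any length.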
Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study by researchers at Stanford's Institute for Human-Centered AI examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.
Sharper Attention: NLP Transformer Technique for More Efficient Token Usage

Self-attention enables transformer networks to track relationships between distant tokens — such as text characters — in long sequences, but the computational resources required grow quadratically with input size. Expire-Span enables attention to ignore tokens that aren’t useful to the task at hand.
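The quadratic cost is easy to see in a minimal NumPy sketch (illustrative only, not Expire-Span or any production implementation): plain scaled dot-product self-attention materializes one score for every pair of tokens.

```python
import numpy as np

def self_attention(x):
    # Plain scaled dot-product self-attention with no learned weights,
    # for illustration. x has shape (seq_len, d_model).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)  # (seq_len, seq_len): grows quadratically
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ x  # each token becomes a weighted sum of all tokens

x = np.random.default_rng(0).normal(size=(8, 4))
out = self_attention(x)
assert out.shape == x.shape  # output matches input shape
```

Doubling the sequence length quadruples the score matrix (16x16 versus 8x8); that scaling is what techniques for discarding useless tokens aim to tame.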
Transformers: Smarter Than You Think

The transformer architecture has shown an uncanny ability to model not only language but also images and proteins. New research on the Frozen Pretrained Transformer (FPT) found that it can apply what it learns from the first domain to the others.
