Language

198 Posts

Language Models Defy Logic: Large NLP models struggle with logical reasoning.

Who would disagree that, if all people are mortal and Socrates is a person, Socrates must be mortal? GPT-3, for one. Recent work shows that bigger language models are not necessarily better when it comes to logical reasoning.
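
Probes like this are typically posed as plain text prompts. Below is a minimal, hypothetical sketch of how such a syllogism might be framed for a text-completion model; build_syllogism_prompt is an illustrative helper, not code from any of the studies.

```python
# Minimal sketch (hypothetical): the kind of syllogism prompt used to
# probe a language model's logical reasoning. No specific model API is
# assumed, so the actual call to a model is left out.

def build_syllogism_prompt(major: str, minor: str, question: str) -> str:
    """Combine two premises and a question into a single probe prompt."""
    return f"{major} {minor}\n{question}\nAnswer:"

prompt = build_syllogism_prompt(
    major="All people are mortal.",
    minor="Socrates is a person.",
    question="Is Socrates mortal? Answer yes or no.",
)
print(prompt)
# A logically consistent model should answer "yes"; the work described
# above found that larger models often fail such probes.
```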
Open water tap providing sentences instead of water

Generate Articles, Publish Errors: CNET pauses its practice of writing news articles with AI.

A prominent tech-news website generated controversy (and mistakes) by publishing articles written by AI. CNET suspended its practice of publishing articles produced by a text-generation model following news reports that exposed the articles’ authorship.
Screen captures of the Sparrow Chatbot

Google’s Rule-Respecting Chatbot: Research helps AI chatbots be more truthful and less hateful.

Amid speculation about the threat posed by OpenAI’s ChatGPT chatbot to Google’s search business, a paper shows how the search giant might address the tendency of such models to produce offensive, incoherent, or untruthful dialog.
Screen capture of KokoBot having a conversation with a patient

Bot Therapy and Informed Consent: Discord's Kokobot triggers an ethics controversy.

An experiment in using chatbots to dispense mental-health counseling raised questions about ethics.
Participant responses (Likert scale) to post-survey questions about beliefs regarding OpenAI's Codex

Generated Code Generates Overconfident Coders: Copilot AI tool encourages programmers to write buggy code.

Tools that automatically write computer code may make their human users overconfident that the programs are bug-free. Stanford University researchers found that programmers who used OpenAI’s Codex, a model that generates computer code, were more likely...
Three graphs projecting data usage, each showing two extrapolations.

Will We Have Enough Data?

The world’s supply of data soon may fail to meet the demands of increasingly hungry machine learning models. Researchers at Epoch AI found that a shortage of text data could cause trouble as early as this year. Vision data may fall short within a decade.

Douwe Kiela: Less Hype, More Caution

This year we really started to see the mainstreaming of AI. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field.
Illustration of a person shoveling snow with the help of a flamethrower

Language Models, Extended: Language models grew more reliable and less biased in 2022.

Researchers pushed the boundaries of language models to address persistent problems of trustworthiness, bias, and updatability.
Illustration of The Grinch's hands coding on a tablet

Programmer’s Best Friend: Code generation services took off in 2022.

Behind schedule on a software project? There’s an app for that. Language models fine-tuned on computer code proved capable of generating software routines similar to the work of experienced developers — though the results can be hit-or-miss.
Diagram explaining Atlas, a retrieval-augmented language model that exhibits strong few-shot performance on knowledge tasks

Memorize Less; Retrieve More: How small language models can perform specialized tasks.

Large language models are trained only to predict the next word based on previous ones. Yet, given a modest fine-tuning set, they acquire enough information to learn how to perform tasks such as answering questions.
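
The diagram above refers to Atlas, which pairs a comparatively small language model with a passage retriever. The following is a toy sketch of the general retrieve-then-read pattern, not Atlas itself; the corpus, lexical scorer, and prompt format are invented for illustration.

```python
# Toy sketch of the retrieve-then-read pattern behind retrieval-augmented
# models such as Atlas. The corpus, scoring function, and prompt format
# are invented for illustration; the real system uses learned components.

from collections import Counter
import math

corpus = [
    "Socrates was a classical Greek philosopher born in Athens.",
    "The Eiffel Tower is located in Paris, France.",
    "Atlas pairs a language model with a passage retriever.",
]

def score(query: str, passage: str) -> float:
    """Toy lexical-overlap score standing in for a learned retriever."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values()) / math.sqrt(len(passage.split()) + 1)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages that score highest against the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages to the question before generation."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# A real system would send this prompt to the language model.
print(build_prompt("Where was Socrates born?"))
```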
Different screenshots of Create with Alexa feature displayed on a tablet

How Alexa Says Goodnight: Amazon Echo uses generative AI to create bedtime stories.

Too exhausted (or unimaginative) to tell your child a bedtime story? Amazon’s smart displays can spin bespoke tales on demand. A feature called Create with Alexa generates children’s stories complete with illustrations, music, and sound effects on the Amazon Echo Show device.
List of ChatGPT's examples, capabilities and limitations

More Plausible Text, Familiar Failings: ChatGPT hasn’t overcome the weaknesses of other large language models.

Members of the AI community tested the limits of the ChatGPT chatbot, unleashing an avalanche of tweets that made for sometimes-great, sometimes-troubling entertainment.
Image of body parts labeled in Hokkien, a map of Hokkien-speaking regions across the world, and the model architecture of the S2ST system

Translating a Mostly Oral Language: How Meta Trained an NLP Model to Translate Hokkien

Most speech-to-speech translation systems use text as an intermediate mode. So how do you build an automated translator for a language that has no standard written form? A new approach trained neural networks to translate a primarily oral language.
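
Reports on the system describe mapping speech to discrete acoustic units, translating between unit sequences, and synthesizing speech from the translated units. The sketch below only illustrates that pipeline shape; every function is a placeholder, not Meta's implementation.

```python
# Hypothetical sketch of a textless speech-to-speech pipeline:
# speech -> discrete units -> translated units -> speech.
# Each function is a stand-in for a learned model, not the real system.

def speech_to_units(waveform: list[float]) -> list[int]:
    """Placeholder for a self-supervised encoder that discretizes speech."""
    return [int(abs(sample) * 100) % 50 for sample in waveform]

def translate_units(source_units: list[int]) -> list[int]:
    """Placeholder for a sequence-to-sequence unit translation model."""
    return [(unit + 1) % 50 for unit in source_units]

def units_to_speech(units: list[int]) -> list[float]:
    """Placeholder for a vocoder that synthesizes audio from units."""
    return [unit / 50.0 for unit in units]

source_waveform = [0.1, -0.3, 0.7, 0.2]  # pretend source-language audio
target_waveform = units_to_speech(translate_units(speech_to_units(source_waveform)))
print(target_waveform)
```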
Charts showing performance scaling of TPU vs. GPU on MLPerf-Transformer, and Cerebras GPT-3 XL performance

Built to Scale: Andromeda Supercomputer from Cerebras Speeds Up AI

A new computing cluster delivers more bang per chip. Cerebras unveiled Andromeda, a supercomputer based on its processors. Unlike conventional clusters, the system’s processing speed rises linearly with additional processors.
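
Linear scaling means that doubling the number of processors roughly doubles throughput; conventional clusters typically fall short because some work remains serial or goes to communication. The comparison below uses Amdahl's law with an arbitrary 5 percent serial fraction purely as an illustration, not as a measurement of Andromeda or any GPU cluster.

```python
# Illustration only: ideal linear scaling vs. Amdahl's law with an assumed
# 5% serial fraction. These numbers are not measurements of any real system.

def amdahl_speedup(n_processors: int, serial_fraction: float = 0.05) -> float:
    """Speedup limited by the fraction of work that cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for n in (1, 4, 16, 64):
    print(f"{n:>3} processors: ideal {float(n):5.1f}x, Amdahl {amdahl_speedup(n):5.2f}x")
```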
Dependency between compute budget and number of parameters

Right-Sizing Models for the Dataset: Finding the Best Data-to-Parameter Ratio for NLP Models

The route to improving transformer-based language models like GPT-3 and Gopher, which are trained on immense quantities of text scraped from the web, has been to increase their size. But research shows that, given a processing budget, bigger doesn’t necessarily mean better.
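
The research referred to here is DeepMind's compute-optimal scaling study (the Chinchilla result). It estimated training compute at roughly 6 × parameters × training tokens and found that a fixed budget is best spent at roughly 20 tokens per parameter rather than on the largest possible model. The sketch below applies those approximate rules of thumb; the constants and the example budget are ballpark figures, not values taken from the paper.

```python
# Back-of-the-envelope sketch of compute-optimal sizing, assuming
# compute ~ 6 * parameters * tokens and ~20 training tokens per parameter.
# Both constants are rough rules of thumb, not exact values from the paper.

import math

def compute_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Split a compute budget into (parameters, tokens) at the given ratio."""
    params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    tokens = tokens_per_param * params
    return params, tokens

budget = 3e23  # example training budget in FLOPs, roughly GPT-3 scale
params, tokens = compute_optimal(budget)
print(f"~{params:.1e} parameters trained on ~{tokens:.1e} tokens")
```

For a budget of that rough size, the heuristic favors a model several times smaller than GPT-3 trained on several times more data, which is the gist of the study's conclusion.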
