More Plausible Text, Familiar Failings

ChatGPT hasn’t overcome the weaknesses of other large language models


Members of the AI community tested the limits of the ChatGPT chatbot, unleashing an avalanche of tweets that made for sometimes-great, sometimes-troubling entertainment.

What’s new: OpenAI launched a public demo of ChatGPT, the latest in the research lab’s line of large language models. Like its predecessors, ChatGPT generates text in a variety of styles, for a variety of purposes. Unlike them, it does so with greater finesse, detail, coherence, and — dare we say it? — personality. (How else to characterize a model that apologizes for its misbehavior?) One million users have signed up since the launch last Wednesday.

How it works: ChatGPT is a next-generation language model (of a class referred to as GPT-3.5) trained in the manner of OpenAI’s earlier InstructGPT, but on conversations. It was fine-tuned to minimize harmful, untruthful, or biased output using a combination of supervised learning and what OpenAI calls reinforcement learning from human feedback, in which humans rank potential outputs and a reinforcement learning algorithm rewards the model for generating outputs similar to those that rank highly.
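The ranking step described above can be sketched as a pairwise loss on a reward model: the model is pushed to assign a higher scalar reward to the output human labelers preferred. The function name and the example rewards below are illustrative assumptions, not OpenAI’s actual code:

```python
import math

def pairwise_ranking_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry-style loss for training a reward model from human
    rankings: the loss shrinks as the reward assigned to the preferred
    output grows relative to the reward of the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A pair the reward model already orders correctly incurs a small loss;
# a misordered pair incurs a large one, pushing the model toward the
# human ranking.
low = pairwise_ranking_loss(2.0, -1.0)   # preferred output scored higher
high = pairwise_ranking_loss(-1.0, 2.0)  # preferred output scored lower
```

In OpenAI’s described setup, a reward model trained this way then scores the chatbot’s generations during a reinforcement learning phase, so the chatbot is rewarded for producing the kind of output humans ranked highly.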

Strengths and weaknesses: Like other recent language models, ChatGPT produces output that veers between stunningly brilliant and mind-numbingly stupid.

  • Users showed off the model’s clever answers, stories, essays, jokes, raps, poems, text-to-image prompts, pickup lines — even a touching letter from Santa Claus to a child in which he admitted that he was a sham but reassured the recipient that parental love was real.
  • ChatGPT showed it can code like a pro, generating a program that draws on a variety of APIs to fetch the current weather based on the user’s location. Like many a pro, though, it sometimes produced code that didn’t work.
  • The model proved weak at math, however, failing to multiply algebraic expressions. Similarly, its sense of logic foundered in a word problem that required it to deduce family relationships. It concluded that the answer “is not possible to determine” — even though the family had only three members.
  • Like other large language models, ChatGPT freely mingled facts with nonsense. The question-and-answer site Stack Overflow temporarily banned answers generated by ChatGPT because moderating the flood of misleading information submitted since the demo’s release had become unmanageable.
  • Safeguards that OpenAI presumably put in place to block undesirable outputs proved brittle. Asked bluntly how to break into someone’s house, the model refused to answer; but prompted with a portion of a story in which a character asked the same question, it delivered a short course in burglary.
  • It also expressed the social biases that have plagued similar models. Asked to write a Python function to evaluate the quality of scientists based on a JSON description of their race and gender, it returned a program that favored white, male scientists to the exclusion of all others.

Behind the news: ChatGPT arrived one week after Meta withdrew Galactica, a model designed to generate scientific papers. Galactica was promoted as an aid to researchers aiming to publish their findings, but users of the public demo prompted it to generate sober dissertations on nonsensical topics like land squid and the health benefits of ingesting ground glass.

Why it matters: Language is among the simplest and most convenient ways for humans to communicate. Programs that grasp what they’re told and respond with meaningful information will unlock a wide range of everyday uses. Closer to home, many observers proposed ChatGPT or something like it as a superior alternative to current web search. First, though, researchers face the steep challenge of building a language model that neither makes up facts nor ignores limits on its output.

We’re thinking: Sometimes technology is overhyped — reinforcement learning, after solving Atari games, may be an example — but large language models are likely to find a place in significant applications. Meanwhile, many details remain to be worked out and the AI community must strive to minimize potential harm.
