When Models Are Confident — and Wrong

Language models like ChatGPT need a way to express degrees of confidence.

Image caption: Question asked by Andrew Ng and answered by the latest version of ChatGPT

Dear friends,

One of the dangers of large language models (LLMs) is that they can confidently make assertions that are blatantly false. This raises worries that they will flood the world with misinformation. If they could moderate their degree of confidence appropriately, they would be less likely to mislead.

People are prone to following authority figures. Because a lot of text on the internet is written in an authoritative style — hopefully because the authors know what they’re talking about — LLMs have learned to mimic this style. Unfortunately, LLMs can speak in this style even when they get the facts completely wrong.

We don’t expect people to be right all the time, but we don’t like it when they’re simultaneously confident and wrong. Real experts speak in a range of styles: confident when they know what they’re talking about, but also explaining the boundaries of their knowledge when they run up against them and helping the audience understand the range of possibilities. For example, when asked how to build an AI application, I might propose one approach but also describe the range of algorithms one might consider. Knowing what you know and don’t know is a useful trait of expertise.

Playing with ChatGPT, the latest language model from OpenAI, I found it to be an impressive advance over its predecessor GPT-3. Occasionally it says it can’t answer a question. This is a great step! But, like other LLMs, it can be hilariously wrong. Work lies ahead to build systems that can express different degrees of confidence.

For example, a model like Meta’s Atlas or DeepMind’s RETRO that synthesizes multiple articles into one answer might infer a degree of confidence based on the reputations of the sources it draws from and the agreement among them, and then change its communication style accordingly. Pure LLMs and other architectures may need other solutions.
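To make that concrete, here’s a minimal sketch, in Python, of one way a retrieval-based system could map source quality and inter-source agreement onto a communication style. The reputation scores, the word-overlap agreement measure, and the hedging thresholds are all illustrative assumptions of mine, not anything Atlas, RETRO, or ChatGPT actually does.

```python
# Illustrative sketch only: score confidence from source reputation and
# agreement among retrieved passages, then pick a hedging style.
# Reputation values and the word-overlap measure are simplifying assumptions.
from dataclasses import dataclass
from itertools import combinations
from typing import List


@dataclass
class Passage:
    text: str
    reputation: float  # 0.0 = unknown source, 1.0 = highly trusted (assumed given)


def pairwise_agreement(passages: List[Passage]) -> float:
    """Average word-overlap (Jaccard) across all pairs of retrieved passages."""
    if len(passages) < 2:
        return 1.0  # a single source can't disagree with itself
    scores = []
    for a, b in combinations(passages, 2):
        wa, wb = set(a.text.lower().split()), set(b.text.lower().split())
        scores.append(len(wa & wb) / len(wa | wb))
    return sum(scores) / len(scores)


def confidence(passages: List[Passage]) -> float:
    """Combine average source reputation with inter-source agreement."""
    avg_reputation = sum(p.reputation for p in passages) / len(passages)
    return avg_reputation * pairwise_agreement(passages)


def style_answer(answer: str, passages: List[Passage]) -> str:
    """Prefix the answer with a hedge that matches the confidence score."""
    c = confidence(passages)
    if c > 0.7:
        return answer
    if c > 0.4:
        return "The sources mostly agree that " + answer
    return "I'm not certain, but some sources suggest that " + answer


if __name__ == "__main__":
    passages = [
        Passage("the eiffel tower is 330 meters tall", reputation=0.9),
        Passage("the eiffel tower stands about 330 meters tall", reputation=0.8),
    ]
    print(style_answer("the Eiffel Tower is about 330 meters tall.", passages))
```

The point isn’t this particular formula; it’s that once a system has any numeric signal of confidence, mapping it to a more or less hedged style is straightforward.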

If we can get generative algorithms to express doubt when they’re not sure they’re right, it will go a long way toward building trust and ameliorating the risk of generating misinformation.

Keep learning!

Andrew
