Retrieval Enhanced Transformer (RETRO)



Language Models, Extended: Large language models grew more reliable and less biased in 2022.

Researchers pushed the boundaries of language models to address persistent problems of trustworthiness, bias, and updatability.
Images: RETRO architecture; Gopher (280B) vs. state of the art

Large Language Models Shrink: Gopher and RETRO prove lean language models can push boundaries.

DeepMind released three papers that push the boundaries — and examine the issues — of large language models.
