University of North Carolina

2 Posts


More Factual LLMs: FactTune, a method to fine-tune LLMs for factual accuracy without human feedback

Large language models sometimes generate false statements. New work makes them more likely to produce factual output.

Dynamic Benchmarks: A platform for fooling language models

Benchmarks provide a scientific basis for evaluating model performance, but they don’t necessarily reflect human cognitive abilities. Facebook aims to close that gap with a dynamic benchmarking method that keeps humans in the loop.
