As deep learning becomes more resource-intensive, labs with deeper pockets tend to achieve better results. One consequence is that less wealthy organizations often can't replicate state-of-the-art successes. Some observers call this replication gap a crisis.

What's new: Members of the deep learning community are asking researchers to be more forthright about the hardware, software, and computing power behind their results, according to Wired. Such disclosure could help other researchers replicate those results without wasting time and money.

How it works: NeurIPS asks authors submitting papers to its December conference to complete a reproducibility checklist.

  • Submissions must provide clearly written descriptions of algorithms, mathematical underpinnings, and models, along with how much memory they needed, how much data they crunched, and, crucially, how many times the model was run. Authors must also link to the source code. (A sketch of how such details might be recorded appears after this list.)
  • Theoretical claims must include a list of assumptions, and the authors must publish the complete proof.
  • Authors must back up figures and tables with either an open data set or a simulation.
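
To make the checklist concrete, here's a minimal sketch of how an author might log seeds, environment, and run-count details alongside their results. It assumes PyTorch and NumPy are in use; the field names and the repro_report.json output file are illustrative choices, not an official NeurIPS schema.

```python
import json
import platform
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed the common sources of randomness so runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


def build_repro_report(seed: int, num_runs: int, dataset_size: int) -> dict:
    """Collect the environment details the checklist asks authors to disclose.

    The keys here are hypothetical; pick whatever names your project uses.
    """
    return {
        "seed": seed,
        "num_runs": num_runs,          # how many times the model was run
        "dataset_size": dataset_size,  # how much data was crunched
        "platform": platform.platform(),
        "python_version": platform.python_version(),
        "torch_version": torch.__version__,
        "numpy_version": np.__version__,
        "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none",
    }


if __name__ == "__main__":
    set_seed(42)
    report = build_repro_report(seed=42, num_runs=5, dataset_size=1_000_000)
    with open("repro_report.json", "w") as f:
        json.dump(report, f, indent=2)
```

Committing a file like this to the same repository as the source code gives would-be replicators the details the checklist targets without forcing them to hunt through the paper.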

Behind the news: In 2005, Stanford professor John Ioannidis published the landmark paper, “Why Most Published Research Findings Are False.” The work pointed out that science in many disciplines — particularly social psychology and medicine — relies on foundational research that hasn’t been replicated. Many observers fear that AI could fall into the same trap.

Why it matters: Science rests on hypotheses confirmed by experiments that yield consistent results every time they’re performed. AI is making rapid progress, but building on results that haven’t been verified puts that momentum at risk.

We’re thinking: In the natural sciences, unverified results fuel skeptics of anthropogenic climate change, who appeal to uncertainty to avoid decisive action on the greatest challenge of our time. Maintaining the highest scientific standards in AI is the best protection against critics who might take advantage of this issue to impede progress in the field.
