A consortium of top AI experts proposed concrete steps to help machine learning engineers secure the public’s trust.

What’s new: Dozens of researchers and technologists recommended actions to counter public skepticism toward artificial intelligence, fueled by issues like data privacy and accidents caused by autonomous vehicles. The co-authors include scholars at universities like Cambridge and Stanford; researchers at companies including Intel, Google, and OpenAI; and representatives of nonprofits such as the Partnership on AI and the Center for Security and Emerging Technology.

Recommendations: Lofty pronouncements about ethics aren’t enough, the authors declare. Like the airline industry, the machine learning field must build an “infrastructure of technologies, norms, laws, and institutions” the public can depend on. The authors suggest 10 trust-building moves that fall into three categories.

  • Institutional mechanisms such as third-party auditing to verify the accuracy of company claims and bounties for researchers who discover flaws in AI systems.
  • Software mechanisms that make it easier to understand how a given algorithm works or capture information about a program’s development and deployment for subsequent auditing.
  • Hardware mechanisms that protect data privacy, along with computing-power subsidies for academic researchers who may lack the resources to evaluate what large-scale AI systems are doing.

Behind the news: The AI community is searching for ways to boost public trust amid rising worries about surveillance, the impact of automation on human labor, autonomous weapons, and computer-generated disinformation. Dozens of organizations have published their own principles, from Google and Microsoft to the European Commission and the Vatican. Even the U.S. Department of Defense published guidelines on using AI during warfare.

Why it matters: Widespread distrust in AI could undermine the great good this technology can do, frightening people away or prompting politicians to hamstring research and deployment.

We’re thinking: Clear standards and processes for verifying claims about AI systems give regulators and users a way to demand evidence before trusting them. This document’s emphasis on auditing, explainability, and access to hardware makes a solid cornerstone for further efforts.
