The U.S. federal government released a plan to develop technical standards for artificial intelligence, seeking to balance its aim to maintain the nation’s tech leadership and economic power with a priority on AI safety and trustworthiness.

What’s new: Responding to a February executive order, the National Institute of Standards and Technology issued its roadmap for developing AI standards that would guide federal policy and applications. The 46-page document seeks to foster standards strict enough to prevent harm but flexible enough to drive innovation.

What it says: The plan describes a broad effort to standardize in areas as disparate as terminology and user interfaces, benchmarking and risk management. It calls for coordination among public agencies, institutions, businesses, and foreign countries, emphasizing the need to develop trustworthy AI systems that are accurate, reliable, secure, and transparent.

Yes, but: The authors acknowledge the risk of trying to corral such a dynamic enterprise. In fact, they admit that they aren’t entirely sure how to go about it. “While there is broad agreement that these issues must factor into US standards,” they write, “it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions.”

We’re thinking: NIST’s plan plants a stake in the ground for equity, inclusivity, and cooperation. Here’s hoping it can bake those values, if not specific implementations, into the national tech infrastructure and spread them abroad.