AI Firms Agree to Voluntary Guidelines: U.S. companies agree to uphold a list of responsible AI commitments.

In the absence of nationwide laws that regulate AI, major U.S. tech companies pledged to abide by voluntary guidelines — most of which they may already be following.

What’s new: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to uphold a list of responsible-AI commitments, the White House announced.

How it works: President Biden, Vice President Harris, and other administration officials formulated the terms of the agreement in consultation with tech leaders. The provisions fall into three categories:

  • Safety: The companies pledged to allow independent experts to test their AI systems before release and to share information about safety issues and potential vulnerabilities with governments, academia, and civil society.
  • Security: They promised to invest in cybersecurity, especially to protect proprietary model weights, and to enable users to report vulnerabilities.
  • Trust: The companies vowed to publicly report their models’ capabilities, limitations, and risks; to prioritize research into their potential social harms; and to develop systems to meet “society’s greatest challenges” such as climate change. They also promised to develop methods, such as watermarks, that identify generated output.

Behind the news: The surge of generative AI has spurred calls to regulate the technology. The rising chorus has given companies ample incentive to accept voluntary limits while trying to shape forthcoming mandates.

  • United Nations Secretary-General António Guterres backed a proposal to create an international body, akin to the International Atomic Energy Agency, that would establish governing principles for AI.
  • In June, the European Parliament passed a draft of the AI Act, moving the European Union legislation closer to becoming law. The draft, which is still undergoing revision, would designate generative AI applications as “high-risk” and subject them to regular audits and government oversight.
  • In January, the Chinese government issued rules that require labeling of generated media and prohibit output that creates false information or threatens national security.

Yes, but: Except for watermarking generated output, the commitments are relatively easy to fulfill, and some companies may be able to say that they already fulfill them. For instance, many established companies employ independent parties to test for safety and security, and some publish papers that describe the risks of their AI research. Leaders in the field already discuss limitations, work to reduce risks, and launch initiatives that address major societal problems. Moreover, the agreement provides no mechanism to determine whether companies have kept their promises or to hold shirkers to account.

Why it matters: Although some U.S. cities and states regulate AI in piecemeal fashion, the country lacks overarching national legislation. Voluntary guidelines, if companies observe them in good faith and avoid hidden pitfalls, could ease the pressure to assert top-down control over how the technology is developed and deployed.

We’re thinking: These commitments are a step toward guiding AI forward in ways that maximize benefits and minimize harms, even if some companies already fulfill them. Nonetheless, laws are necessary to ensure that AI’s benefits are spread far and wide throughout the world. Important work remains to craft such laws, and they’ll be more effective if the AI community helps to shape them.
