White House Moves to Regulate AI
All about the U.S. executive order on AI use and development


U.S. President Biden announced directives that regulate AI, invoking his legal authority to promote national defense and respond to national emergencies.

What’s new: The White House issued an executive order that requires AI companies and institutions to report and test certain models and directs federal agencies to set standards for AI. The order follows a six-month process of consultation with the AI community and other stakeholders.

How it works: The executive order interprets the Defense Production Act, a Cold War-era law that gives the president powers to promote national defense and respond to emergencies, and thus can be implemented without further legislation. It focuses on foundation models, or general-purpose models that can be fine-tuned for specific tasks:

  • Safety: Developers must notify the government when they train a model whose processing budget exceeds 10^26 integer or floating-point operations, which corresponds roughly to a model with 1 trillion parameters (see the back-of-the-envelope sketch after this list). A lower threshold applies to models trained on biological sequence data. (These are preliminary values to be updated regularly.) In addition, developers must watermark generated outputs and share the results of safety tests conducted by so-called red teams.
  • Privacy: The federal government will support tools to protect users’ privacy and evaluate AI developers’ collection of personal information. The order calls on Congress to pass comprehensive data-privacy legislation, reflecting the president’s limited power in this area.
  • Civil rights: Federal administrators of benefits, contractors, and landlords are barred from using algorithms to discriminate against members of protected groups. The Department of Justice and civil rights offices of various government agencies will set best practices for the use of AI in criminal justice and civil rights investigations.
  • Competitiveness: A new National AI Research Resource will support researchers with processing power, data, tools, and expertise. The Federal Trade Commission will assist small business owners in commercializing AI developments. Immigration authorities will lower barriers for workers with expertise in critical areas like software engineering.
  • Global leadership: The administration will work with other countries and nongovernmental organizations to set international standards for safety and risk management as well as an agenda for applying AI to solve global problems. 
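The order itself specifies only the 10^26-operation threshold; relating that number to parameter count requires an assumption the order does not make. A common heuristic (from Kaplan et al., 2020) estimates training compute for a dense transformer as FLOPs ≈ 6 × N × D, where N is the parameter count and D is the number of training tokens. A minimal sketch, assuming that heuristic:

```python
# Back-of-the-envelope check of the executive order's 1e26-operation threshold.
# Assumption (not from the order): training compute for a dense transformer is
# approximately 6 * N * D FLOPs, where N is the parameter count and D is the
# number of training tokens.

THRESHOLD_FLOPS = 1e26  # reporting threshold set by the executive order


def training_flops(params: float, tokens: float) -> float:
    """Estimate total training FLOPs using the 6 * N * D heuristic."""
    return 6.0 * params * tokens


def tokens_at_threshold(params: float) -> float:
    """Token budget at which a model of a given size hits the threshold."""
    return THRESHOLD_FLOPS / (6.0 * params)


if __name__ == "__main__":
    one_trillion_params = 1e12
    print(f"A 1T-parameter model reaches 1e26 FLOPs after "
          f"{tokens_at_threshold(one_trillion_params):.1e} training tokens")
```

Under these assumptions, a 1-trillion-parameter model crosses the threshold after roughly 1.7 × 10^13 (17 trillion) training tokens, which is in the neighborhood of the widely cited compute-optimal budget of about 20 tokens per parameter. That is consistent with the order's threshold corresponding roughly to 1-trillion-parameter models.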

Behind the news: The executive order was long in the making and joins other nations’ moves to limit AI.

  • In May, the White House met with the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI to consult with those companies and urge them to take actions consistent with the administration’s AI Bill of Rights and Risk Management Framework.
  • The following month, President Biden convened a summit with AI researchers and announced a public working group on AI.
  • In July, the White House reached voluntary agreements with seven AI companies to follow administration guidelines.
  • This week, an international roster of regulators, researchers, businesses, and lobbyists convenes for the UK’s global summit on AI safety. China already has imposed restrictions on face recognition and synthetic media, and the European Union’s upcoming AI Act is expected to restrict models and applications deemed high-risk.

Why it matters: While Europe and China move aggressively to control specific uses and models, the White House seeks to balance innovation against risk, specifically with regard to national defense but also social issues like discrimination and privacy. The executive order organizes the federal bureaucracy to grapple with the challenges of AI and prepares the way for national legislation. 

We’re thinking: We need laws to ensure that AI is safe, fair, and transparent, and the executive order has much good in it. But it’s also problematic in fundamental ways. For instance, foundation models are the wrong focus. Burdening basic technology development with reporting and standards places a drag on innovation. It makes more sense to regulate applications that carry known risks, such as underwriting tools, healthcare devices, and autonomous vehicles. We welcome regulations that promote responsible AI and look forward to legislation that limits risks without hampering innovation.
