Rising Calls for Regulation: Tech CEOs and governments aim for AI laws.

Amid growing worries about AI’s power, tech leaders and politicians alike are calling for regulation of the technology.

What’s new: Leaders of OpenAI, Microsoft, and Google spoke publicly in favor of regulation and met privately with world leaders. Meanwhile, national governments proposed new guardrails for generative AI.

Execs rally: Corporate leaders hit the road to spread words of caution.

  • OpenAI CEO Sam Altman embarked on a world tour to express support for new laws including the European Union’s forthcoming AI Act. In an open letter with co-founders Greg Brockman and Ilya Sutskever, he called for a global regulatory body to oversee superintelligent machines. Earlier in May, Altman testified in favor of regulating AI before the U.S. Congress.
  • In addition, OpenAI will award 10 grants of $100,000 each to develop AI governance frameworks. The company is accepting applications through June 24.
  • Microsoft president Brad Smith echoed Altman’s calls for a U.S. agency to regulate AI.
  • Separately, Google CEO Sundar Pichai agreed to collaborate with European lawmakers to craft an “AI pact,” a set of voluntary rules for developers to follow before EU regulations come into force.

Regulators respond: Several nations took major steps toward regulating AI.

  • At its annual meeting in Japan, the Group of Seven (G7), an informal bloc of industrialized democratic governments, announced the Hiroshima Process, an intergovernmental task force empowered to investigate risks of generative AI. G7 members, which include Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, vowed to craft mutually compatible laws and regulate AI according to democratic values. These include fairness, accountability, transparency, safety, data privacy, protection from abuse, and respect for human rights.
  • U.S. President Joe Biden issued a strategic plan for AI. The initiative calls on U.S. regulatory agencies to develop public datasets, benchmarks, and standards for training, measuring, and evaluating AI systems.
  • Earlier this month, France’s data privacy regulator announced a framework for regulating generative AI.

Behind the news: China is the only major world power that explicitly regulates generative AI. In March, EU officials rewrote the union’s AI Act, which has not yet been enacted, to classify generative AI models as “high-risk,” which would make them subject to bureaucratic oversight and regular audits.

Why it matters: As generative AI’s capabilities grow, so do worries about its potential pitfalls. Thoughtful regulations and enforcement mechanisms could bring AI development and application into line with social benefit. For businesses, well-defined guidelines would help them avoid harming the public and damaging their reputations, and head off legal restrictions that would block their access to customers.

We’re thinking: Testifying before the U.S. Congress, Sam Altman recommended that startups be regulated more lightly than established companies. Kudos to him for taking that position. The smaller reach of startups means less risk of harm, and hopefully they will grow into incumbents subject to more stringent regulation.
