United States lawmakers are getting a crash course in AI.
What’s new: Chuck Schumer, the majority leader in the U.S. Senate, announced an unusual plan to educate legislators who are crafting AI regulations, The New York Times reported. It could lead to legislation “within months,” he said.
How it works: The senator calls his program SAFE Innovation, an acronym for four regulatory priorities: security, accountability, foundations, and explain [sic].
- The program’s centerpiece is a series of nonpartisan listening sessions with industry executives, researchers, and civil rights activists, set to kick off later this year.
- The framework seeks to illuminate fundamental questions such as how to ensure safety, security, and accountability without hindering innovation (a linchpin of social, economic, and geopolitical priorities); whether authority over AI should be centralized or distributed; the relative roles of taxation and subsidies; and the optimal balance between protecting proprietary developments and encouraging open technology.
- The plan aims to encourage politicians from both major U.S. parties to craft legislation jointly.
Behind the news: Schumer’s move reflects growing interest in regulating AI among U.S. lawmakers.
- Representatives of both parties introduced a bill that would create a 20-member commission to develop guidelines for further legislation.
- A Senate subcommittee recently probed the technology’s risks and opportunities in a hearing attended by executives from IBM and OpenAI as well as cognitive scientist and AI critic Gary Marcus, and the White House met with leaders of Google, Microsoft, OpenAI, and the startup Anthropic.
- Ten U.S. states and several local jurisdictions have enacted AI-related restrictions, such as bans on police use of face recognition and New York City’s law, set to take effect in July, which will penalize employers who use automated hiring software.
- In October 2022, the Biden administration released an AI Bill of Rights that focuses on five key themes: safety and effectiveness, personal privacy, protection against algorithmic discrimination, disclosure of impact on users, and human alternatives to AI.
Yes, but: Any proposal must overcome fundamental disagreements between the two major parties, especially over whether a new, dedicated agency should oversee AI or whether that can be left to existing agencies. Moreover, some observers worry that Schumer’s deliberative approach could slow down legislative efforts that are already underway.
Why it matters: Thoughtful AI regulations must strike a delicate balance between encouraging innovation and protecting the public. It’s imperative that lawmakers — few of whom have a background in technology or science — understand the nuances.
We’re thinking: U.S. politics are increasingly divided. Bipartisan listening sessions on AI may serve a dual goal of educating lawmakers and uniting them around a shared vision.