A proposed European Union law that seeks to control AI is raising questions about what kinds of systems it would regulate.
What's new: Experts at a roundtable staged by the Center for Data Innovation debated the implications of limitations in the EU’s forthcoming Artificial Intelligence Act.
The controversy: The legislation is in the final stages of revision and moving toward a vote next year. As EU parliamentarians worked to finalize the proposed language, the French delegation introduced the term “general-purpose AI,” which the amendment defines as any system that can “perform generally applicable functions such as image/speech recognition, audio/video generation, pattern-detection, question-answering, translation, etc., and is able to have multiple intended and unintended purposes.” Providers of general-purpose AI would be required to assess foreseeable misuse, perform regular audits, and register their systems in an EU-wide database. The proposal has prompted worries that the term’s vagueness could hinder AI development.
The discussion: The roundtable’s participants were drawn from a variety of companies, nongovernmental organizations, and government agencies. They generally agreed that the proposed definition of general-purpose AI was too broad and vague. The consequences, they warned, could include criminalizing AI development and weakening protection against potential abuses.
- Anthony Aguirre, strategist at the Future of Life Institute, noted that “general-purpose AI” has meanings beyond those that the proposed law delineates.
- Kai Zenner, advisor to German EU parliamentarian Axel Voss, expressed concern over the law’s potential impact on open-source development. He argued that it would make anyone who worked on an open-source model legally responsible for its impact, destroying the trust essential to building such software.
- Alexandra Belias, DeepMind’s international public policy manager, recommended augmenting the definition with criteria, like the range of tasks a model can perform.
- Irene Solaiman, policy director at Hugging Face, said the proposed definition fails to account for potential future capabilities and misuses. She suggested that regulators classify AI systems according to their use cases to see where they might fit into existing laws.
- Andrea Miotti, head of policy at Conjecture, an AI research lab, suggested using terms more commonly used and better understood by the AI community, such as “foundation models.” He also said the law focused too tightly on limiting system providers rather than protecting users.
Behind the news: Initially proposed in 2021, the AI Act would sort AI systems into three risk levels. Applications that pose unacceptable risk, such as social-credit scoring and real-time face recognition, would be banned outright. High-risk applications, such as those that process biometric data, would face heightened scrutiny including a mandated risk-management system. Applications in the lowest risk level, such as spam filters and video games, could use AI without restriction.
Why it matters: The AI Act, like the EU’s General Data Protection Regulation of 2018, likely will have consequences far beyond the union’s member states. Regulators must thread the needle between overly broad wording, which risks stifling innovation and raising development costs, and narrow language that leaves openings for serious abuse.
We're thinking: The definition of AI has evolved over the years, and it has never been easy to pin down. Once, an algorithm for finding the shortest path between two nodes in a graph (the A* algorithm) was cutting-edge AI. Today many practitioners view it as a standard part of any navigation system. Given the challenge of defining general-purpose AI — never mind AI itself! — it would be more fruitful to regulate specific outcomes (such as what AI should and shouldn't do in specific applications) rather than try to control the technology itself.
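For readers who haven't encountered it, A* is worth seeing in miniature: it is just best-first search over a weighted graph, guided by a heuristic estimate of the remaining distance. The sketch below is illustrative (the graph, heuristic, and function names are our own, not part of any standard); with a zero heuristic it reduces to Dijkstra's algorithm.

```python
import heapq

def a_star(graph, h, start, goal):
    """Find the shortest path from start to goal in a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs
    h: heuristic h(node) estimating remaining cost (must not overestimate)
    Returns (path, cost), or (None, inf) if goal is unreachable.
    """
    frontier = [(h(start), start)]  # priority queue keyed by estimated total cost
    g = {start: 0}                  # best known cost from start to each node
    came_from = {}                  # predecessor links for path reconstruction
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            # Walk predecessor links back to the start to recover the path.
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return list(reversed(path)), g[goal]
        for neighbor, cost in graph.get(node, []):
            tentative = g[node] + cost
            if tentative < g.get(neighbor, float("inf")):
                g[neighbor] = tentative
                came_from[neighbor] = node
                heapq.heappush(frontier, (tentative + h(neighbor), neighbor))
    return None, float("inf")

# Toy example: four nodes, cheapest route from A to D.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
path, cost = a_star(graph, lambda n: 0, "A", "D")
print(path, cost)  # ['A', 'B', 'C', 'D'] 4
```

Routine as it looks today, this is the kind of search that once counted as cutting-edge AI, which is the point: the boundary of "AI" keeps moving.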