In the absence of national laws that specifically regulate AI in the United States, California moved to regulate the technology within its own borders, passing four bills in less than a month.
What’s new: Governor Gavin Newsom signed into law SB 53, which requires large AI developers to disclose their safety protocols. In addition, SB 243 regulates chatbots, AB 316 prevents those who build or use AI systems from escaping liability by blaming the systems’ autonomous behavior, and AB 853 requires AI-generated media to be labeled clearly.
How it works: Together, the bills don’t ban any particular applications outright or restrict AI development, but they require extensive disclosures, either to the state or directly to users. Some took effect immediately while others, such as SB 243, will phase in by January 2027.
- SB 53 requires that developers of frontier models, defined as those whose training requires processing greater than 10^26 integer or floating-point operations — a level currently associated with very large and powerful models — provide more transparency about their models’ capabilities and potential risks. It also requires that developers with annual revenue above $500 million publish safety frameworks that show how they follow industry and international standards and assess and mitigate risk. In addition, they must report on their models’ uses and capabilities at release and report any critical safety incidents within 15 days. Noncompliant developers could face fines of up to $1 million. The law protects whistleblowers within AI companies against retaliation and provides anonymous channels to report illegal or unsafe behavior. The bill takes effect in June 2026.
- SB 243 aims to prevent chatbots from harming minors and other vulnerable users. It bars exposing minors to sexual content and requires developers to disclose that chatbot output is AI-generated and to warn generally that chatbots may not be suitable for minors. The bill also requires developers to provide specific support to users who discuss suicide or self-harm and to issue an annual report on mental health issues related to use of their chatbots.
- AB 316 prohibits defendants in lawsuits from shifting responsibility onto AI systems by claiming that they harmed plaintiffs autonomously. It applies to anyone who develops, modifies, or uses an AI system.
- AB 853 requires that AI-generated media be labeled clearly as such. Furthermore, it requires that all media (AI-generated or not) include information about who made it and how. The bill requires that cameras, audio recorders, computers, and other media-capture devices record such provenance data, and that large-scale media distributors (2,000,000 monthly active users or more) disclose it.
What they’re saying: Reaction among AI developers has been mixed, and SB 53 drew the most vocal and varied commentary.
- Collin McCune, head of government affairs at the venture capital firm Andreessen Horowitz, said SB 53 puts startups at a disadvantage: “States have an important role in regulating AI. But if lawmakers really want to protect their citizens, this isn’t the way. They should target harmful uses through consumer protection laws and similar safeguards — not dictate how technologists build technology.”
- Chris Lehane at OpenAI opposed California’s approach: “History shows that on issues of economic competitiveness and national security — from railroads to aviation to the internet — America leads best with clear, nationwide rules, not a patchwork of state or local regulations. Fragmented state‑by‑state approaches create friction, duplication, and missed opportunities.”
- Anthropic endorsed SB 53: “We’ve long advocated for thoughtful AI regulation, and our support for this bill comes after careful consideration of the lessons learned from California's previous attempt at AI regulation (SB 1047). While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.”
Behind the news: SB 53 modifies parts of SB 1047, which Governor Newsom vetoed in 2024 after opposition from the tech community. That bill would have required third-party audits and made companies liable for the uses of their models. Recently, Newsom also vetoed SB 7, which would have required employers to notify employees and applicants when AI systems were used to make employment decisions such as hiring and firing.
Why it matters: California is the largest U.S. state by both population and economic output, and it is home to many of the world’s most prominent tech companies and startups, including Google, OpenAI, and Anthropic. These laws will affect users of California-based technology worldwide, along with companies that do business in the state.
We’re thinking: While these laws are better for users, innovators, and businesses than the vetoed SB 1047, some of them perpetuate a major mistake of that legislation by placing regulatory burdens on models rather than applications. A model’s potential applications are unknown until someone implements them, so it makes no sense to limit — or burden with disclosure requirements — the good it might do. Applications, on the other hand, bring verifiable benefits and harms, and society would do well to limit the harms.