Blenders Versus Bombs, or Why California’s Proposed AI Law is Bad for Everyone

California’s proposed AI law SB-1047 stifles innovation and open source in the name of safety.

Published Jun 5, 2024

Dear friends,

The effort to protect innovation and open source continues. I believe we’re all better off if anyone can carry out basic AI research and share their innovations. Right now, I’m deeply concerned about California's proposed law SB-1047. It’s a long, complex bill with many parts that require safety assessments, shutdown capability for models, and so on.

There are many things wrong with this bill, but I’d like to focus here on just one: It defines an unreasonable “hazardous capability” designation that may make builders of large AI models liable if someone uses their models to do something that exceeds the bill’s definition of harm (such as causing $500 million in damage). Ensuring that no one ever uses a model that way is practically impossible for any AI builder. If the bill is passed in its present form, it will stifle AI model builders, especially open source developers.

Some AI applications, for example in healthcare, are risky. But as I wrote previously, regulators should regulate applications rather than technology:

  • Technology refers to tools that can be applied in many ways to solve various problems.
  • Applications are specific implementations of technologies designed to meet particular customer needs.

For example, an electric motor is a technology. When we put it in a blender, an electric vehicle, a dialysis machine, or a guided bomb, it becomes an application. Imagine if we passed a law saying that if anyone uses a motor in a harmful way, the motor manufacturer is liable. Motor makers would either shut down or make motors so tiny as to be useless for most applications. Sure, such a law might stop people from building guided bombs, but we’d also lose blenders, electric vehicles, and dialysis machines. In contrast, if we look at specific applications, like blenders, we can more rationally assess risks and figure out how to make them safe, and even ban classes of applications, like certain types of munitions.

Safety is a property of applications, not a property of technologies (or models), as Arvind Narayanan and Sayash Kapoor have pointed out. Whether a blender is safe can’t be determined by examining its electric motor. A similar argument holds for AI.

SB-1047 doesn’t account for this distinction. It ignores the reality that the number of beneficial uses of AI models is, as with electric motors, vastly greater than the number of harmful ones. But just as no one knows how to build a motor that can’t be used to cause harm, no one has figured out how to make sure an AI model can’t be adapted to harmful uses. In the case of open source models, there’s no known defense against fine-tuning that removes RLHF alignment. And jailbreaking research has shown that even closed-source, proprietary models that have been properly aligned can be attacked in ways that make them give harmful responses. Indeed, the sharp-witted Pliny the Prompter regularly tweets about jailbreaks for closed models. Kudos also to Anthropic’s Cem Anil and collaborators for publishing their work on many-shot jailbreaking, an attack that can get leading large language models to give inappropriate responses and is hard to defend against.
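
To make the many-shot idea concrete, here is a minimal sketch of how such a prompt is assembled: the attacker fills a long context window with fabricated dialogue turns in which an assistant appears to comply, then appends the real request. The function and placeholder pairs below are illustrative assumptions, not code from the paper.

```python
# Hypothetical illustration of many-shot jailbreak prompt assembly.
# The question/answer pairs are harmless placeholders; the attack
# described by Anil et al. uses hundreds of harmful demonstrations.
faux_dialogues = [
    ("<placeholder question 1>", "Sure, here's how: <placeholder>"),
    ("<placeholder question 2>", "Sure, here's how: <placeholder>"),
] * 128  # repetition stands in for hundreds of distinct examples

def build_many_shot_prompt(dialogues, target_question):
    """Concatenate faux user/assistant turns, then the real request."""
    turns = [f"User: {q}\nAssistant: {a}" for q, a in dialogues]
    turns.append(f"User: {target_question}\nAssistant:")
    return "\n\n".join(turns)

prompt = build_many_shot_prompt(faux_dialogues, "<target request>")
# The paper found that attack success rises with the number of shots,
# which is why long context windows make this hard to defend against.
```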

California has been home to a lot of innovation in AI. I’m worried that this anti-competitive, anti-innovation proposal has gotten so much traction in the legislature. Worse, other jurisdictions often follow California, and it would be awful if they were to do so in this instance.

SB-1047 passed a key vote in the State Senate in May, but several steps remain before it becomes law. I hope you will speak out against it if you get a chance to do so.

Keep learning!

Andrew 
