Keep Open Source Free! Regulators threaten to restrict open source development. That would be a huge mistake.

Published
Reading time
3 min read
Guest panel, including Andrew Ng, at the AI Governance Summit 2023 by the World Economic Forum

Dear friends,

This week, I’m speaking at the World Economic Forum (WEF) and Asia-Pacific Economic Cooperation (APEC) meetings in San Francisco, where leaders in business and government have convened to discuss AI and other topics. My message at both events is simple: Governments should not outlaw open source software or pass regulations that stifle open source development. 

Regulating AI is a hot topic right now in the United States, European Union, and elsewhere. Just this week, the EU’s AI Act was derailed when France and Germany objected — with good reason, in my view — to provisions that would burden companies that build foundation models.

As Yann LeCun and I have said, it’s important to distinguish between regulating technology (such a foundation model trained by a team of engineers) and applications (such as a website that uses a foundation model to offer a chat service, or a medical device that uses a foundation model to interacts with patients). We need good regulations to govern AI applications, but ill-advised proposals to regulate the technology would slow down AI development unnecessarily. While the EU’s AI Act thoughtfully addresses a number of AI applications — such as ones that sort job applications or predict crime — and assesses their risks and mandates mitigations, it imposes onerous reporting requirements on companies that develop foundation models, including organizations that aim to release open-source code. 

I wrote in an earlier letter that some companies that would rather not compete with open-source, as well as some nonprofits and individuals, are exaggerating AI risks. This creates cover for legislators to pass regulations in the name of safety that will hamper open source. At WEF and APEC, I’ve had conversations about additional forces at play. Let me describe what I’m seeing. 

In the U.S., a faction is worried about the nation’s perceived adversaries using open source technology for military or economic advantage. This faction is willing to slow down availability of open source to deny adversaries’ access. I, too, would hate to see open source used to wage unjust wars. But the price of slowing down AI progress is too high. AI is a general-purpose technology, and its beneficial uses — similar to other general purpose technologies like electricity — far outstrip the nefarious ones. Slowing it down would be a loss for humanity. 

When I speak with senior U.S. government officials, I sense that few think the possibility that AI will lead to human extinction is a realistic risk. This topic tends to lead to eye-rolls. But they genuinely worry about AI risks such as disinformation. In comparison, the EU is more concerned — unnecessarily, in my view — about the risk of extinction, while also worried about other, more concrete harms. 

Many nations and corporations are coming to realize they will be left behind if regulation stifles open source. After all, the U.S. has a significant concentration of generative AI talent and technology. If we raise the barriers to open source and slow down the dissemination of AI software, it will only become harder for other nations to catch up. Thus, while some might argue that the U.S. should slow down dissemination of AI (an argument that I disagree with), that certainly would not be in the interest of most nations. 

I believe deeply that the world is better off with more intelligence, whether human intelligence or artificial intelligence. Yes, intelligence can be used for nefarious purposes. But as society has developed over centuries and we have become smarter, humanity has become much better off.

A year ago, I wouldn’t have thought that so many of us would have to spend so much time trying to convince governments not to outlaw, or make impractical, open-sourcing of advanced AI technology. But I hope we can all keep on pushing forward on this mission, and keep on pushing to make sure this wonderful technology is accessible to all.

Keep learning!

Andrew

P.S. Many teams that build applications based on large language models (LLMs) worry about their safety and security, and such worries are a significant barrier to shipping products. For example, might the application leak sensitive data, or be tricked into generating inappropriate outputs? Our new short course shows how you can mitigate hallucinations, data leakage, and jailbreaks. Learn more in “Quality and Safety for LLM Applications,” taught by Bernease Herman and created in collaboration with WhyLabs (disclosure: an AI Fund portfolio company). Available now

Share

Subscribe to The Batch

Stay updated with weekly AI News and Insights delivered to your inbox