My hope is that open AI continues to flourish and ultimately wins.
An open ecosystem has always been the engine of real innovation. It lets communities build on decades of progress and tap into collective talent, not just the resources of a single company. Back in the 1990s and early 2000s, open software like Linux, Apache, and Eclipse challenged the dominant proprietary systems. That fight shaped the internet as we know it. Now, the same principles must guide AI’s evolution.
The parallels are eerie. Some players are trying to own and control AI by doing all the same things Microsoft did back when it was dumping free copies of Windows into developing markets to help keep Linux from gaining a foothold.
Like Microsoft’s free floppy disks of yore, OpenAI and Meta have both dropped so-called open models that were not actually open. They disclosed nothing about their training data or training recipes, and their licenses capped how much revenue you could earn. All of this is designed to keep anyone else from gaining traction and to make their own product the one that wins.
But truly open AI is too important to give up on. AI should not be owned by anybody or represent the values of only one company, and everyone should be able to help shape its future, whether they are people at other companies, academics, or ordinary users.
Open development has important benefits. First, it reduces the odds of vendor lock-in. Nobody wants to be stuck on a proprietary model, locked behind someone else’s API, that has become critical infrastructure. Second, it enables greater customization. Not only does an open license give you the legal right to customize the model (even to build a closed one from it), but knowing how the model was made makes it far easier to modify.
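To make the customization point concrete, here is a minimal sketch of what adapting an open-weights model can look like in practice, using the Hugging Face transformers and peft libraries. The model ID (one of IBM’s Apache-2.0-licensed Granite models) and the LoRA settings are illustrative assumptions, not a prescription.

```python
# A minimal sketch of customizing an open-weights model with LoRA adapters.
# The model ID and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_ID = "ibm-granite/granite-3.0-2b-instruct"  # an openly licensed model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Attach small trainable adapters instead of retraining all the weights;
# the base model stays frozen, so the customization is cheap and reversible.
lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically a small fraction of the base model
```

Because the adapters are small and the base weights stay frozen, this kind of customization is cheap to run and easy to audit, and it is only possible at all when the weights themselves are openly available.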
There is a thriving open ecosystem in China right now. Those developers have done amazing work. But there’s also a weird geopolitical overlay. Countries don’t trust other countries. China doesn’t trust the U.S., the U.S. doesn’t trust China, and Europe doesn’t trust either. And it can be easy to poison a model by training on compromised data. Genuinely open development solves that, because everyone knows what the training sets were and how they were obtained.
At IBM, we’ve been walking the talk. We publish the details of our models: how they were trained and, especially, what data they were trained on. Stanford’s Foundation Model Transparency Index put us at the very top, with a score of 95 percent, 23 points ahead of second place. And we’re not the only ones: the Allen Institute for AI has done really impressive work, and so have many of the open-model developers in China.
We know IBM has a reputation for being boring. But boring can actually be good. Boring is stable; it’s a foundation you can build on. IBM is also a little weird, and that stable foundation lets you do weird things without everything falling apart. Let’s make AI more open, more weird, and maybe a little more boring in 2026.
David Cox is the VP for AI models at IBM Research and the IBM Director of the MIT-IBM Watson AI Lab. Previously he taught natural sciences, applied sciences, and engineering at Harvard.