The United States has been a leader in science and technology for decades, and all nations have benefited from its innovations. But U.S. leadership in AI is not guaranteed. Should the country slip as a center of AI innovation and entrepreneurship, its contributions would be curtailed, and the technology would be less likely to embody democratic values. I hope that 2021 will see a firm commitment from the U.S. federal government to support innovation in AI.

The U.S. has excelled in science and technology largely because its ecosystem for innovation leverages contributions from academia, government, and industry. However, the emergence of AI has tipped the balance toward industry, largely because the three most important resources for AI research and development — computing power, data, and talent — are concentrated in a small number of companies. For instance, to train the large-scale language model GPT-3, OpenAI, in partnership with Microsoft, may have consumed compute resources worth $5 million to $10 million, according to one analysis. No U.S. university has ready access to this scale of computation.

Equally critical for advancing AI are large amounts of data. The richest troves of data today are locked behind the walls of large companies. Lack of adequate compute and data handicaps academic researchers and accelerates the brain drain of top AI talent from academia to private companies.

The year 2020 brought renewed federal support for universities and colleges. But more needs to be done. At the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which I co-direct with John Etchemendy, we have proposed a National Research Cloud. This initiative would devote $1 billion to $10 billion per year over 10 years to recharge the partnership between academia, government, and industry. It would give U.S. academic researchers the compute and data they need to stay on the cutting edge, which in turn would attract and retain new crops of faculty and students, potentially reversing the current exodus of researchers from academia to industry.

The fruits of this effort would be substantial. For instance, I’ve spent many years working on ambient AI sensors for healthcare delivery. These devices could help seniors who need chronic disease management by enabling caregivers to remotely track treatments and results, potentially saving hundreds of thousands of lives annually in the U.S. Such technology has no borders: The innovation created at Stanford could help aging societies worldwide. Renewed ferment in AI research also could bring innovations to mitigate climate change, develop life-saving drugs, optimize food and water supplies, and improve operations within the government itself.

We’re encouraged by the progress we’ve already seen toward the National Research Cloud. The U.S. Congress is considering bipartisan legislation that would establish a task force to study this goal. Meanwhile, agencies including the National Science Foundation and National Institutes of Health have issued calls for proposals for AI projects that such an initiative would support.

AI is a tool, and a profoundly powerful one. But every tool is a double-edged sword, and the ways it’s applied inevitably reflect the values of its designers, developers, and implementers. Many challenges remain to ensure that AI is safe and fair, respects values fundamental to democratic societies, protects individual privacy, and benefits a wide swath of humanity. Invigorating the healthy public ecosystem of AI research is a critical part of this effort.

Fei-Fei Li is the Sequoia Professor of Computer Science and Denning Co-Director of the Institute for Human-Centered Artificial Intelligence at Stanford. She is an elected member of the National Academy of Engineering and National Academy of Medicine.
