AI On the Agenda at the World Economic Forum

The World Economic Forum at Davos felt like an AI conference, with big takeaways for business and regulation.

World Economic Forum, January 16, 2024: "The Expanding Universe of Generative Models" panelists

Dear friends,

Last week, I attended the World Economic Forum, an annual meeting of leaders in government, business, and culture in Davos, Switzerland. I spoke in a few sessions, including a lively discussion with Aidan Gomez, Daphne Koller, Yann LeCun, Kai-Fu Lee, and moderator Nicholas Thompson about the present state and possible future development of generative AI. You can watch it here.

The conference's themes included AI, climate change, economic growth, and global security. But to me, the whole event felt like an AI conference! (This is not just my bias. When I asked a few non-AI attendees whether they felt similarly, about three-quarters of them agreed with me.) I had many conversations along two major themes:

Business implementation of AI. Many businesses, and to a lesser extent governments, are looking at using AI and trying to develop best practices for doing so. In some of my presentations, I shared my top two tips:

  • Almost all knowledge workers can become more productive right away by using a large language model (LLM) like ChatGPT or Bard as a brainstorming partner, copyeditor, tool to answer basic questions, and so on. But many people still need to be trained to use these models safely and effectively. I also encouraged CEOs to learn to use these tools themselves, so they can lead from the top.
  • In addition to using an LLM’s web interface, API calls offer many new opportunities to build new AI applications (see the sketch after this list). I shared a task-based analysis framework and described how an analysis like this can lead to buy-versus-build decisions to pursue identified opportunities, with build being either an in-house project or a spin-out.
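To make the API point concrete, here is a minimal sketch of wrapping a single LLM API call as a reusable application function. It uses the OpenAI Python SDK; the task, model choice, and function name are illustrative assumptions on my part, and any comparable LLM API would work similarly.

```python
# Minimal sketch: turning one LLM API call into a reusable application function.
# Assumptions: the OpenAI Python SDK (openai>=1.0) is installed and OPENAI_API_KEY
# is set in the environment; the task and model are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str) -> str:
    """Hypothetical task: condense a customer-support ticket into one sentence."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": "Summarize the user's ticket in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

# Example use inside a larger application:
# summary = summarize_ticket("My order arrived damaged and I'd like a replacement.")
```

A wrapper like this is the kind of building block a task-based analysis surfaces: once a task is identified, the buy-versus-build question becomes who writes and maintains these few lines.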

AI regulation. With many governments represented at Davos, many discussions about AI regulation also took place. I was delighted that the conversation has become much more sensible than it was six months ago, when the narrative was driven by misleading analogies between AI and nuclear weapons and lobbyists had significant momentum pushing proposals that threatened open-source software. However, the fight against stifling regulations isn't over yet! We must continue to protect open-source software and innovation. In detail:

  • I am happy to report that, in many hours of conversation about AI and regulations, I heard only one person bring up AI leading to human extinction, and the conversation quickly turned to other topics. I'm cautiously optimistic that this particular fear — of an outcome that is overwhelmingly unlikely — is losing traction and fading away.
  • However, big companies, especially ones that would rather not have to compete with open source, are still pushing for stifling, anti-competitive AI regulations in the name of safety. For example, some are still using the argument, “don't we want to know if your open-source LLMs are safe?” to promote potentially onerous testing, reporting, and perhaps even licensing requirements on open-source software. While we would, of course, prefer safe models (just as we would prefer secure software and truthful speech), overly burdensome “protections” could still destroy much innovation without materially reducing harm.  
  • Fortunately, many regulators are now aware of the need to protect basic research and development. The battle is still on to make sure we can continue to freely distribute the fruits of R&D, including open-sourcing software. But I'm encouraged by the progress we've made in the last few months. 

I also attended some climate sessions to hear the speakers. Unfortunately, I came away from them feeling more pessimistic about what governments and corporations are doing on decarbonization and climate change. I will say more about this in future letters, but:

  • Although some experts still talk about 1.5 degrees of warming as an optimistic scenario and 2 degrees as a pessimistic scenario, my own view after reviewing the science is that 2 degrees is a very optimistic scenario, and 4 degrees is a more realistic pessimistic scenario.
  • Unfortunately, this overoptimism is causing us to underinvest in resilience and adaptation (to help us better weather the coming changes) and to put too little effort into exploring potentially game-changing technologies like geo-engineering.

Davos is a cold city where temperatures are often below freezing. In one memorable moment at the conference, I had lost my gloves and my hands were freezing. A stranger whom I had met only minutes earlier kindly gave me an extra pair. This generous act reminded me that, even as we think about the global impacts of AI and climate change, simple human kindness touches people's hearts, and the ultimate purpose of our work is to help people.

Keep learning!

Andrew

P.S. Check out our new short course on “Automated Testing for LLMOps,” taught by CircleCI CTO Rob Zuber! This course teaches how you can adapt key ideas from continuous integration (CI), a pillar of efficient software engineering, to building applications based on large language models (LLMs). Tweaking an LLM-based app can have unexpected side effects, and having automated testing as part of your approach to LLMOps (LLM Operations) helps avoid these problems. CI is especially important for AI applications given the iterative nature of AI development, which often involves many incremental changes. Please sign up here.
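To give a flavor of what automated testing for an LLM-based app can look like, here is a minimal sketch of a rule-based regression test that could run on every commit in a CI pipeline such as CircleCI. It is only an illustration, not course material; the application function is a hypothetical stand-in for a real LLM-backed call.

```python
# Minimal sketch: a pytest-style regression test for an LLM-based app, run in CI
# whenever a prompt, model, or parameter changes.
# `answer_question` is a hypothetical stand-in for your real LLM-backed function.

def answer_question(question: str) -> str:
    # Placeholder: in a real app this would call an LLM with your prompt template.
    return "Refunds are accepted within 30 days of purchase."

def test_refund_answer_states_policy_window():
    answer = answer_question("What is the refund window?")
    # A simple rule-based check; if a prompt tweak drops the policy detail,
    # the CI build fails before the change reaches users.
    assert "30 days" in answer
```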
