Agentic Design Patterns Part 3, Tool Use: How large language models can act as agents by taking advantage of external tools for search, code execution, productivity, ad infinitum
Letters

Tool use, in which an LLM is given functions it can request to call for gathering information, taking action, or manipulating data, is a key design pattern of AI agentic workflows.
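The pattern described above can be sketched in a few lines: the application registers functions, the model is prompted to answer either directly or by emitting a structured tool request, and a small harness executes the request and returns the result. This is a minimal illustrative sketch, not any particular vendor's API; the tool names (`search_web`, `multiply`) and the JSON request format are assumptions made up for the example.

```python
import json

# Hypothetical tools the model may request; names and signatures are
# illustrative only.
def search_web(query: str) -> str:
    return f"Top result for '{query}'"

def multiply(a: float, b: float) -> float:
    return a * b

TOOLS = {"search_web": search_web, "multiply": multiply}

def run_tool_call(model_output: str) -> str:
    """Execute a JSON tool request emitted by the LLM, if there is one.

    The model is prompted to answer either in plain text or with a JSON
    object like {"tool": "multiply", "args": {"a": 3, "b": 4}}.
    """
    try:
        request = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain-text answer; no tool was requested
    fn = TOOLS[request["tool"]]
    result = fn(**request["args"])
    # In a full agent loop, this result would be appended to the
    # conversation so the model can compose its final answer.
    return str(result)
```

In practice the loop repeats: the tool's output is fed back to the model, which may request another tool or produce its final answer.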
Agentic Design Patterns Part 2, Reflection: Large language models can become more effective agents by reflecting on their own behavior.

Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress this year: Reflection, Tool use, Planning, and Multi-agent collaboration.
Agentic Design Patterns Part 1: Four AI agent strategies that improve GPT-4 and GPT-3.5 performance

I think AI agent workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.
Life in Low Data Gravity: With generative AI, data is bound less tightly to the cloud provider where it’s stored. This has big implications for developers, CIOs, and cloud platforms.

I’ve noticed a trend in how generative AI applications are built that might affect both big companies and developers: The gravity of data is decreasing.
The Dawning Age of Agents: LLM-based agents that act autonomously are making rapid progress. Here's what we have to look forward to.

Progress on LLM-based agents that can autonomously plan out and execute sequences of actions has been rapid, and I continue to see month-over-month improvements.

The Python Package Problem: Python packages can give your software superpowers, but managing them is a barrier to AI development.

I think the complexity of Python package management holds down AI application development more than is widely appreciated. AI faces multiple bottlenecks — we need more GPUs, better algorithms, and cleaner data in large quantities.
Three Themes for AI Entrepreneurs: Starting an AI company? Three keys to AI entrepreneurship emerged at AI Fund’s annual co-founder and CEO summit.

Earlier this month, my team at AI Fund held its annual co-founder and CEO summit, where many of our collaborators gathered in California for two days to discuss how to build AI companies.
How to Think About the Privacy of Cloud-Based AI: How private is your data on cloud-based AI platforms? Here's a framework for evaluating risks.

The rise of cloud-hosted AI software has brought much discussion about the privacy implications of using it. But I find that users, including both consumers and developers building on such software...
What If Large Language Models Become a Commodity?: Large language models are proliferating. What are the prospects for Amazon, Google, Meta, Microsoft, OpenAI, and LLM startups?

On the LMSYS Chatbot Arena Leaderboard, which pits chatbots against each other anonymously and prompts users to judge which one generated a better answer...
The World Needs More Intelligence: Human intelligence is expensive, artificial intelligence is cheap. To solve big problems like climate change, it makes sense to double down on AI.

Last year, a number of large businesses and individuals went to the media and governments and pushed the message that AI is scary, impossible to control, and might even lead to human extinction. Unfortunately, they succeeded: Now many people think AI is scary.
(Image: panelists on "The Expanding Universe of Generative Models" at the World Economic Forum, January 16, 2024)

AI On the Agenda at the World Economic Forum: The World Economic Forum at Davos felt like an AI conference, with big takeaways for business and regulation.

Last week, I attended the World Economic Forum, an annual meeting of leaders in government, business, and culture in Davos, Switzerland.
The Easiest Way to Achieve Artificial General Intelligence: Coming up with scientific definitions of ambiguous terms like consciousness and sentience can spur progress but mislead the public.

As I wrote in an earlier letter, whether AI is sentient or conscious is a philosophical question rather than a scientific one, since there is no widely agreed-upon definition and test for these terms.
Outstanding Research Without Massive Compute: Researchers at Stanford and Chan Zuckerberg Biohub Network dramatically simplified a key algorithm for training large language models.

It is only rarely that, after reading a research paper, I feel like giving the authors a standing ovation. But I felt that way after finishing Direct Preference Optimization (DPO) by...
The New York Times versus OpenAI and Microsoft: The New York Times sued OpenAI and Microsoft for copyright infringement, but the real issues and harms are not clear.

Last week, the New York Times (NYT) filed a lawsuit against OpenAI and Microsoft, alleging massive copyright infringements. The suit claims, among other things, that OpenAI and Microsoft used millions of copyrighted NYT articles to train their models...

What Will Change — And What Will Stay the Same: Change is hard to predict, so invest your time and energy in things that aren't likely to change.

AI is progressing faster than ever. This is thrilling, yet rapid change can be disorienting. In such times, it’s useful to follow Jeff Bezos’ advice to think about not only what is changing but also what will stay the same.
