Coding Agents Are Evolving From Novelties to Widely Useful Tools: Three research papers offer outstanding ways to use large language models to build coding agents that perform software development tasks automatically.

On Father’s Day last weekend, I sat with my daughter to help her practice solving arithmetic problems.
Welcoming Diverse Approaches Keeps Machine Learning Strong: What technology counts as an “agent”? Instead of arguing, let's consider a spectrum along which various technologies are “agentic.”

One reason for machine learning’s success is that our field welcomes a wide range of work.
Blenders Versus Bombs, or Why California's Proposed AI Law is Bad for Everyone: California's proposed AI law SB-1047 would stifle innovation and open source in the name of safety.

The effort to protect innovation and open source continues. I believe we’re all better off if anyone can carry out basic AI research and share their innovations.
We Need Better Evals for LLM Applications: It’s hard to evaluate AI applications built on large language models. Better evals would accelerate progress.

A barrier to faster progress in generative AI is evaluations (evals), particularly of custom AI applications that generate free-form text.
Project Idea — A Car for Dinosaurs: AI projects don’t need to have a meaningful deliverable. Lower the bar and do something creative.

A good way to get started in AI is to start with coursework, which gives a systematic way to gain knowledge, and then to work on projects.
Beware Bad Arguments Against Open Source: Big companies are lobbying governments to limit open source AI. Their shifting arguments betray their self-serving motivations.

Building Models That Learn From Themselves: AI developers are hungry for more high-quality training data. The combination of agentic workflows and inexpensive token generation could supply it.

Inexpensive token generation and agentic workflows for large language models (LLMs) open up intriguing new possibilities for training LLMs on synthetic data. Pretraining an LLM
Why We Need More Compute for Inference: Today, large language models produce output primarily for humans. But agentic workflows produce lots of output for the models themselves — and that will require much more compute for AI inference.

Much has been said about many companies’ desire for more compute (as well as data) to train larger foundation models.
Agentic Design Patterns Part 5, Multi-Agent Collaboration: Prompting an LLM to play different roles for different parts of a complex task summons a team of AI agents that can do the job more effectively.

Proposed ChatDev architecture, illustrated.

Multi-agent collaboration is the last of the four key AI agentic design patterns that I’ve described in recent letters.
Agentic Design Patterns Part 4, Planning: Large language models can drive powerful agents to execute complex tasks if you ask them to plan the steps before they act.

Planning is a key agentic AI design pattern in which we use a large language model (LLM) to autonomously decide on what sequence of steps to execute to accomplish a larger task.
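The planning pattern described above can be sketched in a few lines: ask the model to decompose a task into steps, then execute the steps in order. This is a minimal illustrative sketch, not a production agent; `plan_with_llm` and `execute_step` are hypothetical stand-ins for real LLM and tool calls.

```python
# Sketch of the planning pattern: decompose a task, then execute each step.
# plan_with_llm() and execute_step() are hypothetical stand-ins; a real
# implementation would prompt an LLM and invoke tools or sub-agents.

def plan_with_llm(task: str) -> list[str]:
    # Stand-in: a real version would ask an LLM to break the task into steps.
    return ["search the web for sources",
            "summarize each source",
            "draft the report"]

def execute_step(step: str) -> str:
    # Stand-in for carrying out one step (a tool call or further LLM call).
    return f"done: {step}"

def run_planner(task: str) -> list[str]:
    plan = plan_with_llm(task)              # 1. let the model decide the steps
    return [execute_step(s) for s in plan]  # 2. execute them in sequence

print(run_planner("write a research report"))
```

In a real agent, the loop would also feed each step's result back to the model so it can revise the remaining plan.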
Agentic Design Patterns Part 3, Tool Use: How large language models can act as agents by taking advantage of external tools for search, code execution, productivity, ad infinitum

Tool use, in which an LLM is given functions it can request to call for gathering information, taking action, or manipulating data, is a key design pattern of AI agentic workflows.
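The control flow behind the tool-use pattern just described can be sketched simply: the model returns a structured request naming a function and its arguments, and the harness executes it. In this illustrative sketch, `llm_complete` is a hypothetical stand-in that simulates a model deciding to call a tool; a real system would call an LLM API.

```python
# Sketch of the tool-use pattern: the LLM requests a function call, the
# harness runs it. llm_complete() is a hypothetical stub simulating the
# model's structured tool request.
import json

def get_weather(city: str) -> str:
    """A tool the model may request. Stubbed for illustration."""
    return f"Sunny, 22C in {city}"

TOOLS = {"get_weather": get_weather}

def llm_complete(prompt: str) -> str:
    # Stand-in for a model call: pretend the model chose to use a tool.
    return json.dumps({"tool": "get_weather", "args": {"city": "Paris"}})

def run_agent(user_message: str) -> str:
    reply = llm_complete(user_message)
    request = json.loads(reply)
    if request.get("tool") in TOOLS:
        # Execute the requested function and hand back its result.
        return TOOLS[request["tool"]](**request["args"])
    return reply

print(run_agent("What's the weather in Paris?"))
```

A full agent would send the tool's result back to the model for another turn rather than returning it directly.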
Agentic Design Patterns Part 2, Reflection: Large language models can become more effective agents by reflecting on their own behavior.

Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress this year: Reflection, Tool use, Planning and Multi-agent collaboration.
Agentic Design Patterns Part 1: Four AI agent strategies that improve GPT-4 and GPT-3.5 performance

I think AI agent workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.
Life in Low Data Gravity: With generative AI, data is bound less tightly to the cloud provider where it’s stored. This has big implications for developers, CIOs, and cloud platforms.

I’ve noticed a trend in how generative AI applications are built that might affect both big companies and developers: The gravity of data is decreasing.
The Dawning Age of Agents: LLM-based agents that act autonomously are making rapid progress. Here's what we have to look forward to.

Progress on LLM-based agents that can autonomously plan out and execute sequences of actions has been rapid, and I continue to see month-over-month improvements.
