Agent Memory: Building Memory-Aware Agents
Understand why stateless agents fail at long-horizon tasks and how memory-first architecture gives agents persistence and the ability to learn across sessions.
Instructors: Richmond Alake, Nacho Martínez
Earn an accomplishment with PRO
- Intermediate
- 7 Video Lessons
- 4 Code Examples
- 1 Graded Assignment PRO
What you'll learn
Build a Memory Manager that handles different memory types, and a semantic tool retrieval system that scales agent tool use without bloating the context window.
Implement memory extraction, consolidation, and write-back pipelines that let your agent autonomously update and refine what it knows over time.
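To make the Memory Manager idea concrete, here is a minimal sketch in Python. All names (`MemoryManager`, the store keys, `read`/`write`) are illustrative assumptions, not the course's actual API, and the keyword lookup stands in for real vector retrieval:

```python
from dataclasses import dataclass, field

# Minimal Memory Manager sketch (hypothetical API, not the course's code).
# It keeps one store per memory type and routes all reads/writes
# through a single interface, as the course's Memory Manager does.
@dataclass
class MemoryManager:
    stores: dict = field(default_factory=lambda: {
        "semantic": [],    # stable facts about the user or world
        "episodic": [],    # records of past interactions
        "procedural": [],  # tool descriptions / how-to knowledge
    })

    def write(self, memory_type: str, item: str) -> None:
        self.stores[memory_type].append(item)

    def read(self, memory_type: str, query: str) -> list:
        # Naive substring retrieval; a real system would use
        # embeddings and a vector index instead.
        return [m for m in self.stores[memory_type] if query.lower() in m.lower()]

mm = MemoryManager()
mm.write("semantic", "User prefers Python examples")
print(mm.read("semantic", "python"))  # → ['User prefers Python examples']
```

The point of the single interface is that the agent loop never touches a store directly; it asks the manager, which decides where a memory lives and how it is retrieved.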
About this course
Introducing Agent Memory: Building Memory-Aware Agents, a short course built in partnership with Oracle and taught by Richmond Alake and Nacho Martínez.
Most agents work well within a single session but lose everything the moment it ends. Memory engineering treats long-term memory as first-class infrastructure: external to the model, persistent, and structured. In this course, you’ll learn how to build that infrastructure using Oracle AI Database, LangChain, and LLM-powered pipelines.
You’ll design a complete memory system that stores and retrieves different memory types, scales tool access using semantic search, and builds write-back loops that allow agents to update their own memory autonomously. By the end, you’ll have assembled a fully stateful Memory-Aware Agent that loads prior context at startup, assembles the relevant context, state, tools, and outputs for each step, and improves across sessions.
In detail, you’ll:
- Explore why stateless agents fail at long-horizon tasks and learn the memory-first architecture, including the agent stack and memory core
- Build persistent memory stores for different agent memory types and implement a Memory Manager that orchestrates how your agent reads, writes, and retrieves memory during execution
- Treat tools as procedural memory stored in a memory-backed store, and retrieve only the relevant ones at inference time using semantic search, solving the scaling problem of agents with hundreds of tools
- Build pipelines that extract structured facts from conversations, consolidate episodic memory into semantic memory, and create write-back loops that let your agent update and refine its own memory autonomously
- Assemble a fully stateful Memory-Aware Agent with a startup routine that loads prior context, a recursive reasoning loop, and persistence across sessions
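The semantic tool retrieval idea above can be sketched with a toy similarity function. The tool names and descriptions here are hypothetical, and bag-of-words cosine similarity stands in for the embedding model and vector index a real system would use:

```python
import math
from collections import Counter

# Hypothetical tool registry: tools stored as descriptions ("procedural
# memory"), retrieved by relevance instead of stuffed into the prompt.
TOOLS = {
    "get_weather": "look up the current weather forecast for a city",
    "send_email": "compose and send an email message to a recipient",
    "search_docs": "search internal documentation for relevant passages",
}

def _vec(text):
    # Bag-of-words stand-in for an embedding model.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_tools(query, k=1):
    # Rank all tools by similarity to the query; expose only the top k
    # to the agent, keeping the context window small.
    q = _vec(query)
    ranked = sorted(TOOLS, key=lambda name: _cosine(q, _vec(TOOLS[name])),
                    reverse=True)
    return ranked[:k]

print(retrieve_tools("what's the weather forecast in Paris?"))  # → ['get_weather']
```

With hundreds of tools, only the `k` retrieved descriptions enter the prompt, which is what keeps tool scaling from bloating the context window.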
Going beyond single-session interactions requires the right memory infrastructure, and this course gives you the hands-on patterns to build agents that don’t just respond, but remember and improve.
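The extraction and consolidation pipelines described above reduce to a simple loop: distill episodic records into durable semantic facts, then write them back with de-duplication. In this sketch a stub function stands in for the LLM-powered extractor the course uses; the data and helper names are illustrative:

```python
# Episodic records from past sessions (toy data).
episodic = [
    "User said: I'm allergic to peanuts.",
    "User said: book the 9am flight.",
    "User said: I'm allergic to shellfish.",
]

def extract_facts(record):
    # Stand-in for an LLM extraction prompt that pulls structured
    # facts out of a conversation turn.
    if "allergic to" in record:
        return [record.split("allergic to ")[1].rstrip(".") + " allergy"]
    return []

# Consolidation with write-back: durable facts move from episodic
# to semantic memory, skipping duplicates.
semantic_store = []
for record in episodic:
    for fact in extract_facts(record):
        if fact not in semantic_store:
            semantic_store.append(fact)

print(semantic_store)  # → ['peanuts allergy', 'shellfish allergy']
```

Run periodically, this loop is what lets an agent refine what it knows over time without a human curating its memory.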
Who should join?
Developers building AI agents who want to go beyond single-session interactions. Familiarity with Python and basic LLM concepts is recommended.
Course Outline
7 Lessons・4 Code Examples
- Introduction・Video・2 mins
- Why AI Agents Need Memory・Video・18 mins
- Constructing The Memory Manager・Video with Code Example・22 mins
- Scaling Agent Tool Use with Semantic Tool Memory・Video with Code Example・17 mins
- Memory Operations: Extraction, Consolidation, and Self-Updating Memory・Video with Code Example・23 mins
- Memory-Aware Agent・Video with Code Example・20 mins
- Conclusion・Video・1 min
- Extra resources・Reading・1 min
- Quiz・Graded・10 mins



