Fine-tuning and Reinforcement Learning for LLMs: Intro to Post-Training
Learn to align and optimize LLMs for real-world applications through post-training. In this course, created in partnership with AMD, you’ll learn how to apply fine-tuning and reinforcement learning techniques to shape model behavior, improve reasoning, and make LLMs safer and more reliable.

Turn pretrained LLMs into production-ready models through post-training
- Align a pretrained model for real tasks: use supervised fine-tuning (SFT) and RLHF to improve instruction following, reasoning, and safety (a minimal SFT sketch follows this list).
- Use evaluation to guide improvements: build evals that reveal problems, choose data and rewards accordingly, and iterate.
- Get models ready for production, cost-aware: plan promotion and serving, monitor reliably, and account for compute and budget.
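As a preview of the techniques covered, here is a minimal SFT sketch using Hugging Face's TRL library. The base model, dataset, and hyperparameters are placeholder assumptions for illustration, not the course's lab code:

```python
# A minimal supervised fine-tuning (SFT) sketch with Hugging Face TRL.
# Model, data, and hyperparameters below are illustrative placeholders.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy instruction-following dataset; real SFT corpora contain thousands of examples.
train_dataset = Dataset.from_list([
    {"text": "### Instruction:\nName the capital of France.\n### Response:\nParis."},
    {"text": "### Instruction:\nSummarize: LLMs learn from text.\n### Response:\nLLMs are trained on large text corpora."},
])

trainer = SFTTrainer(
    model="facebook/opt-350m",  # small open model as a stand-in
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="sft-demo", max_steps=10),
)
trainer.train()
```

The same loop scales to real datasets: the base model's weights are updated directly on curated instruction–response pairs, which is the usual first stage before RLHF.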
Why Enroll
Large language models are powerful, but raw pretrained models aren’t ready for production applications. Post-training is what adapts an LLM to follow instructions, show reasoning, and behave more safely.
Many developers still assume “LLMs inherently hallucinate,” or “only experts can tune models.” Recent advances have changed what’s feasible. If you ship LLM features (e.g., developer copilots, customer support agents, internal assistants) or work on ML/AI platform teams, understanding post-training is becoming a must-have skill.
This course, consisting of 5 modules and taught by Sharon Zhou (VP of AI at AMD and instructor of popular DeepLearning.AI courses), will guide you through the key aspects of post-training:
- Post-training in the LLM lifecycle: Learn where post-training fits, key ideas in fine-tuning and RL, how models gain reasoning abilities, and how these methods power products.
- Core techniques: Understand fine-tuning, RLHF, reward modeling, and RL algorithms (PPO, GRPO), and use LoRA for efficient fine-tuning (see the LoRA sketch after this list).
- Evaluation and error analysis: Design evals, detect reward hacking, diagnose failures, and red team to test model robustness.
- Data for post-training: Prepare fine-tuning/LoRA datasets, combine fine-tuning + RLHF, create synthetic data, and balance data and rewards.
- From post-training to production: Study industry-leading production pipelines, set go/no-go rules, and run data feedback loops from your logs.
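The LoRA technique mentioned in the "Core techniques" module can be sketched in a few lines with Hugging Face's peft library; the base model and hyperparameters below are placeholder assumptions, not the course's lab code:

```python
# A minimal LoRA sketch with Hugging Face peft; model and settings are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# LoRA freezes the base weights and injects small trainable low-rank
# adapters into selected linear layers (here, the attention projections).
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # OPT attention projection layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because only the adapter matrices are trained, LoRA cuts fine-tuning memory and compute dramatically while leaving the base model untouched.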
In partnership with

We built this course with AMD to bring post-training practices used in leading labs to working engineers. You’ll get hands-on labs powered by AMD GPUs, while the methods you learn remain hardware-agnostic.
Who should join?
This course is designed for developers, ML engineers, software engineers, data scientists, and students who want to apply post-training techniques to production LLM systems. It’s also valuable for product managers and technical leaders who need to make informed decisions about post-training strategies and lead cross-functional teams working on LLM products.
To make the most of this course, we recommend strong familiarity with Python and a basic understanding of how LLMs work.
Instructor
Sharon Zhou
Sharon Zhou is the Vice President of AI at AMD. As a former Stanford faculty member, she led a research group in generative AI and published award-winning papers in the field. Sharon teaches some of the most popular courses on Coursera, including Finetuning LLMs, reaching millions of professionals. She received her PhD in AI from Stanford, advised by Andrew Ng, and has been featured in MIT Technology Review's 35 Under 35 list.
Join today and be at the forefront of the next generation of AI!