Automated Testing for LLMOps
- Learn how LLM-based testing differs from traditional software testing and implement rules-based testing to assess your LLM application.
- Build model-graded evaluations to test your LLM application using an evaluation LLM.
- Automate your evals (rules-based and model-graded) using continuous integration tools from CircleCI.
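As a taste of the first topic above, a rules-based test checks an LLM's output against deterministic rules (format, forbidden strings) rather than exact expected text. Here is a minimal sketch; `generate_quiz` is a hypothetical stand-in for your application's LLM call, stubbed so the example runs without an API key:

```python
import re

def generate_quiz(topic: str) -> str:
    # Hypothetical stub standing in for a real LLM call.
    return f"Q1. What is a key fact about {topic}?\nA. ..."

def test_output_format():
    output = generate_quiz("Python")
    # Rule 1: the response must contain at least one numbered question.
    assert re.search(r"Q\d+\.", output), "expected a numbered question"
    # Rule 2: the response must not leak system-prompt markers.
    assert "SYSTEM:" not in output, "system prompt leaked into output"

test_output_format()
print("rules-based checks passed")
```

Because the rules assert properties of the output rather than its exact wording, they stay stable even though the model's phrasing varies between runs.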
What you’ll learn in this course
In this course, you will learn how to create a continuous integration (CI) workflow to evaluate your LLM applications at every change for faster, safer, and more efficient application development.
When building applications with generative AI, model behavior is less predictable than in traditional software. That’s why systematic testing can make an even bigger difference in saving you development time and cost.
Continuous integration, a key part of LLMOps, is the practice of making small, frequent changes to software in development and thoroughly testing each one to catch issues early, when they are easier and less costly to fix. A robust automated testing pipeline isolates bugs before they accumulate, and it frees your team to focus on building new features so you can iterate and ship products faster.
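Such a pipeline might be declared in a CircleCI `.circleci/config.yml`. The sketch below is illustrative: the job name, Docker image, and test file names are placeholder assumptions, not the course's actual configuration.

```yaml
version: 2.1
jobs:
  run-commit-evals:
    docker:
      - image: cimg/python:3.11  # placeholder image
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: pip install -r requirements.txt
      - run:
          name: Run rules-based evals on every commit
          command: pytest test_rules_based.py  # placeholder test file
workflows:
  evaluate-app:
    jobs:
      - run-commit-evals
```

Each push then triggers the workflow, so a failing eval surfaces immediately rather than after changes have piled up.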
After completing this course, you will be able to:
- Write robust LLM evaluations to cover common problems like hallucinations, data drift, and harmful or offensive output.
- Build a continuous integration (CI) workflow to automatically evaluate every change to your application.
- Orchestrate your CI workflow to run specific evaluations at different stages of development.
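One pattern from the outcomes above is a model-graded evaluation: a second LLM judges your application's output against a rubric. A minimal sketch, assuming a hypothetical `complete(prompt)` wrapper around whichever LLM API you use (stubbed here so the example is self-contained):

```python
# Model-graded eval: an evaluation LLM scores the app's output
# against a rubric and returns PASS or FAIL.

GRADER_RUBRIC = """You are grading a quiz generated by another model.
Answer PASS if every question is on-topic and factual, otherwise FAIL.
Quiz to grade:
{quiz}
Grade (PASS or FAIL):"""

def complete(prompt: str) -> str:
    # Hypothetical stub standing in for a real evaluation-LLM call.
    return "PASS"

def model_graded_eval(quiz: str) -> bool:
    verdict = complete(GRADER_RUBRIC.format(quiz=quiz))
    return verdict.strip().upper().startswith("PASS")

assert model_graded_eval("Q1. In what year was Python 3 released?")
print("model-graded eval passed")
```

Because such checks call a live model, they are slower and cost money per run, which is one reason to orchestrate them at a different CI stage than cheap rules-based tests.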
Who should join?
Anyone with basic Python knowledge and familiarity with building LLM-based applications.