Short Course

Quality and Safety for LLM Applications

In Collaboration With WhyLabs
1 Hour


Bernease Herman

  • Monitor and enhance security measures over time to safeguard your LLM applications.

  • Detect and prevent critical security threats like hallucinations, jailbreaks, and data leakage.

  • Explore real-world scenarios to prepare for potential risks and vulnerabilities.

What you’ll learn in this course

Addressing and monitoring safety and quality concerns is crucial in any application, and building LLM applications poses special challenges.

In this course, you’ll explore new metrics and best practices to monitor your LLM systems and ensure safety and quality. You’ll learn how to: 

  • Identify hallucinations with methods like SelfCheckGPT.
  • Detect jailbreaks (prompts that attempt to manipulate LLM responses) using sentiment analysis and implicit toxicity detection models.
  • Identify data leakage using entity recognition and vector similarity analysis.
  • Build your own monitoring system to evaluate app safety and security over time.
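As a taste of the hallucination-detection idea above: SelfCheckGPT scores a response by checking how consistent it is with additional responses sampled for the same prompt. The sketch below is a simplified, hypothetical illustration of that consistency principle — it substitutes plain token overlap for the BERTScore or NLI-based scoring that SelfCheckGPT actually uses, and the example strings are made up.

```python
# Minimal sketch of a SelfCheckGPT-style consistency check.
# Assumption: instead of BERTScore or an NLI model (as in the real
# SelfCheckGPT), we use simple token overlap between the main response
# and extra sampled responses. Low overlap suggests a possible hallucination.

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def hallucination_score(response: str, samples: list[str]) -> float:
    """1 minus the mean overlap with resampled answers; higher = less consistent."""
    overlaps = [token_overlap(response, s) for s in samples]
    return 1.0 - sum(overlaps) / len(overlaps)

# Hypothetical usage: resample the same prompt a few times and score.
response = "The Eiffel Tower is in Paris and was completed in 1889"
samples = [
    "The Eiffel Tower is located in Paris and was completed in 1889",
    "The Eiffel Tower, in Paris, opened in 1889",
]
score = hallucination_score(response, samples)
```

In the course, this scoring step is paired with ongoing monitoring, so that a rise in inconsistency scores over time can flag quality problems in production.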

Upon completing the course, you’ll be able to identify common security concerns in LLM-based applications and customize your safety and security evaluation tools to the LLM you’re using in your application.

Who should join?

Anyone with basic Python knowledge interested in mitigating issues like hallucinations, prompt injections, and toxic outputs.


Bernease Herman

Data Scientist at WhyLabs

Course access is free for a limited time during the DeepLearning.AI learning platform beta!
