Short Course

Evaluating and Debugging Generative AI Models Using Weights and Biases

In Collaboration With

Weights and Biases

Intermediate


1 Hour


Carey Phelps

  • Learn to evaluate LLM-powered applications and generative image models using platform-independent tools

  • Instrument a training notebook and add tracking, versioning, and logging

  • Implement monitoring and tracing of LLMs over time in complex interactions

What you’ll learn in this course

Machine learning and AI projects require managing diverse data sources and vast data volumes, developing models and tuning parameters, and running numerous test and evaluation experiments. Overseeing and tracking these aspects of a project can quickly become overwhelming.

This course introduces you to Machine Learning Operations (MLOps) tools that manage this workload. You will learn to use the Weights & Biases platform, which makes it easy to track your experiments, version your data and models, and collaborate with your team.

This course will teach you to:

  • Instrument a Jupyter notebook
  • Manage hyperparameter configuration
  • Log run metrics
  • Collect artifacts for dataset and model versioning
  • Log experiment results
  • Trace prompts and responses to LLMs over time in complex interactions (see the sketches after this list)
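
As a preview of the basic instrumentation workflow, here is a minimal sketch using the wandb Python client. It assumes wandb is installed and you have already run "wandb login"; the project name, hyperparameters, and simulated metrics are placeholders, not the course's exact code.

    import math
    import random
    import wandb

    # Hyperparameters live in the run config, so they are saved alongside the run.
    config = {"learning_rate": 1e-3, "epochs": 5, "batch_size": 32}

    with wandb.init(project="demo-instrumentation", config=config) as run:
        for epoch in range(run.config["epochs"]):
            # Placeholder metrics; in a real notebook these come from your training loop.
            train_loss = math.exp(-epoch) + 0.05 * random.random()
            val_accuracy = 1.0 - train_loss / 2
            # Each call to log() appends a step to the run's metric history.
            run.log({"epoch": epoch, "train/loss": train_loss, "val/accuracy": val_accuracy})

        # Version a model checkpoint (here, a placeholder file) as an artifact.
        with open("model.ckpt", "w") as f:
            f.write("placeholder checkpoint")
        artifact = wandb.Artifact("demo-model", type="model")
        artifact.add_file("model.ckpt")
        run.log_artifact(artifact)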
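
For tracing prompts and responses to an LLM over time, recent versions of the wandb client expose a trace-tree API. The sketch below is illustrative rather than the course's exact code: the span fields, names, and canned response are assumptions, and in a real application the response would come from an actual LLM call.

    import time
    import wandb
    from wandb.sdk.data_types.trace_tree import Trace

    run = wandb.init(project="demo-llm-tracing")

    start_ms = round(time.time() * 1000)
    query = "What does run.log() record?"
    response_text = "It records metrics and media to the current run."  # stand-in for an LLM response
    end_ms = round(time.time() * 1000)

    # One span per prompt/response exchange; nested spans can model chains and agents.
    span = Trace(
        name="llm_call",
        kind="llm",
        status_code="success",
        metadata={"model_name": "example-model"},
        start_time_ms=start_ms,
        end_time_ms=end_ms,
        inputs={"query": query},
        outputs={"response": response_text},
    )
    span.log(name="llm_trace")

    run.finish()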

When you complete this course, you will have a systematic workflow at your disposal to boost your productivity and accelerate your journey toward breakthrough results.

Who should join?

Anyone who is familiar with Python and PyTorch (or a similar framework) and has an interest in managing, versioning, and debugging their machine learning workflows.


Instructor

Carey Phelps
Founding Product Manager, Weights & Biases

Course access is free for a limited time during the DeepLearning.AI learning platform beta!
