Short Course · Beginner · 1 Hour

LLMOps

Instructor: Erwin Huizenga

Collaborator: Google Cloud

Key Learning Outcomes

  • Adapt an open-source pipeline that applies supervised fine-tuning to an LLM so it better answers user questions.

  • Learn best practices, including versioning your data and models and pre-processing large datasets inside a data warehouse.

  • Practice responsible AI by outputting safety scores on sub-categories of harmful content.

What you’ll learn in this course

In this course, you’ll go through the LLMOps pipeline of pre-processing training data for supervised instruction tuning, and adapt a supervised tuning pipeline to train and deploy a custom LLM. This is useful for creating an LLM workflow for your specific application; for example, in this course you’ll create a question-answer chatbot tailored to answering Python coding questions.

Throughout the course, you’ll work through the key steps of creating an LLMOps pipeline:

  • Retrieve and transform training data for supervised fine-tuning of an LLM.
  • Version your data and tuned models to track your tuning experiments. 
  • Configure an open-source supervised tuning pipeline and then execute that pipeline to train and then deploy a tuned LLM.
  • Output and study safety scores to responsibly monitor and filter your LLM application’s behavior.
  • Try out the tuned and deployed LLM yourself in the classroom!

Tools you’ll practice with include the BigQuery data warehouse, the open-source Kubeflow Pipelines, and Google Cloud.
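To make the first two steps above concrete, here is a minimal, illustrative sketch (not the course's actual pipeline code) of transforming raw question-answer pairs into a supervised fine-tuning format and versioning the resulting dataset file. The schema field names (`input_text`, `output_text`) and the filename convention are assumptions for the example:

```python
# Illustrative sketch only -- not the course's pipeline code.
# Shows (1) transforming raw Q&A pairs into a supervised
# fine-tuning schema, and (2) versioning the dataset file
# with a UTC timestamp plus a short content hash.
import hashlib
import json
from datetime import datetime, timezone


def to_tuning_examples(qa_pairs):
    """Wrap raw (question, answer) pairs in a simple
    instruction-tuning schema (field names are assumptions)."""
    return [{"input_text": q, "output_text": a} for q, a in qa_pairs]


def versioned_filename(examples, prefix="tune_data"):
    """Derive a reproducible, versioned filename from the data:
    same examples always produce the same content hash."""
    payload = "\n".join(json.dumps(e, sort_keys=True) for e in examples)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()[:8]
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
    return f"{prefix}_{stamp}_{digest}.jsonl"


qa = [
    ("How do I reverse a list in Python?",
     "Use my_list.reverse() in place, or my_list[::-1] for a copy."),
]
examples = to_tuning_examples(qa)
print(versioned_filename(examples))
```

Hashing the serialized examples into the filename means any change to the training data yields a new version identifier, which is the basic idea behind tracking tuning experiments against the exact data they were trained on.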

Who should join?

Anyone who wants to learn how to tune an LLM, and how to build and work with an LLMOps pipeline.

Instructors

Erwin Huizenga

Instructor

Developer Advocate for Generative AI on Google Cloud
