Short Course

Finetuning Large Language Models

1 Hour


Sharon Zhou

Free for a limited time

  • Learn the fundamentals of finetuning a large language model (LLM).
  • Understand how finetuning differs from prompt engineering, and when to use both.
  • Get hands-on experience with real data sets, and learn how to apply these techniques to your own projects.

What you’ll learn in this course

Join our new short course, Finetuning Large Language Models! Learn from Sharon Zhou, Co-Founder and CEO of Lamini and instructor of the GANs Specialization and How Diffusion Models Work.

When you complete this course, you will be able to:

  • Understand when to apply finetuning on LLMs
  • Prepare your data for finetuning
  • Train and evaluate an LLM on your data

With finetuning, you train the model on your own data and update the weights of the LLM’s neural network, actually changing the model, unlike methods such as prompt engineering and Retrieval Augmented Generation (RAG). Finetuning lets the model learn style and form, and can update it with new knowledge to improve results.
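To make the distinction concrete, here is a minimal sketch (not code from the course) of what “updating the weights” means. It uses a toy PyTorch linear model as a stand-in for a real LLM; the model, data, and hyperparameters are illustrative assumptions only.

```python
# A minimal sketch: finetuning means running gradient updates that
# change the model's weights, which prompting and RAG never do.
# A tiny nn.Linear stands in for a pretrained LLM here.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a "pretrained" model.
model = nn.Linear(4, 2)
pretrained_weights = model.weight.detach().clone()

# Toy "finetuning data": your own inputs and target outputs.
inputs = torch.randn(8, 4)
targets = torch.randn(8, 2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# A few gradient steps: each one updates the weights in place.
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# After finetuning, the weights differ from the pretrained ones.
weights_changed = not torch.allclose(pretrained_weights, model.weight)
print(weights_changed)
```

With a real LLM the loop is usually wrapped by a training library rather than written by hand, but the principle is the same: the data flows through a loss, and the optimizer rewrites the weights.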

Who should join?

Learners who want to understand the techniques and applications of finetuning. Familiarity with Python and a deep learning framework such as PyTorch is recommended.


Sharon Zhou

Co-Founder and CEO of Lamini

Course access is free for a limited time during the DeepLearning.AI learning platform beta!

Want to learn more about Generative AI?

Keep learning with updates on curated AI news, courses, and events, as well as Andrew’s thoughts, from DeepLearning.AI!