Three New Courses! Check out our short courses on Building Systems with the ChatGPT API, LangChain for LLM Application Development, and How Diffusion Models Work.

Dear friends,

In April, DeepLearning.AI launched a short course, “ChatGPT Prompt Engineering for Developers,” taught by OpenAI’s Isa Fulford and me.

I’m thrilled to announce three more short courses, available today:

  • Building Systems with the ChatGPT API, taught by returning instructor Isa Fulford and me: This course goes beyond writing individual prompts and shows you how to break down a complex task — such as building a customer-service assistant system — into simpler tasks that you can accomplish via multiple API calls to a large language model (LLM). You’ll also learn how to check LLM outputs for safety and accuracy and how to systematically evaluate the quality of an LLM’s output to drive iterative improvements. You’ll come away with a deeper understanding of how LLMs work (including tokenization and how the chat format works) and how this affects your applications, and gain a solid foundation for building applications using LLMs. (A rough sketch of this multi-call pattern appears after this list.)
  • LangChain for LLM Application Development, taught by LangChain CEO Harrison Chase and me: LangChain is a powerful open-source tool for building applications using LLMs. Complex applications — for example, a QA (Question Answering) system that answers queries about a text document — require prompting an LLM multiple times, parsing the output to feed into downstream prompts, and so on; thus, a lot of “glue” code is needed. You’ll learn how to use LangChain’s tools to make these operations easy. We also discuss the cutting-edge (and experimental) agents framework for using an LLM as a reasoning engine that can decide for itself what steps to take next, such as when to call an external subroutine. (A sketch of this prompt-chaining glue appears below.)
  • How Diffusion Models Work, taught by Lamini CEO Sharon Zhou: Diffusion models enable Midjourney, DALL·E 2, and Stable Diffusion to generate beautiful images from a text prompt. This technical course walks you through the details of how they work, including how to (i) add noise to training images to go from a clean image to pure noise, (ii) train a U-Net neural network to estimate the noise so it can be subtracted off, (iii) add input context so that you can tell the network what to generate, and (iv) use the DDIM technique to significantly speed up inference. You’ll go through code to generate 16x16-pixel sprites (similar to characters in 8-bit video games). By the end, you’ll understand how diffusion models work and how to adapt them to applications you want to build. You’ll also have code that you can use to generate your own sprites! (A sketch of the forward noising step appears below.)
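To make the first course's multi-call pattern concrete, here is a minimal sketch of a customer-service flow that classifies a query, answers it, and then checks the answer before returning it. It assumes the openai Python package's pre-1.0 ChatCompletion interface; the prompts, category names, and the answer_customer_query helper are illustrative, not code from the course.

```python
# A minimal sketch of chaining multiple LLM calls for a customer-service
# assistant. Prompts and category names below are illustrative only.
import openai

def get_completion(messages, model="gpt-3.5-turbo", temperature=0):
    """Single call to the chat API; returns the assistant's text."""
    response = openai.ChatCompletion.create(
        model=model, messages=messages, temperature=temperature
    )
    return response.choices[0].message["content"]

def answer_customer_query(user_query):
    # Step 1: classify the query so it can be routed to the right prompt.
    category = get_completion([
        {"role": "system", "content":
         "Classify the customer query as one of: billing, technical, other. "
         "Respond with the category only."},
        {"role": "user", "content": user_query},
    ])

    # Step 2: answer using a category-specific system prompt.
    answer = get_completion([
        {"role": "system", "content":
         f"You are a helpful customer-service agent handling a {category} query."},
        {"role": "user", "content": user_query},
    ])

    # Step 3: check the draft answer before showing it to the user.
    verdict = get_completion([
        {"role": "system", "content":
         "Does the response answer the customer's question factually and "
         "politely? Reply Y or N."},
        {"role": "user", "content":
         f"Question: {user_query}\nResponse: {answer}"},
    ])
    return answer if verdict.strip().upper().startswith("Y") else None
```

Splitting classification, answering, and output-checking into separate calls keeps each prompt simple and makes each step independently testable.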
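For the second course, here is a minimal sketch of the kind of glue LangChain handles for you: two prompts chained so that the first call's output is fed to the second without hand-written parsing code. It assumes the 2023-era LLMChain and SimpleSequentialChain APIs; the prompt text and variable names are illustrative.

```python
# A minimal sketch of LangChain glue: two single-input chains composed so
# that each chain's output becomes the next chain's input.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = ChatOpenAI(temperature=0)

# Chain 1: summarize the document.
summarize = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template(
    "Summarize this document in three sentences:\n{document}"))

# Chain 2: generate follow-up questions from the summary.
questions = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template(
    "Write two follow-up questions a reader might ask about:\n{summary}"))

# SimpleSequentialChain passes each chain's output straight into the next,
# replacing the parse-and-reprompt glue code you would otherwise write.
pipeline = SimpleSequentialChain(chains=[summarize, questions], verbose=True)
print(pipeline.run("LangChain is an open-source framework for building ..."))
```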
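And for the diffusion course, here is a minimal PyTorch sketch of step (i), the forward noising process that takes a clean image toward pure noise. The schedule values, image shapes, and the add_noise helper are illustrative assumptions, not course code.

```python
# A minimal sketch of the diffusion forward (noising) process under
# standard DDPM assumptions; the schedule below is illustrative.
import torch

T = 500                                   # number of diffusion timesteps
beta = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alpha = 1.0 - beta
alpha_bar = torch.cumprod(alpha, dim=0)   # cumulative product over timesteps

def add_noise(x0, t):
    """Jump from a clean image x0 straight to its noised version x_t:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = torch.randn_like(x0)            # the noise the U-Net learns to predict
    ab = alpha_bar[t].view(-1, 1, 1, 1)   # broadcast over (batch, C, H, W)
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps
    return xt, eps

# Training-step sketch: a U-Net (not shown) predicts eps from (xt, t),
# and the loss is the MSE between predicted and true noise.
x0 = torch.randn(8, 3, 16, 16)            # a batch of 16x16 "sprites"
t = torch.randint(0, T, (8,))
xt, eps = add_noise(x0, t)
# loss = F.mse_loss(unet(xt, t), eps)
```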

The first two courses are appropriate for anyone who has basic familiarity with Python. The third is more advanced and additionally assumes familiarity with implementing and training neural networks.

Each of these courses can be completed in around 1 to 1.5 hours, and I believe they will be a worthy investment of your time. I hope you will check them out, and — if you haven’t yet — join the fast-growing community of developers who are building applications using Generative AI!

Keep learning,

Andrew
