LEARN GENERATIVE AI

Short Courses

Learn from industry-leading experts and get hands-on experience with the latest generative AI tools and techniques in an hour or less. Explore prompt engineering, AI agents, retrieval augmented generation, and more.

Most Popular

ChatGPT Prompt Engineering for Developers

Go beyond the chat box. Use API access to leverage LLMs into your own applications, and learn to build a custom chatbot.

  • Learn prompt engineering best practices for application development
  • Discover new ways to use LLMs, including how to build your own chatbot
  • Gain hands-on practice writing and iterating on prompts using the OpenAI API
Beginner to Advanced
Isa Fulford, Andrew Ng
Prerequisite recommendation: Basic Python

All courses

Newest
Introduction to On-Device AI

Deploy AI models from the cloud to smartphones and edge devices.

  • Learn to deploy AI models on edge devices like smartphones, using their local compute power for faster and more secure inference.
  • Explore model conversion, converting your PyTorch/TensorFlow models for device compatibility and quantizing them to achieve performance gains while reducing model size.
  • Learn about device integration, including runtime dependencies, and how GPU, NPU, and CPU compute unit utilization affect performance.
Beginner
Krishna Sridhar
Prerequisite recommendation: Familiarity with Python, as well as PyTorch or TensorFlow, is recommended.
Multi AI Agent Systems with crewAI

Automate business workflows with multi-AI agent systems.

  • Exceed the performance of prompting a single LLM by designing and prompting a team of AI agents through natural language.
  • Use an open source library, crewAI, to automate repeatable, multi-step tasks like tailoring a resume to a job description, and to automate business processes typically done by a group of people, like event planning.
  • Define a specific role, goal, and backstory for each agent on your team, breaking complex multi-step tasks into pieces assigned to agents customized to perform them.
Beginner
João Moura
Prerequisite recommendation: If you've taken some prompt engineering courses, have some familiarity with basic coding, and want to incorporate LLMs in your professional work, then this course is designed for you!
Building Multimodal Search and RAG

Build smarter search and RAG applications for multimodal retrieval and generation.

  • Learn how multimodality works by implementing contrastive learning, and see how it can be used to build modality-independent embeddings for seamless any-to-any retrieval.
  • Build multimodal RAG systems that retrieve multimodal context and reason over it to generate more relevant answers.
  • Implement industry applications of multimodal search and build multi-vector recommender systems.
Intermediate
Sebastian Witalec
Prerequisite recommendation: Basic Python knowledge, as well as familiarity with RAG is recommended to get the most out of this course.
Building Agentic RAG with LlamaIndex

Build an agent that can answer complex questions about your documents.

  • Learn how to build an agent that can reason over your documents and answer complex questions.
  • Build a router agent that can help you with Q&A and summarization tasks, and extend it to handle passing arguments to this agent.
  • Design a research agent that handles multiple documents, and learn different ways to debug and control this agent.
Beginner
Jerry Liu
Prerequisite recommendation: Basic Python experience is recommended.
Quantization in Depth

Customize model compression with advanced quantization techniques.

  • Try out different variants of Linear Quantization, including symmetric vs. asymmetric mode, and different granularities like per tensor, per channel, and per group quantization.
  • Build a general-purpose quantizer in PyTorch that can quantize the dense layers of any open source model for up to 4x compression.
  • Implement weights packing to pack four 2-bit weights into a single 8-bit integer.
Intermediate
Marc Sun, Younes Belkada
Prerequisite recommendation: This course builds on the concepts introduced in the Quantization Fundamentals course. Taking it before Quantization in Depth is recommended.
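
The weights packing mentioned above can be sketched in a few lines of plain Python (an illustrative toy, not the course's PyTorch implementation): four values in the range 0–3 each occupy one 2-bit slot of a single byte.

```python
def pack_2bit(weights):
    """Pack four 2-bit weights (values 0-3) into a single 8-bit integer."""
    assert len(weights) == 4 and all(0 <= w <= 3 for w in weights)
    packed = 0
    for i, w in enumerate(weights):
        packed |= w << (2 * i)  # each weight occupies its own 2-bit slot
    return packed

def unpack_2bit(packed):
    """Recover the four 2-bit weights from a packed byte."""
    return [(packed >> (2 * i)) & 0b11 for i in range(4)]

print(pack_2bit([1, 0, 3, 2]))   # 177 (0b10110001)
print(unpack_2bit(177))          # [1, 0, 3, 2]
```

The round trip is lossless, which is why packing cuts storage by 4x without changing the quantized values themselves.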
Prompt Engineering for Vision Models

Prompting and fine-tuning computer vision and image generation models.

  • Prompt vision models with text, coordinates, and bounding boxes, and tune hyper-parameters like guidance scale, strength, and number of inference steps.
  • Replace parts of an image with generated content using in-painting, a technique that combines object detection, image segmentation, and image generation.
  • Fine-tune a diffusion model to have even more control over your image generation to create specific images, including your own, rather than generically generated images.
Beginner
Abby Morgan, Jacques Verré, Caleb Kaiser
Prerequisite recommendation: Python experience is recommended.
Getting Started With Mistral

Learn how and when to use Mistral AI’s leading LLMs.

  • Explore Mistral’s three open source models (Mistral 7B, Mixtral 8x7B, and the latest Mixtral 8x22B), and three commercial models (small, medium, and large), which Mistral provides access to via web interface and API calls.
  • Leverage Mistral’s JSON mode to generate LLM responses in a structured JSON format, enabling integration of LLM outputs into larger software applications.
  • Use Mistral’s API to call user-defined Python functions for tasks like web searches or retrieving text from databases, enhancing the LLM’s ability to find relevant information to answer user queries.
Beginner
Sophia Yang
Prerequisite recommendation: This is a beginner-friendly course.
Quantization Fundamentals with Hugging Face

Learn how to quantize any open source model.

  • Learn how to compress models with the Hugging Face Transformers library and the Quanto library.
  • Learn about linear quantization, a simple yet effective method for compressing models.
  • Practice quantizing open source multimodal and language models.
Beginner
Younes Belkada, Marc Sun
Prerequisite recommendation: A basic understanding of machine learning concepts and some experience with PyTorch.
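
Linear quantization, mentioned in the bullets above, maps floating-point values to low-precision integers via a scale and zero point. A minimal NumPy sketch under simplified assumptions (not the course's Quanto-based code; assumes the input spans a nonzero range):

```python
import numpy as np

def linear_quantize(x, bits=8):
    """Asymmetric linear quantization: floats -> unsigned ints via scale/zero-point."""
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def linear_dequantize(q, scale, zero_point):
    """Approximate reconstruction of the original floats."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = linear_quantize(x)
err = np.abs(linear_dequantize(q, scale, zp) - x).max()  # at most about one scale step
```

The reconstruction error stays below one quantization step, which is the sense in which the method is "simple yet effective."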
Preprocessing Unstructured Data for LLM Applications

Improve your RAG system to retrieve diverse data types.

  • Learn to extract and normalize content from a wide variety of document types, such as PDFs, PowerPoints, Word, and HTML files, tables, and images to expand the information accessible to your LLM.
  • Enrich your content with metadata, enhancing retrieval augmented generation (RAG) results and supporting more nuanced search capabilities.
  • Explore document image analysis techniques like layout detection and vision and table transformers, and learn how to apply these methods to preprocess PDFs, images, and tables.
Beginner
Matt Robinson
Prerequisite recommendation: This is a beginner-friendly course.
Red Teaming LLM Applications

Learn how to make safer LLM apps through red teaming.

  • Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.
  • Apply red teaming techniques from cybersecurity to ensure the safety and reliability of your LLM application.
  • Use an open source library from Giskard to help automate LLM red-teaming methods.
Beginner
Matteo Dora, Luca Martial
Prerequisite recommendation: Basic Python knowledge.
JavaScript RAG Web Apps with LlamaIndex

Build a full-stack web application that uses RAG capabilities to chat with your data.

  • Learn how to build a RAG application in JavaScript, and use an intelligent agent that discerns and selects from multiple data sources to answer your queries.
  • Build a full-stack web app with an interactive frontend component that interacts and chats with your data.
  • Learn how to persist your data, enable chatting with it, and stream responses, all implemented using the create-llama command-line tool.
Beginner
Laurie Voss
Prerequisite recommendation: Basic JavaScript knowledge.
Efficiently Serving LLMs

Gain a ground-up understanding of how to serve LLM applications in production.

  • Learn how Large Language Models (LLMs) repeatedly predict the next token, and how techniques like KV caching can greatly speed up text generation.
  • Write code to efficiently serve LLM applications to a large number of users, and examine the tradeoffs between quickly returning the output of the model and serving many users at once.
  • Explore the fundamentals of Low Rank Adapters (LoRA) and see how Predibase built its LoRAX inference framework to serve multiple fine-tuned models at once.
Intermediate
Travis Addair
Prerequisite recommendation: Intermediate Python knowledge.
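
The KV-caching idea in the first bullet can be illustrated with a toy single-head attention loop (a hypothetical NumPy sketch, not the course code): each generation step appends one key/value row to a cache instead of recomputing projections for every earlier token.

```python
import numpy as np

def attend(q, K, V):
    """Single-head attention of one query vector against all cached keys/values."""
    scores = K @ q / np.sqrt(q.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

d = 4
rng = np.random.default_rng(0)
K_cache = np.zeros((0, d))
V_cache = np.zeros((0, d))
for step in range(3):
    # One new token: compute only its own key/value/query, then append K and V.
    k, v, q = rng.normal(size=(3, d))
    K_cache = np.vstack([K_cache, k])
    V_cache = np.vstack([V_cache, v])
    out = attend(q, K_cache, V_cache)  # attends over all tokens seen so far

print(K_cache.shape)  # the cache grows one row per generated token
```

Because only the new token's projections are computed at each step, per-token work stays roughly constant instead of growing with sequence length.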
Knowledge Graphs for RAG

Learn how to build and use knowledge graph systems to improve your retrieval augmented generation applications.

  • Use Neo4j’s query language Cypher to manage and retrieve data stored in knowledge graphs.
  • Write knowledge graph queries that find and format text data to provide more relevant context to LLMs for Retrieval Augmented Generation.
  • Build a question-answering system using Neo4j and LangChain to chat with a knowledge graph of structured text documents.
Intermediate
Andreas Kollegger
Prerequisite recommendation: We recommend familiarity with LangChain or taking "LangChain: Chat with Your Data" prior to this course.
Open Source Models with Hugging Face

Learn how to easily build AI applications using open source models and Hugging Face tools.

  • Find and filter open source models on Hugging Face Hub based on task, rankings, and memory requirements.
  • Write just a few lines of code using the transformers library to perform text, audio, image, and multimodal tasks.
  • Easily share your AI apps with a user-friendly interface or via API and run them on the cloud using Gradio and Hugging Face Spaces.
Beginner
Maria Khalusova, Marc Sun, Younes Belkada
Prerequisite recommendation: This is a beginner-friendly course.
Prompt Engineering with Llama 2 & 3

Learn best practices for prompting and selecting among Meta Llama 2 & 3 models.

  • Learn best practices specific to prompting Llama 2 & 3 models.
  • Interact with Meta Llama 2 Chat, Code Llama, and Llama Guard models.
  • See how you can build safe, responsible AI applications using the Llama Guard model.
Beginner
Amit Sangani
Prerequisite recommendation: This is a beginner-friendly course.
Serverless LLM apps with Amazon Bedrock

Learn how to deploy a large language model-based application into production using serverless technology.

  • Learn how to prompt and customize your LLM responses using Amazon Bedrock.
  • Summarize audio conversations by first transcribing an audio file and passing the transcription to an LLM.
  • Deploy an event-driven audio summarizer that runs as new audio files are uploaded using a serverless architecture.
Intermediate
Mike Chambers
Prerequisite recommendation: Familiarity with Python and AWS services.
Building Applications with Vector Databases

Learn to build six applications powered by vector databases: semantic search, retrieval augmented generation (RAG), anomaly detection, hybrid search, image similarity search, and recommender systems, each using a different dataset.

  • Learn to create six exciting applications of vector databases and implement them using Pinecone.
  • Build a hybrid search app that combines both text and images for improved multimodal search results.
  • Learn how to build an app that measures and ranks facial similarity.
Beginner
Tim Tully
Prerequisite recommendation: Basic knowledge of Python, machine learning, and large language models.
Automated Testing for LLMOps

Learn how to create an automated continuous integration (CI) pipeline to evaluate your LLM applications on every change, for faster, safer, and more efficient application development.

  • Learn how LLM-based testing differs from traditional software testing and implement rules-based testing to assess your LLM application.
  • Build model-graded evaluations to test your LLM application using an evaluation LLM.
  • Automate your evals (rules-based and model-graded) using continuous integration tools from CircleCI.
Intermediate
Rob Zuber
Prerequisite recommendation: Basic Python knowledge and familiarity with building LLM-based applications.
LLMOps

Learn LLMOps best practices as you design and automate the steps to tune an LLM for a specific task and deploy it as a callable API. In the course, you'll tune an LLM to act as a question-answering coding expert. You can apply the methods learned here to tune your own LLM for other use cases.

  • Adapt an open source pipeline that applies supervised fine-tuning on an LLM to better answer user questions.
  • Learn best practices, including versioning your data and your models, and pre-process large datasets inside a data warehouse.
  • Learn responsible AI by outputting safety scores on sub-categories of harmful content.
Beginner
Erwin Huizenga
Prerequisite recommendation: Basic Python
Build LLM Apps with LangChain.js

Expand your toolkits with LangChain.js, a popular JavaScript framework for building with LLMs, and get useful concepts for creating powerful, context-aware applications.

  • Understand the fundamentals of using LangChain’s JavaScript library to orchestrate and chain different modules together.
  • Learn the basics of loading and preparing data to provide as context to effectively customize LLM generations.
  • Learn techniques to retrieve and present data to the LLM in useful ways for a conversational retrieval chain.
Intermediate
Jacob Lee
Prerequisite recommendation: Intermediate JavaScript knowledge
Advanced Retrieval for AI with Chroma

Learn advanced retrieval techniques to improve the relevancy of retrieved results.

  • Learn to recognize when queries are producing poor results.
  • Learn to use a large language model (LLM) to improve your queries.
  • Learn to fine-tune your embeddings with user feedback.
Intermediate
Anton Troynikov
Prerequisite recommendation: Intermediate Python
Reinforcement Learning from Human Feedback

A conceptual and hands-on introduction to tuning and evaluating large language models (LLMs) using Reinforcement Learning from Human Feedback.

  • Get a conceptual understanding of Reinforcement Learning from Human Feedback (RLHF), as well as the datasets needed for this technique
  • Fine-tune the Llama 2 model using RLHF with the open source Google Cloud Pipeline Components Library
  • Evaluate tuned model performance against the base model with evaluation methods
Intermediate
Nikita Namjoshi
Prerequisite recommendation: Intermediate Python
Building and Evaluating Advanced RAG Applications

Learn how to efficiently bring Retrieval Augmented Generation (RAG) into production by enhancing retrieval techniques and mastering evaluation metrics.

  • Learn methods like sentence-window retrieval and auto-merging retrieval, improving your RAG pipeline’s performance beyond the baseline.
  • Learn evaluation best practices to streamline your process, and iteratively build a robust system.
  • Dive into the RAG triad for evaluating the relevance and truthfulness of an LLM’s response: Context Relevance, Groundedness, and Answer Relevance.
Beginner
Jerry Liu, Anupam Datta
Prerequisite recommendation: Basic Python
Quality and Safety for LLM Applications

Learn how to evaluate the safety and security of your LLM applications and protect against potential risks.

  • Monitor and enhance security measures over time to safeguard your LLM applications.
  • Detect and prevent critical security threats like hallucinations, jailbreaks, and data leakage.
  • Explore real-world scenarios to prepare for potential risks and vulnerabilities.
Beginner
Bernease Herman
Prerequisite recommendation: Basic Python
Vector Databases: from Embeddings to Applications

Design and execute real-world applications of vector databases.

  • Build efficient, practical applications, including hybrid and multilingual searches, for diverse industries.
  • Understand vector databases and use them to develop GenAI applications without needing to train or fine-tune an LLM yourself.
  • Learn to discern when best to apply a vector database to your application.
Intermediate
Sebastian Witalec
Prerequisite recommendation: Basic Python. Familiarity with data structures is useful but not required.
Functions, Tools and Agents with LangChain

Learn and apply the new capabilities of LLMs as a developer tool.

  • Learn about the most recent advancements in LLM APIs.
  • Use LangChain Expression Language (LCEL), a new syntax to compose and customize chains and agents faster.
  • Apply these new capabilities by building up a conversational agent.
Intermediate
Harrison Chase
Prerequisite recommendation: Basic Python and familiarity with writing prompts for Large Language Models.
Pair Programming with a Large Language Model

Learn how to effectively prompt an LLM to help you improve, debug, understand, and document your code.

  • Use LLMs to simplify your code and become a more productive software engineer
  • Reduce technical debt by explaining and documenting a complex existing code base
  • Get free access to the PaLM API for use throughout the course
Beginner
Laurence Moroney
Prerequisite recommendation: Basic Python
Understanding and Applying Text Embeddings

Learn how to accelerate the application development process with text embeddings.

  • Employ text embeddings for sentence and paragraph meaning
  • Use text embeddings for clustering, classification, and outlier detection
  • Build a question-answering system with Google Cloud’s Vertex AI
Beginner
Nikita Namjoshi, Andrew Ng
Prerequisite recommendation: Basic Python
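
Embedding-based tasks like the ones listed above usually reduce to comparing vectors with cosine similarity. A minimal sketch with made-up 4-dimensional vectors (real embedding models return hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: semantically close texts get nearby vectors.
cat    = np.array([0.90, 0.10, 0.00, 0.20])
kitten = np.array([0.85, 0.15, 0.05, 0.25])
car    = np.array([0.10, 0.90, 0.30, 0.00])

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

Clustering, classification, and outlier detection over embeddings are all built on top of this kind of distance computation.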
How Business Thinkers Can Start Building AI Plugins With Semantic Kernel

Learn Microsoft’s open source orchestrator, Semantic Kernel, and develop business applications using LLMs.

  • Learn Microsoft’s open-source orchestrator, the Semantic Kernel
  • Develop your business planning and analysis skills while leveraging AI tools
  • Advance your skills in LLMs by using memories, connectors, chains, and more
Beginner
John Maeda
Prerequisite recommendation: Basic Python
Finetuning Large Language Models

Learn to finetune an LLM in minutes and specialize it to use your own data

  • Master LLM finetuning basics
  • Differentiate finetuning from prompt engineering and know when to use each
  • Gain hands-on experience with real datasets for your projects
Intermediate
Sharon Zhou
Prerequisite recommendation: Basic Python
Evaluating and Debugging Generative AI Models Using Weights and Biases

Learn MLOps tools for managing, versioning, debugging and experimenting in your ML workflow.

  • Learn to evaluate LLM and image models with platform-independent tools
  • Instrument training notebooks for tracking, versioning, and logging
  • Monitor and trace LLM behavior in complex interactions over time
Intermediate
Carey Phelps
Prerequisite recommendation: Familiarity with Python; familiarity with PyTorch or a similar framework is helpful.
Building Generative AI Applications with Gradio

Create and demo machine learning applications quickly. Share your app with the world on Hugging Face Spaces.

  • Rapidly develop ML apps
  • Create image generation, captioning, and text summarization apps
  • Share your apps with teammates and beta testers on Hugging Face Spaces
Beginner
Apolinário Passos
Prerequisite recommendation: Basic Python
LangChain: Chat with Your Data

In collaboration with LangChain 🦜🔗

Create a chatbot to interface with your private data and documents using LangChain.

  • Learn from LangChain creator, Harrison Chase
  • Utilize 80+ loaders for diverse data sources in LangChain
  • Create a chatbot to interact with your own documents and data
Beginner
Harrison Chase
Prerequisite recommendation: Basic Python
How Diffusion Models Work

Learn and build diffusion models from the ground up. Start with an image of pure noise, and arrive at a final image, learning and building intuition at each step along the way.

  • Understand diffusion models in use today
  • Build your own diffusion model, and learn to train it
  • Implement algorithms to speed up sampling 10x
Intermediate
Sharon Zhou
Prerequisite recommendation: Python, TensorFlow, or PyTorch
LangChain for LLM Application Development

In collaboration with LangChain 🦜🔗

The framework to take LLMs out of the box. Learn to use LangChain to call LLMs into new environments, and use memories, chains, and agents to take on new and complex tasks.

  • Learn LangChain directly from the creator of the framework, Harrison Chase
  • Apply LLMs to proprietary data to build personal assistants and specialized chatbots
  • Use agents, chained calls, and memories to expand your use of LLMs
Beginner
Harrison Chase, Andrew Ng
Prerequisite recommendation: Basic Python
Building Systems with the ChatGPT API

Level up your use of LLMs. Learn to break down complex tasks, automate workflows, chain LLM calls, and get better outputs.

  • Efficiently build multi-step systems using large language models.
  • Learn to split complex tasks into a pipeline of subtasks using multistage prompts.
  • Evaluate your LLM inputs and outputs for safety, accuracy, and relevance.
Beginner to Advanced
Isa Fulford, Andrew Ng
Prerequisite recommendation: Basic Python
ChatGPT Prompt Engineering for Developers

Go beyond the chat box. Use API access to leverage LLMs into your own applications, and learn to build a custom chatbot.

  • Learn prompt engineering best practices for application development
  • Discover new ways to use LLMs, including how to build your own chatbot
  • Gain hands-on practice writing and iterating on prompts using the OpenAI API
Beginner to Advanced
Isa Fulford, Andrew Ng
Prerequisite recommendation: Basic Python

Interested in more GenAI courses?

Generative AI with Large Language Models (LLMs)

Learn the fundamentals of how generative AI works, and how to deploy it in real-world applications.

Generative AI offers many opportunities for AI engineers to build, in minutes or hours, powerful applications that previously would have taken days or weeks. I'm excited about sharing these best practices to enable many more people to take advantage of these revolutionary new capabilities.

- Andrew Ng