LEARN GENERATIVE AI

Short Courses

Take your generative AI skills to the next level with short courses from DeepLearning.AI. Our short courses help you learn new skills, tools, and concepts efficiently. Available for free for a limited time.

Just Added

Efficiently Serving LLMs

Gain a ground-up understanding of how to serve LLM applications in production.

  • Learn how Large Language Models (LLMs) repeatedly predict the next token, and how techniques like KV caching can greatly speed up text generation.
  • Write code to efficiently serve LLM applications to a large number of users, and examine the tradeoffs between quickly returning the output of the model and serving many users at once.
  • Explore the fundamentals of Low Rank Adapters (LoRA) and see how Predibase builds their LoRAX framework inference server to serve multiple fine-tuned models at once.
Intermediate
Travis Addair
Prerequisite recommendation: Intermediate Python knowledge.
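The first bullet can be made concrete with a toy cost model (not a real transformer; the numbers only count how many tokens get processed): without a KV cache, each generation step re-encodes the entire prefix, so total work grows quadratically, while a cache stores each token's attention state so every step after prefill processes only the newest token.

```python
# Toy illustration of why KV caching speeds up generation: count how
# many token "encodings" a decoder performs with and without a cache.

def generate(prompt_len: int, new_tokens: int, use_kv_cache: bool) -> int:
    """Return the total number of tokens processed during generation."""
    work = 0
    seq_len = prompt_len
    cache_filled = False
    for _ in range(new_tokens):
        if use_kv_cache and cache_filled:
            work += 1           # cache hit: only the newest token is processed
        else:
            work += seq_len     # the whole prefix is (re-)encoded
            cache_filled = True
        seq_len += 1
    return work

print(generate(prompt_len=100, new_tokens=50, use_kv_cache=False))  # → 6225
print(generate(prompt_len=100, new_tokens=50, use_kv_cache=True))   # → 149
```

For a 100-token prompt and 50 generated tokens, the cached version does roughly 40x less work in this toy accounting, which is the intuition behind the speedups discussed in the course.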

All courses

ChatGPT Prompt Engineering for Developers

Go beyond the chat box. Use API access to leverage LLMs into your own applications, and learn to build a custom chatbot.

  • Learn prompt engineering best practices for application development
  • Discover new ways to use LLMs, including how to build your own chatbot
  • Gain hands-on practice writing and iterating on prompts using the OpenAI API
Beginner to Advanced
Isa Fulford, Andrew Ng
Prerequisite recommendation: Basic Python

Building Systems with the ChatGPT API

Level up your use of LLMs. Learn to break down complex tasks, automate workflows, chain LLM calls, and get better outputs.

  • Efficiently build multi-step systems using large language models.
  • Learn to split complex tasks into a pipeline of subtasks using multistage prompts.
  • Evaluate your LLM inputs and outputs for safety, accuracy, and relevance.
Beginner to Advanced
Isa Fulford, Andrew Ng
Prerequisite recommendation: Basic Python

LangChain for LLM Application Development

In collaboration with LangChain 🦜🔗

The framework to take LLMs out of the box. Learn to use LangChain to call LLMs into new environments, and use memories, chains, and agents to take on new and complex tasks.

  • Learn LangChain directly from the creator of the framework, Harrison Chase
  • Apply LLMs to proprietary data to build personal assistants and specialized chatbots
  • Use agents, chained calls, and memories to expand your use of LLMs
Beginner
Harrison Chase, Andrew Ng
Prerequisite recommendation: Basic Python

LangChain: Chat with Your Data

In collaboration with LangChain 🦜🔗

Create a chatbot to interface with your private data and documents using LangChain.

  • Learn from LangChain creator, Harrison Chase
  • Utilize 80+ loaders for diverse data sources in LangChain
  • Create a chatbot to interact with your own documents and data
Beginner
Harrison Chase
Prerequisite recommendation: Basic Python

Finetuning Large Language Models

Learn to finetune an LLM in minutes and specialize it to use your own data.

  • Master LLM finetuning basics
  • Differentiate finetuning from prompt engineering and know when to use each
  • Gain hands-on experience with real datasets for your projects
Intermediate
Sharon Zhou
Prerequisite recommendation: Basic Python

Building Generative AI Applications with Gradio

Create and demo machine learning applications quickly. Share your app with the world on Hugging Face Spaces.

  • Rapidly develop ML apps
  • Create image generation, captioning, and text summarization apps
  • Share your apps with teammates and beta testers on Hugging Face Spaces
Beginner
Apolinário Passos
Prerequisite recommendation: Basic Python

Evaluating and Debugging Generative AI Models Using Weights and Biases

Learn MLOps tools for managing, versioning, debugging and experimenting in your ML workflow.

  • Learn to evaluate LLM and image models with platform-independent tools
  • Instrument training notebooks for tracking, versioning, and logging
  • Monitor and trace LLM behavior in complex interactions over time
Intermediate
Carey Phelps
Prerequisite recommendation: Familiarity with Python. Helpful to be familiar with PyTorch or a similar framework.

How Diffusion Models Work

Learn and build diffusion models from the ground up. Start with an image of pure noise, and arrive at a final image, learning and building intuition at each step along the way.

  • Understand diffusion models in use today
  • Build your own diffusion model, and learn to train it
  • Implement algorithms to speed up sampling 10x
Intermediate
Sharon Zhou
Prerequisite recommendation: Python, TensorFlow, or PyTorch
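The "start with pure noise" idea rests on the forward (noising) process, which the course builds up step by step. A minimal sketch of the standard DDPM-style forward process on a single scalar value, x_t = sqrt(ᾱ_t)·x₀ + sqrt(1 − ᾱ_t)·ε, using a linear beta schedule (the schedule constants here are the common defaults, not necessarily the course's exact values):

```python
import math
import random

# Forward (noising) process sketch: as t grows, alpha_bar_t -> 0 and the
# sample x_t approaches pure Gaussian noise, which is where sampling starts.

T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # linear schedule
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b          # alpha_bar_t = product of (1 - beta_s) for s <= t
    alpha_bars.append(prod)

def q_sample(x0: float, t: int, rng: random.Random) -> float:
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(ab)*x0, 1 - ab)."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
print(alpha_bars[10], alpha_bars[999])  # near 1.0 early, near 0.0 at t = T-1
print(q_sample(1.0, 999, rng))          # essentially pure noise
```

A trained denoising network runs this process in reverse, predicting and removing the noise one step at a time.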
Efficiently Serving LLMs

Gain a ground-up understanding of how to serve LLM applications in production.

  • Learn how Large Language Models (LLMs) repeatedly predict the next token, and how techniques like KV caching can greatly speed up text generation.
  • Write code to efficiently serve LLM applications to a large number of users, and examine the tradeoffs between quickly returning the output of the model and serving many users at once.
  • Explore the fundamentals of Low Rank Adapters (LoRA) and see how Predibase builds their LoRAX framework inference server to serve multiple fine-tuned models at once.
Intermediate
Travis Addair
Prerequisite recommendation: Intermediate Python knowledge.

Knowledge Graphs for RAG

Learn how to build and use knowledge graph systems to improve your retrieval augmented generation applications.

  • Use Neo4j’s query language Cypher to manage and retrieve data stored in knowledge graphs.
  • Write knowledge graph queries that find and format text data to provide more relevant context to LLMs for Retrieval Augmented Generation.
  • Build a question-answering system using Neo4j and LangChain to chat with a knowledge graph of structured text documents.
Intermediate
Andreas Kollegger
Prerequisite recommendation: We recommend familiarity with LangChain or taking "LangChain: Chat with Your Data" prior to this course.
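The core idea of the second bullet is that graph queries gather structured facts and format them as text context for an LLM. A deliberately tiny sketch, with the graph as a list of (subject, relation, object) triples and invented example entities; the course itself uses Neo4j with Cypher queries rather than this in-memory stand-in:

```python
# Toy knowledge-graph retrieval for RAG: collect one-hop facts about an
# entity and format them as sentences an LLM can use as context.
# The triples below are made-up examples, not course data.

TRIPLES = [
    ("Form 10-K", "FILED_BY", "Acme Corp"),
    ("Form 10-K", "HAS_SECTION", "Risk Factors"),
    ("Risk Factors", "MENTIONS", "supply chain"),
]

def facts_about(entity: str) -> list[str]:
    """Return every triple touching `entity`, rendered as plain text."""
    out = []
    for s, r, o in TRIPLES:
        if entity in (s, o):
            out.append(f"{s} {r.replace('_', ' ').lower()} {o}")
    return out

print(facts_about("Form 10-K"))
# → ['Form 10-K filed by Acme Corp', 'Form 10-K has section Risk Factors']
```

In a real system the `facts_about` step would be a Cypher query against Neo4j, but the flow — query the graph, format the hits, prepend them to the LLM prompt — is the same.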
Open Source Models with Hugging Face

Learn how to easily build AI applications using open source models and Hugging Face tools.

  • Find and filter open source models on Hugging Face Hub based on task, rankings, and memory requirements.
  • Write just a few lines of code using the transformers library to perform text, audio, image, and multimodal tasks.
  • Easily share your AI apps with a user-friendly interface or via API and run them on the cloud using Gradio and Hugging Face Spaces.
Beginner
Maria Khalusova, Marc Sun, Younes Belkada
Prerequisite recommendation: This is a beginner-friendly course.

Prompt Engineering with Llama 2

Learn best practices for prompting and selecting among Meta Llama 2 models.

  • Learn best practices specific to prompting Llama 2 models.
  • Interact with Meta Llama 2 Chat, Code Llama, and Llama Guard models.
  • See how you can build safe, responsible AI applications using the Llama Guard model.
Beginner
Amit Sangani
Prerequisite recommendation: This is a beginner-friendly course.

Serverless LLM apps with Amazon Bedrock

Learn how to deploy a large language model-based application into production using serverless technology.

  • Learn how to prompt and customize your LLM responses using Amazon Bedrock.
  • Summarize audio conversations by first transcribing an audio file and passing the transcription to an LLM.
  • Deploy an event-driven audio summarizer that runs as new audio files are uploaded, using a serverless architecture.
Intermediate
Mike Chambers
Prerequisite recommendation: Familiarity with Python and AWS services.

Building Applications with Vector Databases

Learn to build six applications powered by vector databases: semantic search, retrieval augmented generation (RAG), anomaly detection, hybrid search, image similarity search, and recommender systems, each using a different dataset.

  • Learn to create six exciting applications of vector databases and implement them using Pinecone.
  • Build a hybrid search app that combines both text and images for improved multimodal search results.
  • Learn how to build an app that measures and ranks facial similarity.
Beginner
Tim Tully
Prerequisite recommendation: Basic Python, machine learning, and large language models knowledge.
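All six applications rest on one primitive: ranking stored vectors by similarity to a query vector. A minimal sketch with made-up 3-dimensional vectors; a production system like Pinecone uses learned embeddings with hundreds of dimensions and an approximate-nearest-neighbor index rather than this brute-force scan:

```python
import math

# Minimal semantic search: rank documents by cosine similarity between
# a query vector and each stored document vector.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented toy "embeddings" for three documents.
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

def search(query_vec, k=2):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]), reverse=True)
    return ranked[:k]

print(search([0.85, 0.2, 0.05]))  # → ['refund policy', 'shipping times']
```

Hybrid search, recommenders, and anomaly detection vary the scoring and filtering around this same nearest-neighbor core.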
Automated Testing for LLMOps

Learn how to create an automated continuous integration (CI) pipeline to evaluate your LLM applications on every change, for faster, safer, and more efficient application development.

  • Learn how LLM-based testing differs from traditional software testing and implement rules-based testing to assess your LLM application.
  • Build model-graded evaluations to test your LLM application using an evaluation LLM.
  • Automate your evals (rules-based and model-graded) using continuous integration tools from CircleCI.
Intermediate
Rob Zuber
Prerequisite recommendation: Basic Python knowledge and familiarity with building LLM-based applications.
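The first bullet's rules-based testing can be sketched in a few lines: because LLM output varies between runs, CI assertions check structural properties of a response rather than an exact string. `fake_llm` below is an invented stand-in for a real model call; in a CircleCI pipeline the same checks would run against live output on every change:

```python
import re

# Rules-based testing sketch for an LLM app that generates quizzes:
# assert properties of the response, not an exact string match.

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call (deterministic for illustration)."""
    return "Quiz: 1) What is 2+2? 2) Name a prime number. 3) What is 10/2?"

def check_quiz_response(text: str, expected_questions: int) -> list[str]:
    """Return a list of rule violations; an empty list means the test passes."""
    failures = []
    if not text.lower().startswith("quiz"):
        failures.append("missing 'Quiz' header")
    if len(re.findall(r"\d\)", text)) != expected_questions:
        failures.append("wrong number of questions")
    if "?" not in text:
        failures.append("no questions found")
    return failures

response = fake_llm("Generate a 3-question math quiz.")
print(check_quiz_response(response, expected_questions=3))  # → []
```

Model-graded evaluation, covered in the second bullet, replaces these hand-written rules with a second LLM that judges the response.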
LLMOps

Learn LLMOps best practices as you design and automate the steps to tune an LLM for a specific task and deploy it as a callable API. In the course, you'll tune an LLM to act as a question-answering coding expert. You can apply the methods learned here to tune your own LLM for other use cases.

  • Adapt an open source pipeline that applies supervised fine-tuning on an LLM to better answer user questions.
  • Learn best practices, including versioning your data and your models, and pre-process large datasets inside a data warehouse.
  • Learn responsible AI by outputting safety scores on sub-categories of harmful content.
Beginner
Erwin Huizenga
Prerequisite recommendation: Basic Python

Build LLM Apps with LangChain.js

Expand your toolkits with LangChain.js, a popular JavaScript framework for building with LLMs, and get useful concepts for creating powerful, context-aware applications.

  • Understand the fundamentals of using LangChain’s JavaScript library to orchestrate and chain different modules together.
  • Learn the basics of loading and preparing data to provide as context to effectively customize LLM generations.
  • Learn techniques to retrieve and present data to the LLM in useful ways for a conversational retrieval chain.
Intermediate
Jacob Lee
Prerequisite recommendation: Intermediate JavaScript knowledge

Advanced Retrieval for AI with Chroma

Learn advanced retrieval techniques to improve the relevancy of retrieved results.

  • Learn to recognize when queries are producing poor results.
  • Learn to use a large language model (LLM) to improve your queries.
  • Learn to fine-tune your embeddings with user feedback.
Intermediate
Anton Troynikov
Prerequisite recommendation: Intermediate Python

Reinforcement Learning from Human Feedback

A conceptual and hands-on introduction to tuning and evaluating large language models (LLMs) using Reinforcement Learning from Human Feedback.

  • Get a conceptual understanding of Reinforcement Learning from Human Feedback (RLHF), as well as the datasets needed for this technique
  • Fine-tune the Llama 2 model using RLHF with the open source Google Cloud Pipeline Components Library
  • Evaluate tuned model performance against the base model with evaluation methods
Intermediate
Nikita Namjoshi
Prerequisite recommendation: Intermediate Python

Building and Evaluating Advanced RAG Applications

Learn how to efficiently bring Retrieval Augmented Generation (RAG) into production by enhancing retrieval techniques and mastering evaluation metrics.

  • Learn methods like sentence-window retrieval and auto-merging retrieval, improving your RAG pipeline’s performance beyond the baseline.
  • Learn evaluation best practices to streamline your process, and iteratively build a robust system.
  • Dive into the RAG triad for evaluating the relevance and truthfulness of an LLM’s response: Context Relevance, Groundedness, and Answer Relevance.
Beginner
Jerry Liu, Anupam Datta
Prerequisite recommendation: Basic Python
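The RAG triad compares three pairs of texts: query vs. retrieved context, answer vs. context, and query vs. answer. A toy sketch using word overlap as a crude stand-in for the scores; real evaluators in this space use an LLM or embedding similarity as the feedback function, not set intersection, and the example strings are invented:

```python
# Toy RAG-triad scoring: word overlap as a stand-in for LLM-based
# feedback functions. Each score is the fraction of words in the first
# text that also appear in the second.

def overlap_score(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa) if wa else 0.0

query = "when was the eiffel tower built"
context = "the eiffel tower was built between 1887 and 1889"
answer = "the eiffel tower was built between 1887 and 1889"

context_relevance = overlap_score(query, context)  # did retrieval find relevant text?
groundedness = overlap_score(answer, context)      # is the answer supported by context?
answer_relevance = overlap_score(query, answer)    # does the answer address the query?
print(context_relevance, groundedness, answer_relevance)
```

A low groundedness score with high answer relevance is the classic hallucination signature: the answer sounds on-topic but isn't supported by what was retrieved.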
Quality and Safety for LLM Applications

Learn how to evaluate the safety and security of your LLM applications and protect against potential risks.

  • Monitor and enhance security measures over time to safeguard your LLM applications.
  • Detect and prevent critical security threats like hallucinations, jailbreaks, and data leakage.
  • Explore real-world scenarios to prepare for potential risks and vulnerabilities.
Beginner
Bernease Herman
Prerequisite recommendation: Basic Python

Vector Databases: from Embeddings to Applications

Design and execute real-world applications of vector databases.

  • Build efficient, practical applications, including hybrid and multilingual searches, for diverse industries.
  • Understand vector databases and use them to develop GenAI applications without needing to train or fine-tune an LLM yourself.
  • Learn to discern when best to apply a vector database to your application.
Intermediate
Sebastian Witalec
Prerequisite recommendation: Basic Python. Familiarity with data structures is useful but not required.

Functions, Tools and Agents with LangChain

Learn and apply the new capabilities of LLMs as a developer tool.

  • Learn about the most recent advancements in LLM APIs.
  • Use LangChain Expression Language (LCEL), a new syntax to compose and customize chains and agents faster.
  • Apply these new capabilities by building up a conversational agent.
Intermediate
Harrison Chase
Prerequisite recommendation: Basic Python and familiarity with writing prompts for Large Language Models.

Pair Programming with a Large Language Model

Learn how to effectively prompt an LLM to help you improve, debug, understand, and document your code.

  • Use LLMs to simplify your code and become a more productive software engineer
  • Reduce technical debt by explaining and documenting a complex existing code base
  • Get free access to the PaLM API for use throughout the course
Beginner
Laurence Moroney
Prerequisite recommendation: Basic Python

Understanding and Applying Text Embeddings

Learn how to accelerate the application development process with text embeddings.

  • Employ text embeddings for sentence and paragraph meaning
  • Use text embeddings for clustering, classification, and outlier detection
  • Build a question-answering system with Google Cloud’s Vertex AI
Beginner
Nikita Namjoshi, Andrew Ng
Prerequisite recommendation: Basic Python

How Business Thinkers Can Start Building AI Plugins With Semantic Kernel

Learn Microsoft’s open source orchestrator, Semantic Kernel, and develop business applications using LLMs.

  • Learn Microsoft’s open-source orchestrator, the Semantic Kernel
  • Develop your business planning and analysis skills while leveraging AI tools
  • Advance your skills in LLMs by using memories, connectors, chains, and more
Beginner
John Maeda
Prerequisite recommendation: Basic Python

Interested in more GenAI courses?

Generative AI with Large Language Models (LLMs)

Learn the fundamentals of how generative AI works, and how to deploy it in real-world applications.

Generative AI offers many opportunities for AI engineers to build, in minutes or hours, powerful applications that previously would have taken days or weeks. I'm excited about sharing these best practices to enable many more people to take advantage of these revolutionary new capabilities.

- Andrew Ng