
AI Python for Beginners
Learn Python programming with AI assistance. Gain skills in writing, testing, and debugging code efficiently, and create real-world AI applications.
Grow your AI career with foundational specializations and skill-specific short courses taught by leaders in the field.
Adapt LLMs for specific tasks and behaviors using post-training techniques like SFT, DPO, and online RL.
Build multimodal and long-context GenAI applications using Llama 4 open models, the Llama API, and Llama tools.
Learn how an AI assistant is built to use a computer and accomplish tasks on it.
Build LLM apps that can process very long documents using the Jamba model.
Learn to build with LLMs by creating a fun interactive game from scratch.
Build agents that collaborate to solve complex business tasks.
Efficiently handle time-varying workloads with serverless agentic workflows and responsible agents built on Amazon Bedrock.
Try out the features of the new Llama 3.2 models to build multimodal AI applications.
Build faster and more relevant vector search for your LLM applications.
Learn best practices for multimodal prompting using Google’s Gemini model.
Build agentic AI workflows using LangChain's LangGraph and Tavily's agentic search.
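To give a sense of what such a workflow looks like in code, here is a minimal sketch of a two-node LangGraph graph; the node bodies are placeholder Python standing in for the LLM and Tavily search calls used in the course.

```python
# A minimal LangGraph sketch (assumes the langgraph package; node logic is
# placeholder Python rather than real LLM or Tavily calls).
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    answer: str

def research(state: State) -> dict:
    # In a real agent, this step would call an LLM plus Tavily's search API.
    return {"answer": f"Gathered notes about: {state['question']}"}

def summarize(state: State) -> dict:
    return {"answer": state["answer"] + " (summarized)"}

graph = StateGraph(State)
graph.add_node("research", research)
graph.add_node("summarize", summarize)
graph.set_entry_point("research")
graph.add_edge("research", "summarize")
graph.add_edge("summarize", END)

app = graph.compile()
print(app.invoke({"question": "What is agentic search?", "answer": ""}))
```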
Learn prompt engineering for vision models using Stable Diffusion, and advanced techniques like object detection and in-painting.
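As a rough illustration of in-painting, here is a sketch using the diffusers library; the checkpoint name and image files are assumptions, and a GPU is recommended.

```python
# A minimal in-painting sketch (assumes the stabilityai/stable-diffusion-2-inpainting
# checkpoint and local photo.png / mask.png files; both are illustrative choices).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.png").convert("RGB")  # original image
mask_image = Image.open("mask.png").convert("RGB")   # white where pixels should be regenerated

result = pipe(
    prompt="a small wooden bench in a park",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```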
Explore Mistral's open-source and commercial models, and leverage Mistral's JSON mode to generate structured LLM responses. Use Mistral's API to call user-defined functions for enhanced LLM capabilities.
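For a rough taste of JSON mode, here is a sketch assuming the v1 mistralai Python SDK and an API key in the MISTRAL_API_KEY environment variable; the model name and prompt are illustrative, and the course may use a different client version.

```python
# A minimal sketch of Mistral's JSON mode (assumes the v1 `mistralai` SDK and an
# API key in MISTRAL_API_KEY; model name and prompt are examples only).
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-latest",
    messages=[{"role": "user",
               "content": "Return a JSON object with fields 'city' and 'country' for Paris."}],
    response_format={"type": "json_object"},  # ask the model to emit valid JSON
)
print(response.choices[0].message.content)
```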
Learn how to quantize any open-source model. Learn to compress models with the Hugging Face Transformers library and the Quanto library.
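A minimal sketch of that workflow, assuming the optimum-quanto package and a small example checkpoint, might look like this:

```python
# Quantize a small open-source model to 8-bit weights (assumes the optimum-quanto
# package; GPT-2 is used only as an illustrative checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.quanto import quantize, freeze, qint8

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

quantize(model, weights=qint8)  # replace linear weights with int8 quantized versions
freeze(model)                   # materialize the quantized weights

inputs = tokenizer("Quantization keeps models small", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```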
Learn how to make safer LLM apps through red teaming. Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.
Understand how LLMs predict the next token and how techniques like KV caching can speed up text generation. Write code to serve LLM applications efficiently to multiple users.
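To make the idea concrete, here is a small sketch of greedy next-token generation that reuses the KV cache between steps; GPT-2 is assumed purely as an example checkpoint.

```python
# Greedy decoding with an explicit KV cache: after the first step, only the newest
# token is fed to the model because earlier keys/values are cached.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

generated = tokenizer("The quick brown fox", return_tensors="pt").input_ids
past_key_values = None  # the KV cache, empty before the first step

with torch.no_grad():
    for _ in range(20):
        step_input = generated if past_key_values is None else generated[:, -1:]
        outputs = model(step_input, past_key_values=past_key_values, use_cache=True)
        past_key_values = outputs.past_key_values
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)

print(tokenizer.decode(generated[0]))
```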
Learn how to easily build AI applications using open-source models and Hugging Face tools. Find and filter open-source models on Hugging Face Hub.
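For example, once you have found a model on the Hub, running it can be as short as the following sketch (the checkpoint name is illustrative):

```python
# Run an open-source model from the Hugging Face Hub via the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Open-source models are easy to try out."))
```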
Learn best practices for prompting and selecting among Meta Llama 2 & 3 models. Interact with Meta Llama 2 Chat, Code Llama, and Llama Guard models.
Get an introduction to tuning and evaluating LLMs using Reinforcement Learning from Human Feedback (RLHF) and fine-tune the Llama 2 model.