Course Information
- Available
- Note: delivery and distribution of the certificate are subject to the policies and arrangements of the course provider.
Course Overview
LLM Fine-Tuning and Reinforcement Learning with SFT, LoRA, DPO, and GRPO on Custom Data (Hugging Face)
In this course, you will step into the world of Large Language Models (LLMs) and learn both fundamental and advanced end-to-end optimization methods. You’ll begin with the SFT (Supervised Fine-Tuning) approach, where you’ll discover how to properly prepare your data and create customized datasets using tokenizers and data collators through practical examples. During the SFT process, you’ll learn the key techniques for making large models lighter and more efficient with LoRA (Low-Rank Adaptation) and quantization, and explore step by step how to integrate them into your projects.
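The LoRA idea mentioned above can be sketched in a few lines. This is a minimal, stdlib-only illustration (not the PEFT library): LoRA freezes the base weight matrix W and trains only a low-rank update B @ A, so the number of trainable parameters drops from d_out × d_in to r × (d_in + d_out). All matrix shapes and the scaling convention alpha / r below are standard, but the helper names are my own.

```python
# Minimal LoRA sketch using plain-Python matrices (an illustration,
# not the PEFT library's implementation).
# Effective weight: W + (alpha / r) * B @ A, with A (r x d_in), B (d_out x r).

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    r = len(A)                      # LoRA rank = number of rows in A
    delta = matmul(B, A)            # low-rank update, same shape as W
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d_out, d_in, r = 4, 4, 1
W = [[0.0] * d_in for _ in range(d_out)]   # frozen base weight
A = [[1.0, 0.0, 0.0, 0.0]]                 # trainable, (r x d_in)
B = [[1.0], [0.0], [0.0], [0.0]]           # trainable, (d_out x r)

W_eff = lora_effective_weight(W, A, B, alpha=2.0)
# Trainable parameters: r*(d_in + d_out) = 8 instead of d_in*d_out = 16.
print(W_eff[0][0])  # 2.0: the scaled B @ A update added onto W
```

In real training (e.g. with PEFT), only A and B receive gradients; after training, the update can be merged back into W so inference costs nothing extra.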
After solidifying the basics of SFT, we will move on to DPO (Direct Preference Optimization). DPO allows you to obtain user-focused results by directly reflecting user feedback in the model. You’ll learn how to format your data for this method, how to design a reward mechanism, and how to share models trained on popular platforms such as Hugging Face. Additionally, you’ll gain a deeper understanding of how data collators work in DPO processes, learning practical techniques for preparing and transforming datasets in various scenarios.
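The preference mechanism behind DPO can be made concrete with its per-pair loss. This is a hedged, stdlib-only sketch of the published DPO objective (not TRL's implementation): given summed log-probabilities of the chosen and rejected responses under the policy and under a frozen reference model, the loss is -log(sigmoid(beta * (margin_chosen - margin_rejected))). The example values are made up for illustration.

```python
import math

# Sketch of the DPO objective. The "implicit reward" of a response is how
# much more the policy likes it than the frozen reference model does.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    chosen_margin = policy_chosen - ref_chosen        # implicit reward, chosen
    rejected_margin = policy_rejected - ref_rejected  # implicit reward, rejected
    return -math.log(sigmoid(beta * (chosen_margin - rejected_margin)))

# When the policy prefers the chosen response more strongly than the
# reference does, the loss drops below -log(0.5) ~= 0.693.
loss = dpo_loss(policy_chosen=-10.0, policy_rejected=-14.0,
                ref_chosen=-12.0, ref_rejected=-12.0)
print(round(loss, 4))
```

Note how no explicit reward model is trained: the preference data (chosen vs. rejected pairs) and the reference model together define the reward implicitly, which is why DPO's expected data format is a prompt plus a chosen and a rejected response.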
The most significant phase of the course is GRPO (Group Relative Policy Optimization), which has been gaining popularity for producing strong results. With GRPO, you will learn methods to optimize model behavior not only at the individual level but also within communities or across different user groups. This makes it more systematic and effective for large language models to serve diverse audiences or purposes. In this course, you’ll learn the fundamental principles of GRPO, and then solidify your knowledge by applying this technique with real-world datasets.
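The "group" in GRPO can be shown with a short, stdlib-only sketch (a simplification, not TRL's implementation): for each prompt, the policy samples a group of completions, each is scored by a reward function, and the advantage of each completion is its reward standardized against the group's mean and standard deviation, so no separate value network is needed.

```python
import statistics

# Simplified GRPO advantage computation: each completion in a sampled group
# is judged relative to its own group, not by a learned value function.

def group_relative_advantages(rewards, eps=1e-8):
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled completions for one prompt, scored by a reward function:
rewards = [1.0, 0.0, 0.5, 0.5]
advs = group_relative_advantages(rewards)
# Above-average completions get positive advantages, below-average get
# negative ones, and the group's advantages sum to (approximately) zero.
print([round(a, 3) for a in advs])
```

These advantages then weight the policy-gradient update, pushing the model toward completions that beat their own group's average.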
Throughout the training, we will cover key topics—LoRA, quantization, SFT, DPO, and especially GRPO—together, supporting each topic with project-oriented applications. By the end of this course, you will be fully equipped to manage every stage with confidence, from end-to-end data preparation to fine-tuning and group-based policy optimization. Developing modern and competitive LLM solutions that focus on both performance and user satisfaction in your own projects will become much easier.
Course Content
- 6 section(s)
- 43 lecture(s)
- Section 1 Introduction
- Section 2 Quantization, LoRA, SFT, Data Collator, Data Preparation…
- Section 3 Adding New Tokens and Creating Templates for the Tokenizer
- Section 4 DPO (Direct Preference Optimization)
- Section 5 GRPO (Group Relative Policy Optimization) Reinforcement Learning
- Section 6 BONUS_New_GRPO_Notebook
What You’ll Learn
- You will grasp the core principles of Large Language Models (LLMs) and the overall structure behind their training processes.
- You will learn the differences between base models and instruct models, as well as the methods for preparing data for each.
- You’ll learn data preprocessing techniques and essential tips, how to identify the special tokens a model requires, and how to understand the expected data formats.
- You’ll gain practical, hands-on experience and detailed knowledge of how LoRA and Data Collator work.
- You’ll gain a detailed understanding of crucial hyperparameters used in training, including their purpose and how they function.
- You’ll practically learn, in detail, how trained LoRA matrices are merged with the base model, as well as key considerations and best practices to follow during the merge process.
- You’ll learn what Direct Preference Optimization (DPO) is, how it works, the expected data format, and the specific scenarios in which it’s used.
- You’ll learn key considerations when preparing data for DPO, as well as understanding how the DPO data collator functions.
- You’ll learn about the specific hyperparameters used in DPO training, their roles, and how they function.
- You’ll learn how to upload your trained model to platforms like Hugging Face and manage hyperparameters effectively after training.
- You’ll learn in detail how Group Relative Policy Optimization (GRPO), a reinforcement learning method, works, including an in-depth understanding of its learning process.
- You’ll learn how to prepare data specifically for Group Relative Policy Optimization (GRPO).
- You’ll learn how to create reward functions—the most critical aspect of Group Relative Policy Optimization (GRPO)—through various practical reward function examples.
- You’ll learn in detail what format data should take when passed to GRPO reward functions, and how to process that data inside the functions.
- You’ll learn how to define rewards within functions and establish clear reward templates for GRPO.
- You’ll practically learn numerous details, such as extracting reward-worthy parts from raw responses and defining rewards based on these extracted segments.
- You’ll learn how to transform an Instruct model into one capable of generating “Chain of Thought” reasoning through GRPO (Group Relative Policy Optimization).
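The reward-function bullets above can be illustrated with a small, hedged example. The function name, tag format, and partial-credit values below are hypothetical (modeled on the common pattern of wrapping answers in XML-like tags); the overall shape follows the TRL convention of a function that takes a batch of completions and returns one float per completion.

```python
import re

# Hypothetical GRPO-style reward function. It extracts the reward-worthy
# segment (the <answer>...</answer> span) from each raw response and scores
# it against a reference answer, returning one float per completion.

ANSWER_RE = re.compile(r"<answer>\s*(.*?)\s*</answer>", re.DOTALL)

def extract_answer(completion):
    """Pull the answer segment out of the raw model response, if present."""
    match = ANSWER_RE.search(completion)
    return match.group(1) if match else None

def correctness_reward(completions, references):
    rewards = []
    for completion, reference in zip(completions, references):
        answer = extract_answer(completion)
        if answer is None:
            rewards.append(0.0)   # no parsable answer: no reward
        elif answer == reference:
            rewards.append(1.0)   # correct answer: full reward
        else:
            rewards.append(0.2)   # well-formatted but wrong: partial credit
    return rewards

completions = ["<think>2+2</think><answer>4</answer>", "The answer is 4"]
print(correctness_reward(completions, references=["4", "4"]))  # [1.0, 0.0]
```

Layering rewards like this—one term for format, one for correctness—is a common way to coax an instruct model into emitting explicit chain-of-thought structure under GRPO.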
Reviews
- Mridul Jain
great course. thanks
- Nafi Ahmed
This course could have definitely explained the RL techniques in more detail (theory first, then tie it to the implementation).
- Bruce Jung
As a non-native English speaker from Korea, I relied on translation tools for the subtitles to follow along with the lectures. (And full disclosure, I'm using a translator to write this review as well!) After being unemployed for a while, I got some amazing news a few days ago: I officially landed a job! My new role will be focused on fine-tuning, and I can honestly say this course was a game-changer during my interviews. I start this coming Monday, and I'm confident that everything I learned here will be incredibly valuable in my day-to-day work. I'm actually re-watching the entire course from the start, and concepts that were a bit fuzzy the first time around are really clicking into place now. If you're serious about mastering fine-tuning, I can't recommend this course enough.
- Romain Barraud
Excellent course. The instructor goes beyond the traditional vanilla training by building a real case where data preparation is needed.