examples/notebooks/grpo_trl_lora_qlora.ipynb
Easily fine-tune Large Language Models (LLMs) or Vision-Language Models (VLMs) with LoRA or QLoRA using the Transformers Reinforcement Learning (TRL) library by Hugging Face and Group Relative Policy Optimization (GRPO) — all within a free Google Colab notebook powered by a T4 GPU.
Thanks to the built-in memory and training optimizations in TRL, including LoRA, quantization, gradient checkpointing, and optimized attention kernels, it is possible to fine-tune a 7B model on a free T4 with a ~7× reduction in memory consumption compared to naive FP16 training.
Learn how to perform GRPO (Group Relative Policy Optimization) with LoRA/QLoRA using TRL.
This table demonstrates how progressively enabling efficiency techniques affects memory usage and training throughput across different hardware configurations.
The techniques range from naive FP16 training to LoRA, quantization, Liger kernels, paged_adamw_8bit, and gradient checkpointing.
| Configuration | LoRA | Quant | Liger | Optimizer | Grad. Ckpt | attn_impl | VRAM (T4) | VRAM (A100-40GB) | VRAM (A100-80GB) | Speed, it/s (T4) | Speed, it/s (A100-40GB) | Speed, it/s (A100-80GB) | Status (T4) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Worst (naive FP16) | ❌ | ❌ | ❌ | AdamW | ❌ | eager | OOM | OOM | 62 GB | - | - | 0.06 | ❌ |
| Best (all optimizations) | ✅ | ✅ | ✅ | paged_adamw_8bit | ✅ | sdpa | 9.2 GB | 9.6 GB | 9.6 GB | 0.01 | 0.03 | 0.04 | ✅ |
With all efficiency techniques enabled, memory usage on Colab T4 is reduced by ~7×, making it possible to fine-tune a 7B model on free Colab where naive FP16 training would fail.
A small trade-off in training speed is observed, but the VRAM reduction is the key enabler. For faster training on compatible hardware, vLLM can also be leveraged.
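As a quick check on the ~7× figure, we can divide the naive-FP16 footprint by the fully optimized one using the numbers from the table above (62 GB on the A100-80GB, the only configuration that didn't OOM, vs. ~9.2 GB on the T4):

```python
# Numbers taken from the comparison table above
naive_fp16_gb = 62.0  # naive FP16 on A100-80GB
optimized_gb = 9.2    # all optimizations enabled, on a free T4

reduction = naive_fp16_gb / optimized_gb
print(f"~{reduction:.1f}x less memory")  # ~6.7x, i.e. roughly 7x
```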
💡 Note: For a fair comparison, the number of generations and the batch size were not changed.
We'll install TRL with the PEFT extra, which ensures all main dependencies such as Transformers and PEFT (a package for parameter-efficient fine-tuning, e.g., LoRA/QLoRA) are included. Additionally, we'll install trackio to log and monitor our experiments, bitsandbytes to enable quantization of LLMs, reducing memory consumption for both inference and training, and liger-kernel for more efficient training.
!pip install -Uq "trl[peft]" bitsandbytes trackio math_verify liger-kernel
Log in to your Hugging Face account to save your fine-tuned model, track your experiment results directly on the Hub, or access gated models. You can find your access token on your account settings page.
from huggingface_hub import notebook_login
notebook_login()
In this step, we load the AI-MO/NuminaMath-TIR dataset from the Hugging Face Hub using the datasets library.
This dataset focuses on mathematical reasoning, featuring problems that require step-by-step logical solutions.
By fine-tuning a model that does not yet exhibit strong reasoning capabilities, it can learn to generate structured reasoning steps, enhancing both the model's accuracy and interpretability on math-related tasks.
For efficiency, we'll load only a small portion of the training split:
from datasets import load_dataset
dataset_name = 'AI-MO/NuminaMath-TIR'
train_dataset = load_dataset(dataset_name, split='train[:5%]')
Let's check the structure of the dataset:
print(train_dataset)
Let's check one sample:
print(train_dataset[0])
We will adapt our dataset to a conversational format using a custom system prompt, guiding the LLM to generate both step-by-step reasoning and the final answer.
SYSTEM_PROMPT = (
    "A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant "
    "first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning "
    "process is enclosed strictly within <think> and </think> tags. "
    "After closing </think>, the assistant MUST provide the final answer in plain text."
)
def make_conversation(example):
    return {
        "prompt": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": example["problem"]},
        ],
    }
train_dataset = train_dataset.map(make_conversation)
Let's take a look at an example:
print(train_dataset[0]['prompt'])
We'll remove the messages and problem columns, since we only need the custom prompt column plus the solution column to verify the generated answer.
train_dataset = train_dataset.remove_columns(['messages', 'problem'])
print(train_dataset)
Below, choose your preferred model. All of the options have been tested on free Colab instances.
💡 Note: Some models, such as Qwen2.5 and Qwen3, are known to have been pretrained on data that improves their math performance. Be cautious when selecting the appropriate model for training to ensure meaningful fine-tuning results (source).
# Select one model below by uncommenting the line you want to use 👇
## Qwen
model_id, output_dir = "Qwen/Qwen2-7B-Instruct", "t4-Qwen2-7B-Instruct-GRPO" # ✅ ~9.2GB VRAM
# model_id, output_dir = "unsloth/qwen3-14b-unsloth-bnb-4bit", "qwen3-14b-unsloth-bnb-4bit-GRPO" # ⚠️ OOM with this config; fits if GRPO params are reduced
# model_id, output_dir = "Qwen/Qwen3-8B", "Qwen3-8B-GRPO" # ✅ ~9.9GB VRAM
# model_id, output_dir = "Qwen/Qwen2.5-7B-Instruct", "Qwen2.5-7B-Instruct-GRPO" # ✅ ~9.2GB VRAM
## Llama
# model_id, output_dir = "meta-llama/Llama-3.2-3B-Instruct", "Llama-3.2-3B-Instruct-GRPO" # ✅ ~5.7GB VRAM
# model_id, output_dir = "meta-llama/Llama-3.1-8B-Instruct", "Llama-3.1-8B-Instruct-GRPO" # ✅ ~9.5GB VRAM
## LFM2.5
# model_id, output_dir = "LiquidAI/LFM2.5-1.2B-Instruct", "LFM2.5-1.2B-Instruct-GRPO" # ✅ ~1.12 GB VRAM
This notebook can be used with two fine-tuning methods. By default, it is set up for QLoRA, which includes quantization using BitsAndBytesConfig. If you prefer to use standard LoRA without quantization, simply comment out the BitsAndBytesConfig configuration (training without quantization consumes more memory).
Let's load the selected model using transformers, configuring QLoRA via bitsandbytes (you can remove it if doing LoRA). We don't need to configure the tokenizer since the trainer takes care of that automatically.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="sdpa",  # Change to Flash Attention if your GPU supports it
    dtype="float32",  # Change to bfloat16 if your GPU supports it
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,  # Load the model in 4-bit precision to save memory
        bnb_4bit_compute_dtype=torch.float16,  # Data type used for internal computations in quantization
        bnb_4bit_use_double_quant=True,  # Use double quantization to improve accuracy
        bnb_4bit_quant_type="nf4",  # Type of quantization; "nf4" is recommended for recent LLMs
    ),
)
The following cell defines LoRA (or QLoRA if needed). When training with LoRA/QLoRA, we use a base model (the one selected above) and, instead of modifying its original weights, we fine-tune a LoRA adapter, a lightweight layer that enables efficient and memory-friendly training. The target_modules specify which parts of the model (e.g., attention or projection layers) will be adapted by LoRA during fine-tuning.
from peft import LoraConfig
# You may need to update `target_modules` depending on the architecture of your chosen model.
# For example, different LLMs might have different attention/projection layer names.
peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
GRPO requires reward functions to guide the learning process. For convenience, we can directly load pre-defined rewards from trl.rewards, which already includes a collection of ready-to-use rewards.
If you want to create your own custom reward functions to teach the model, a reward function is simply a Python function that takes the generated completions and returns a list of floats. For example, the following function, which we use in this notebook, rewards completions that correctly follow the <think> format:
import re

def think_format_reward(completions: list[list[dict[str, str]]], **kwargs) -> list[float]:
    pattern = r"^<think>(?!.*<think>)(.*?)</think>.*$"
    completion_contents = [completion[0]["content"] for completion in completions]
    matches = [re.match(pattern, content, re.DOTALL | re.MULTILINE) for content in completion_contents]
    return [1.0 if match else 0.0 for match in matches]
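To see this reward in action, we can apply the same regex to a couple of hypothetical completion strings (illustrative only, not from the dataset): a well-formed completion earns 1.0, while one without the tags earns 0.0.

```python
import re

pattern = r"^<think>(?!.*<think>)(.*?)</think>.*$"

# Hypothetical assistant completions (illustrative only)
good = "<think>2 + 2 = 4</think> The answer is 4."
bad = "The answer is 4."

rewards = [1.0 if re.match(pattern, c, re.DOTALL | re.MULTILINE) else 0.0 for c in (good, bad)]
print(rewards)  # [1.0, 0.0]
```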
In this notebook, we will use both think_format_reward, which rewards completions that correctly follow the <think> format, and reasoning_accuracy_reward, which evaluates the correctness of the model's solution to the mathematical problem. Together, these rewards guide the model to generate structured reasoning while producing accurate answers.
from trl.rewards import think_format_reward, reasoning_accuracy_reward
We'll configure GRPO using GRPOConfig, keeping the parameters minimal so that the training can run on a free Colab instance. You can adjust these settings if you have access to more resources. For a complete list of available parameters and their descriptions, refer to the TRL GRPOConfig documentation.
💡 Note: TRL supports using vLLM for generation during GRPO training, which can significantly speed up training. However, it increases VRAM usage, since a separate vLLM process handles generation. In this notebook we do not enable vLLM because we are using QLoRA: the quantized model weights would have to be synced to vLLM at every step, which can cause weight-precision issues and make convergence harder. The configuration below includes the vLLM parameters in case you want to experiment with it. Learn more about vLLM integration in TRL here.
from trl import GRPOConfig
# Configure training arguments using GRPOConfig
training_args = GRPOConfig(
    # Training schedule / optimization
    learning_rate=2e-5,  # Learning rate for the optimizer
    # num_train_epochs=1,
    max_steps=500,  # Total number of optimization steps. For full training runs, use `num_train_epochs` instead
    # Parameters that control GRPO training (you can adapt them)
    per_device_train_batch_size=8,
    max_completion_length=256,  # default: 256 # Max completion length produced during training
    num_generations=8,  # default: 8 # Number of generations produced per prompt for comparison
    # Optimizations
    optim="paged_adamw_8bit",  # Optimizer
    use_liger_kernel=True,  # Enable Liger kernel optimizations for faster training
    # Parameters related to reporting and saving
    output_dir=output_dir,  # Where to save model checkpoints and logs
    logging_steps=10,  # Log training metrics every N steps
    report_to="trackio",  # Experiment tracking tool
    trackio_space_id=output_dir,  # HF Space where the experiment tracking will be saved
    log_completions=False,  # Whether to log model completions during training
    # Hub integration
    push_to_hub=True,  # Automatically push the trained model to the Hugging Face Hub
    # The model will be saved under your Hub account in the repository named `output_dir`
    # vLLM params
    # use_vllm=False,  # Enable vLLM for faster generation during training
    # vllm_mode='colocate',
    # vllm_gpu_memory_utilization=0.1,
    # vllm_enable_sleep_mode=True
)
Configure the GRPOTrainer by passing the previously defined training_args. To keep memory usage low, we are not using an evaluation dataset, but you can include one if desired. We also provide the reward functions that were imported earlier to guide the training process.
from trl import GRPOTrainer
trainer = GRPOTrainer(
    model=model,
    reward_funcs=[think_format_reward, reasoning_accuracy_reward],
    args=training_args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
Show memory stats before training
gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")
And train!
Training on a T4 in Colab with the configuration defined in this notebook takes around 13 hours. If you're just experimenting, you can try the following quicker task (source):
dataset = load_dataset("mlabonne/smoltldr")
# Reward function
ideal_length = 50
def reward_len(completions, **kwargs):
    return [-abs(ideal_length - len(completion)) for completion in completions]
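As a sanity check, this length-based reward peaks at 0 for a completion of exactly `ideal_length` characters and decreases linearly as the length deviates in either direction (the sample strings below are illustrative):

```python
ideal_length = 50  # target completion length, as in the snippet above

# Hypothetical completions of 50, 30, and 80 characters
completions = ["x" * 50, "x" * 30, "x" * 80]
rewards = [-abs(ideal_length - len(c)) for c in completions]
print(rewards)  # [0, -20, -30]
```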
trainer_stats = trainer.train()
Show memory stats after training
used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
used_percentage = round(used_memory / max_memory * 100, 3)
lora_percentage = round(used_memory_for_lora / max_memory * 100, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
print(f"Peak reserved memory % of max memory = {used_percentage} %.")
print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")
The training procedure generates both standard training logs and trackio logs, which help us monitor the training progress.
In this step, we save the fine-tuned model both locally and to the Hugging Face Hub using the credentials from your account.
trainer.save_model(output_dir)
trainer.push_to_hub(dataset_name=dataset_name)
Now, let's test our fine-tuned model by loading the LoRA/QLoRA adapter and performing inference. We'll start by loading the base model, then attach the adapter to it, creating the final fine-tuned model ready for evaluation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
adapter_model = f"sergiopaniego/{output_dir}" # Replace with your HF username or organization
base_model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
Let's test with one example from the dataset's test split:
from datasets import load_dataset
dataset_name = 'AI-MO/NuminaMath-TIR'
test_dataset = load_dataset(dataset_name, split='test[:1%]')
test_dataset = test_dataset.map(make_conversation)
test_dataset = test_dataset.remove_columns(['messages', 'problem'])
test_dataset[0]['prompt']
Let's first check the base model's output, without the adapter.
messages = test_dataset[0]['prompt']
text = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
model_inputs = tokenizer([text], return_tensors="pt").to(base_model.device)
generated_ids = base_model.generate(
    **model_inputs,
    max_new_tokens=256
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
# Decode and extract model response
generated_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print(generated_text)
The base model neither produced reasoning traces nor provided a correct answer. Let's now load the fine-tuned model and check its performance.
fine_tuned_model = PeftModel.from_pretrained(base_model, adapter_model)
text = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
model_inputs = tokenizer([text], return_tensors="pt").to(fine_tuned_model.device)
generated_ids = fine_tuned_model.generate(
    **model_inputs,
    max_new_tokens=256
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
# Decode and extract model response
generated_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print(generated_text)
The final answer is correct!
You can use Transformer models with vLLM to serve them in real-world applications. Learn more here.
To serve the model via vLLM, the repository must contain the merged model (base model + LoRA adapter), so we merge the adapter into the base model and upload the result first.
model_merged = fine_tuned_model.merge_and_unload()
save_dir = f"{output_dir}-merged"
model_merged.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
model_merged.push_to_hub(f"sergiopaniego/{output_dir}-merged") # Replace with your HF username or organization
tokenizer.push_to_hub(f"sergiopaniego/{output_dir}-merged") # Replace with your HF username or organization
Use vLLM to run your model and generate text efficiently in real-time. This allows you to test and deploy your fine-tuned models with low latency and high throughput.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import torch
llm = LLM(
    model=f"sergiopaniego/{output_dir}-merged",  # Replace with your HF username or organization
    model_impl="transformers",  # Select the transformers model implementation
    max_model_len=256,  # Reduced for efficiency
    dtype=torch.float16
)
hf_tokenizer = AutoTokenizer.from_pretrained(f"sergiopaniego/{output_dir}-merged") # Replace with your HF username or organization
messages = test_dataset[0]['prompt']
# Alternatively, use llm.chat()
prompt = hf_tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
outputs = llm.generate(
    {"prompt": prompt},
    sampling_params=SamplingParams(max_tokens=256),
)
for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)