docs/source/rapidfire_integration.md
RapidFire AI is an open-source experiment execution framework that integrates with TRL to replace "train one configuration at a time" with real-time, side-by-side comparison of many configurations on the same GPU(s). This lets you iterate on hyperparameters, LoRA settings, prompt schemes, and ablations 16–24× faster with no extra hardware.
Links: GitHub · Docs · Try in Colab
When fine-tuning or post-training with TRL, you typically need to compare many candidate configurations to find one that works well:
| Comparing N training configs on the same GPU(s) | TRL alone | TRL + RapidFire AI |
|---|---|---|
| Training strategy | Run N configs sequentially | Run N configs concurrently |
| When can you compare configs? | After all runs finish | Live, from the first chunk |
| Stop losers / clone winners mid-training | No | Yes (Interactive Control Operations) |
RapidFire AI employs adaptive chunk-based scheduling: it divides the training data into chunks and cycles every configuration through the GPU(s) one chunk at a time (see the scheduling timeline later on this page). This enables live, apples-to-apples comparison of all configurations from the first chunk, along with interactive control to stop, resume, or clone runs mid-training.
Install RapidFire AI from PyPI:

```bash
pip install rapidfireai
```
Once installed, authenticate with Hugging Face and initialize RapidFire AI:
```bash
# Authenticate with Hugging Face
hf auth login --token YOUR_TOKEN

# Workaround for a current issue: https://github.com/huggingface/xet-core/issues/527
pip uninstall -y hf-xet

# Initialize RapidFire AI
rapidfireai init

# Start the RapidFire AI server
rapidfireai start
```
The dashboard will be available at http://localhost:8853, where you can monitor and control experiments in real time.
Here's a complete example showing how to train multiple SFT configurations concurrently:
```python
from datasets import load_dataset
from rapidfireai import Experiment
from rapidfireai.automl import List, RFGridSearch, RFModelConfig, RFLoraConfig, RFSFTConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load dataset
dataset = load_dataset("bitext/Bitext-customer-support-llm-chatbot-training-dataset")
train_dataset = dataset["train"].select(range(128)).shuffle(seed=42)
eval_dataset = dataset["train"].select(range(128, 152)).shuffle(seed=42)  # held-out rows, disjoint from the training slice

# Define data formatting function
def formatting_function(row):
    return {
        "prompt": [
            {"role": "system", "content": "You are a helpful customer support assistant."},
            {"role": "user", "content": row["instruction"]},
        ],
        "completion": [
            {"role": "assistant", "content": row["response"]}
        ],
    }

# Initialize experiment
experiment = Experiment(experiment_name="sft-customer-support")

# Define multiple LoRA configurations to compare
peft_configs = List([
    RFLoraConfig(r=8, lora_alpha=16, lora_dropout=0.1,
                 target_modules=["q_proj", "v_proj"], bias="none"),
    RFLoraConfig(r=32, lora_alpha=64, lora_dropout=0.1,
                 target_modules=["q_proj", "k_proj", "v_proj", "o_proj"], bias="none"),
])

# Define multiple training configurations
# 2 base configs × 2 PEFT configs = 4 total training runs
config_set = List([
    RFModelConfig(
        model_name="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        peft_config=peft_configs,
        training_args=RFSFTConfig(  # wraps TRL's SFTConfig
            learning_rate=1e-3,
            per_device_train_batch_size=4,
            max_steps=128,
            fp16=True,
        ),
        model_type="causal_lm",
        model_kwargs={"device_map": "auto", "torch_dtype": "auto", "use_cache": False},
        formatting_func=formatting_function,
    ),
    RFModelConfig(
        model_name="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        peft_config=peft_configs,
        training_args=RFSFTConfig(
            learning_rate=1e-4,  # different learning rate
            per_device_train_batch_size=4,
            max_steps=128,
            fp16=True,
        ),
        model_type="causal_lm",
        model_kwargs={"device_map": "auto", "torch_dtype": "auto", "use_cache": False},
        formatting_func=formatting_function,
    ),
])

# Define model creation function
def create_model(model_config):
    model = AutoModelForCausalLM.from_pretrained(
        model_config["model_name"],
        **model_config["model_kwargs"],
    )
    tokenizer = AutoTokenizer.from_pretrained(model_config["model_name"])
    return (model, tokenizer)

# Create grid search over all configurations
config_group = RFGridSearch(configs=config_set, trainer_type="SFT")

# Run all 4 configurations concurrently with chunk-based scheduling
experiment.run_fit(config_group, create_model, train_dataset, eval_dataset,
                   num_chunks=4, seed=42)

# End experiment
experiment.end()
```
When you run this example, all four configurations train concurrently on the same GPU(s) under chunk-based scheduling, and their metrics appear side by side on the dashboard at http://localhost:8853. This delivers 16–24× higher throughput compared to training each configuration sequentially!
Use RFSFTConfig as a drop-in replacement for SFTConfig:
```python
from rapidfireai.automl import RFSFTConfig

training_args = RFSFTConfig(
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    max_length=512,
    # ... all other SFTConfig parameters are supported
)
```
Example Notebook: SFT for Customer Support
Use RFDPOConfig as a drop-in replacement for DPOConfig:
```python
from rapidfireai.automl import RFDPOConfig

training_args = RFDPOConfig(
    beta=0.1,
    loss_type="sigmoid",
    max_length=1024,
    learning_rate=5e-4,
    # ... all other DPOConfig parameters are supported
)
```
Example Notebook: DPO for Preference Alignment
Use RFGRPOConfig as a drop-in replacement for GRPOConfig:
```python
from rapidfireai.automl import RFGRPOConfig

training_args = RFGRPOConfig(
    learning_rate=5e-6,
    num_generations=8,
    max_completion_length=256,
    # ... all other GRPOConfig parameters are supported
)
```
Example Notebook: GRPO for Math Reasoning
RapidFire AI divides training data into chunks and alternates between configurations:
```text
GPU Timeline (Single GPU):
Chunk 1: [Config A] → [Config B] → [Config C] → [Config D]
Chunk 2: [Config A] → [Config B] → [Config C] → [Config D]
Chunk 3: [Config A] → [Config B] → [Config C] → [Config D]
...
```
This approach maximizes GPU utilization and enables early comparison of configurations while maintaining training stability through automatic checkpointing.
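To make the mechanism concrete, here is a toy sketch of chunked round-robin scheduling. This is illustrative pseudologic only, not RapidFire AI's actual scheduler:

```python
# Toy illustration of chunk-based round-robin scheduling (not the real scheduler).
# Assumes each config is checkpointed when swapped out and resumed on its next turn.
def round_robin_schedule(configs, num_chunks):
    for chunk_idx in range(num_chunks):
        for config in configs:
            # In RapidFire AI, "train" means: restore the config's checkpoint,
            # train it on this chunk of data, then checkpoint and swap it out.
            yield (chunk_idx, config)

for chunk_idx, config in round_robin_schedule(["A", "B", "C", "D"], num_chunks=3):
    print(f"Chunk {chunk_idx + 1}: train config {config}")
```

Because every configuration finishes chunk 1 before any configuration starts chunk 2, all runs produce comparable metrics almost immediately.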
Through the RapidFire AI dashboard, you can dynamically control running experiments with Interactive Control Operations (IC Ops): stop, resume, or clone any run while it trains. This enables adaptive experimentation: stop underperforming configs early and clone promising ones with tweaked hyperparameters.
Use RFGridSearch or RFRandomSearch to automatically generate configuration combinations:
```python
from rapidfireai.automl import RFGridSearch, RFRandomSearch  # RFRandomSearch assumed to live alongside RFGridSearch

# Grid search: tests all combinations
config_group = RFGridSearch(configs=config_list, trainer_type="SFT")

# Random search: samples N configurations
config_group = RFRandomSearch(configs=config_list, trainer_type="DPO", num_samples=10)
```
Full support for parameter-efficient fine-tuning:
```python
from peft import TaskType
from rapidfireai.automl import RFLoraConfig

lora_config = RFLoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
)
```
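As a sanity check on adapter size, each LoRA-adapted d×d projection adds two matrices of shape d×r and r×d, i.e. 2·d·r trainable parameters. The sketch below applies this to the config above using illustrative dimensions (hidden size 2048, 22 layers, roughly TinyLlama-sized), not values read from a real checkpoint:

```python
# Back-of-the-envelope LoRA size estimate (assumes square d×d projections).
# hidden_dim and num_layers are illustrative values, not read from a real model.
hidden_dim, num_layers = 2048, 22
r, num_target_modules = 64, 4  # q_proj, k_proj, v_proj, o_proj

# Each adapted d×d projection adds A (d×r) and B (r×d): 2 * d * r parameters.
params_per_module = 2 * hidden_dim * r
trainable = params_per_module * num_target_modules * num_layers
print(f"~{trainable / 1e6:.1f}M trainable LoRA parameters")  # ~23.1M
```

This kind of estimate helps judge how much heavier the r=64 config is than, say, the r=8 config from the quick-start example.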
Define multiple reward functions for GRPO training:

```python
import re

def correctness_reward(prompts, completions, answer, **kwargs):
    """Reward correct answers."""
    responses = [completion[0]["content"] for completion in completions]
    # extract_answer is a user-supplied parser (see the sketch below).
    extracted = [extract_answer(r) for r in responses]
    return [2.0 if r == a else 0.0 for r, a in zip(extracted, answer)]

def format_reward(completions, **kwargs):
    """Reward properly formatted responses."""
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    responses = [completion[0]["content"] for completion in completions]
    matches = [re.match(pattern, r) for r in responses]
    return [0.5 if match else 0.0 for match in matches]

# Use in model config
config = RFModelConfig(
    reward_funcs=[correctness_reward, format_reward],
    # ... other parameters
)
```
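The correctness reward above assumes an extract_answer helper that is not shown. A minimal sketch, assuming completions wrap the final answer in `<answer>` tags as the format reward expects:

```python
import re

def extract_answer(text: str) -> str:
    """Pull the text inside the first <answer>...</answer> block, if any."""
    match = re.search(r"<answer>(.*?)</answer>", text, flags=re.DOTALL)
    return match.group(1).strip() if match else ""
```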
RapidFire AI automatically detects and utilizes all available GPUs. By default, the scheduler distributes independent configurations across the GPUs (each configuration trains on its own GPU), so no special setup is required to run N configs on N GPUs concurrently.
For models that do not fit on a single GPU, RapidFire AI also supports Fully Sharded Data Parallel (FSDP) to shard a single configuration across multiple GPUs — see the next section.
When a model is too large for a single GPU, enable FSDP directly through the training args of RFSFTConfig or RFDPOConfig — the same fsdp and fsdp_config fields exposed by Hugging Face TrainingArguments:
```python
from rapidfireai.automl import RFModelConfig, RFSFTConfig, RFLoraConfig

model_config = RFModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    peft_config=RFLoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        bias="none",
    ),
    training_args=RFSFTConfig(
        learning_rate=2e-4,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        fsdp="full_shard auto_wrap",
        fsdp_config={
            "sharding_strategy": "FULL_SHARD",
            "auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
            "backward_prefetch": "backward_pre",
            "forward_prefetch": True,
            "use_orig_params": False,
            "cpu_ram_efficient_loading": True,
            "offload_params": True,
            "sync_module_states": True,
            "limit_all_gathers": True,
        },
    ),
    model_type="causal_lm",
    model_kwargs={"torch_dtype": "auto"},
)
```
Key points:

- The FSDP settings are passed through to the underlying TRL trainer unchanged, so anything valid in Hugging Face TrainingArguments works here.
- The default scheduler runs independent configurations on separate GPUs; FSDP instead shards one configuration across multiple GPUs, which is what you want when the model does not fit on a single GPU.
RapidFire AI supports three metric logging backends that can be used individually or together: MLflow (the default for local installs), TensorBoard (the default in Google Colab), and Trackio.
Select one or more backends at server startup with the --tracking-backends flag:
```bash
# MLflow only (default on local installs)
rapidfireai start --tracking-backends mlflow

# TensorBoard only
rapidfireai start --tracking-backends tensorboard

# Any combination
rapidfireai start --tracking-backends mlflow tensorboard trackio
```
Equivalent environment variables are also available:
- `RF_MLFLOW_ENABLED` (default `true`; `false` in Colab)
- `RF_TENSORBOARD_ENABLED` (default `false`; `true` in Colab)
- `RF_TRACKIO_ENABLED` (default `false`)

All three backends receive the same metrics (loss, evaluation scores, learning rate, etc.) and respect IC Ops run lifecycle events, so you can use, for example, Trackio for lightweight sharing alongside MLflow for a full local dashboard.
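For example, the same selection can be made with environment variables before starting the server (shown here for MLflow plus TensorBoard):

```bash
# Enable MLflow and TensorBoard through environment variables
# (equivalent to: rapidfireai start --tracking-backends mlflow tensorboard)
export RF_MLFLOW_ENABLED=true
export RF_TENSORBOARD_ENABLED=true
rapidfireai start
```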
RapidFire AI runs on free Google Colab T4 GPUs, with tutorial notebooks for SFT, DPO, GRPO, and RAG / context-engineering workflows. In Colab, TensorBoard is the default tracking backend (MLflow is disabled for simplicity), and the usual rapidfireai init / rapidfireai start commands run directly from notebook cells — no terminal access required.
Get started: RapidFire AI in Google Colab.
The num_chunks parameter controls swap frequency:
```python
# Fewer chunks = less overhead, less frequent comparison
experiment.run_fit(..., num_chunks=2)

# More chunks = more overhead, more frequent comparison
experiment.run_fit(..., num_chunks=16)
```
Rule of thumb: Start with num_chunks=4 and adjust based on dataset size and number of configurations.
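As rough arithmetic for the tradeoff (each configuration visits each chunk once, per the timeline above), the number of checkpoint-swap cycles scales with num_chunks × number of configs:

```python
# Illustrative swap-count arithmetic; real overhead depends on checkpoint I/O.
num_configs = 4
for num_chunks in (2, 4, 16):
    swaps = num_chunks * num_configs  # one train/checkpoint/swap cycle per config per chunk
    print(f"num_chunks={num_chunks}: {swaps} swap cycles per pass over the data")
```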
For large models, use quantization:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

model_kwargs = {
    "quantization_config": bnb_config,
    "device_map": "auto",
}
```

Pass this model_kwargs dict to RFModelConfig exactly as in the quick-start example above.
Based on internal benchmarks comparing sequential vs. RapidFire AI concurrent training:
| Scenario | Sequential Time | RapidFire AI Time | Speedup |
|---|---|---|---|
| 4 configs, 1 GPU | 120 min | 7.5 min | 16× |
| 8 configs, 1 GPU | 240 min | 12 min | 20× |
| 4 configs, 2 GPUs | 60 min | 4 min | 15× |
| 8 configs, 4 GPUs | 60 min | 3 min | 20× |
*Benchmarks performed on NVIDIA A100 40GB with TinyLlama-1.1B and Llama-3.2-1B models.*
For troubleshooting guidance, see the RapidFire AI Troubleshooting Guide.
Learn more about RapidFire AI in their official repository and documentation.