docs/source/en/hpo_train.md
Hyperparameters like the learning rate, batch size, and number of epochs significantly affect training results. [`~Trainer.hyperparameter_search`] finds the best combination by running multiple trials, each with a different set of hyperparameter values, and returning the run that achieved the best objective.
Each trial initializes a fresh model with `model_init`, samples new hyperparameters, runs a full training loop, and reports an objective to the search backend. The backend uses each objective to inform the next trial. After all trials complete, the best hyperparameters are returned in a [`~trainer.utils.BestRun`].
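The trial loop can be sketched with a toy random search in plain Python (no backend involved; `model_init`, `train_and_evaluate`, and the loss shape are made up for illustration):

```py
import random

random.seed(0)

def model_init():
    # fresh "model" per trial; real code would reload pretrained weights
    return {"weights": "initial"}

def train_and_evaluate(model, lr):
    # stand-in for a full training loop; pretend loss is minimized near lr=3e-5
    return abs(lr - 3e-5) * 1e4 + 0.3

best = None
for trial in range(10):
    model = model_init()                       # fresh model each trial
    lr = 10 ** random.uniform(-6, -4)          # sample a hyperparameter
    objective = train_and_evaluate(model, lr)  # report objective to the "backend"
    if best is None or objective < best[0]:    # keep the best run so far
        best = (objective, {"learning_rate": lr})
```

A real backend replaces the random sampling with a smarter strategy that conditions each trial on the objectives reported so far.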
Start each trial with a fresh model to avoid carrying over state from previous runs. `model_init` is called at the start of each trial and returns a new model instance, so every trial begins from the same initial weights.
```py
from transformers import AutoModelForCausalLM, Trainer

def model_init(trial):
    return AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

trainer = Trainer(
    model_init=model_init,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
```
Don't pass `model` and `model_init` together, or [`Trainer`] raises an error.
Create a function that defines the search space. The format depends on the backend. If you don't define a `hp_space` function, the default search space covers `learning_rate`, `num_train_epochs`, and `per_device_train_batch_size`.
```bash
# install one of these hyperparameter search backends
pip install optuna
pip install wandb
pip install "ray[tune]"
```
Optuna is a lightweight framework for hyperparameter optimization.
```py
def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
    }
```
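With `log=True`, Optuna samples uniformly in log space, so each order of magnitude between 1e-6 and 1e-4 is equally likely rather than the largest decade dominating. A stdlib sketch of the same distribution (the `log_uniform` helper is illustrative, not Optuna API):

```py
import math
import random

random.seed(0)

def log_uniform(low, high):
    # uniform in log space: sample the exponent, then exponentiate
    return math.exp(random.uniform(math.log(low), math.log(high)))

samples = [log_uniform(1e-6, 1e-4) for _ in range(1000)]
# every sample stays in [1e-6, 1e-4], and small magnitudes
# are sampled about as often as large ones
```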
Ray Tune is a scalable hyperparameter tuning library that can also distribute trials across multiple machines.
```py
from ray import tune

def hp_space(trial):
    return {
        "learning_rate": tune.loguniform(1e-6, 1e-4),
        "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
    }
```
Weights & Biases is an experiment tracking platform with built-in hyperparameter search. It supports Bayesian, random, and grid search strategies.
```py
def hp_space(trial):
    return {
        "method": "random",
        "metric": {"name": "objective", "goal": "minimize"},
        "parameters": {
            "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
            "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
        },
    }
```
Provide an optional `compute_objective` function to define the optimization target. It defaults to `eval_loss` if present, or the sum of all metric values otherwise. Pass an explicit function to avoid relying on this fallback. The search backend optimizes the objective over `n_trials` runs in a given direction.
```py
def compute_objective(metrics):
    return metrics["eval_loss"]

best_run = trainer.hyperparameter_search(
    hp_space=hp_space,
    compute_objective=compute_objective,
    n_trials=30,  # how many trials to run
    direction="minimize",  # or "maximize" for metrics like accuracy/F1
    backend="optuna",  # "optuna", "ray", or "wandb"
)
```
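The documented fallback behavior can be sketched as follows (a simplified stand-in, not the exact [`Trainer`] implementation, which also strips runtime metrics before summing):

```py
def default_objective(metrics):
    # use eval_loss when it is reported; otherwise sum all metric values
    metrics = dict(metrics)  # don't mutate the caller's dict
    loss = metrics.pop("eval_loss", None)
    return loss if loss is not None else sum(metrics.values())

default_objective({"eval_loss": 0.4, "eval_accuracy": 0.9})  # -> 0.4
default_objective({"eval_accuracy": 0.5, "eval_f1": 0.25})   # -> 0.75 (sum)
```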
[`~Trainer.hyperparameter_search`] returns a [`~trainer.utils.BestRun`] containing the objective value and best hyperparameter combination.
```py
best_run = trainer.hyperparameter_search(...)
best_run.objective  # 0.38 (best eval loss)
best_run.hyperparameters  # {"learning_rate": 5e-5, "num_train_epochs": 4, ...}
```
Apply the best hyperparameters to [`TrainingArguments`] and retrain on the full dataset.
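One way to apply them is to copy each winning value back onto the training arguments before the final run. The sketch below uses a toy `Args` dataclass as a stand-in for [`TrainingArguments`] so it runs without `transformers`; the `setattr` loop works the same on the real object as long as each key matches a [`TrainingArguments`] field:

```py
from dataclasses import dataclass

# toy stand-in for TrainingArguments, for illustration only
@dataclass
class Args:
    learning_rate: float = 5e-5
    num_train_epochs: int = 3

args = Args()
best_hyperparameters = {"learning_rate": 2e-5, "num_train_epochs": 4}

# copy each winning value onto the training arguments, then retrain
for name, value in best_hyperparameters.items():
    setattr(args, name, value)
```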