docs/source/en/perf_train_gaudi.md
The Intel Gaudi AI accelerator family includes Intel Gaudi 1, Intel Gaudi 2, and Intel Gaudi 3. Each server has 8 Habana Processing Units (HPUs) with 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on first-gen Gaudi. The Gaudi Architecture overview covers the hardware in depth.
[`TrainingArguments`], [`Trainer`], and [`Pipeline`] detect Intel Gaudi devices and set the backend to `hpu` automatically.
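As a rough sketch of what this detection amounts to (the actual logic lives inside Transformers and Accelerate device placement, and `torch.hpu` only exists once the Habana PyTorch bridge is installed; the helper name below is hypothetical):

```python
import torch

def detect_backend() -> str:
    # Illustrative only, not the actual Transformers implementation:
    # prefer HPU when the Habana bridge exposes torch.hpu, then CUDA, then CPU.
    hpu = getattr(torch, "hpu", None)
    if hpu is not None and hpu.is_available():
        return "hpu"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```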
HPU lazy mode isn't compatible with all Transformers modeling code. If you run into errors, set the environment variable below to switch to eager mode.

```bash
export PT_HPU_LAZY_MODE=0
```
You may also need to enable int64 support to avoid casting issues with long integers.

```bash
export PT_ENABLE_INT64_SUPPORT=1
```
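The two shell exports above can also be set from Python, as long as they are set before the HPU runtime is initialized; a minimal sketch:

```python
import os

# Equivalent to the shell exports. Set these before importing torch or any
# Habana modules so the runtime picks them up at initialization.
os.environ["PT_HPU_LAZY_MODE"] = "0"         # switch to eager mode
os.environ["PT_ENABLE_INT64_SUPPORT"] = "1"  # enable int64 support
```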
All Gaudi generations support bf16 natively.

```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    bf16=True,  # supported on all Gaudi generations
)
```
Gaudi supports `torch.compile`. [`TrainingArguments`] automatically sets `torch_compile_backend` to `"hpu_backend"` when an HPU is detected.

```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    torch_compile=True,
)
```
Multi-HPU training uses HCCL (Habana Collective Communications Library) as the distributed backend. HCCL is the default, but you can also set `ddp_backend` explicitly.

```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    ddp_backend="hccl",
)
```
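To actually launch one process per HPU on a single node, one common approach is `torchrun` (an assumption about your setup; the script name below is a placeholder, and Optimum Habana also ships its own `gaudi_spawn.py` launcher):

```shell
# Hedged sketch: spawn 8 processes, one per HPU on a single Gaudi node.
# train.py stands in for your own training script.
torchrun --nproc_per_node=8 train.py
```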