# NeMo Automodel
NeMo Automodel is an open-source, PyTorch DTensor-native training library from NVIDIA. It supports pretraining and fine-tuning of LLMs and VLMs at both small and large scale, enabling fast experimentation in research and production environments, with parallelism strategies including FSDP2, tensor, pipeline, expert, and context parallelism. For high throughput, it integrates kernels from DeepEP and TransformerEngine.
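Install the library first. The PyPI package name below is an assumption; check the project's own install instructions if it fails to resolve.

```bash
# Assumes the library is published on PyPI as nemo-automodel.
pip install nemo-automodel
```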
```py
# Instantiate Nemotron v3 Nano with expert parallelism, FSDP2, and TransformerEngine + DeepEP kernels.
import os

import torch
import torch.distributed as dist

from nemo_automodel import NeMoAutoModelForCausalLM
from nemo_automodel.recipes._dist_setup import setup_distributed

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))
torch.manual_seed(1111)

# Build the device meshes for the requested parallelism layout.
dist_setup = setup_distributed(
    {
        "strategy": "fsdp2",
        "dp_size": None,  # inferred from world_size and the other parallelism sizes
        "dp_replicate_size": None,
        "tp_size": 1,
        "pp_size": 1,
        "cp_size": 1,
        "ep_size": 8,
    },
    world_size=dist.get_world_size(),
)

# Pass the meshes and configs through to from_pretrained so the model is
# sharded according to the layout above.
kwargs = {
    "device_mesh": dist_setup.device_mesh,
    "moe_mesh": dist_setup.moe_mesh,
    "distributed_config": dist_setup.strategy_config,
    "moe_config": dist_setup.moe_config,
}

model = NeMoAutoModelForCausalLM.from_pretrained("nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16", **kwargs)
print(model)

dist.destroy_process_group()
```
Launch the script with torchrun using the command below.

```bash
torchrun --nproc-per-node=8 /path/to/script
```
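To go from instantiation to actual pretraining steps, the standard PyTorch loop applies. The sketch below is an illustrative assumption rather than part of the NeMo Automodel docs: it assumes the model keeps the Hugging Face causal-LM interface (a `labels` argument and an output exposing `.loss`), and the dummy token ids and `AdamW` optimizer are placeholders.

```py
# Minimal sketch of one training step, placed before destroy_process_group()
# in the script above. Assumes the HF-style forward(labels=...) -> output.loss API.
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Dummy batch of token ids; replace with a real tokenized dataset.
input_ids = torch.randint(0, model.config.vocab_size, (2, 128), device="cuda")

out = model(input_ids=input_ids, labels=input_ids)  # causal-LM loss on shifted labels
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
```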
[`NeMoAutoModelForCausalLM.from_pretrained`] works as a drop-in replacement for [`AutoModel.from_pretrained`], with dynamic high-performance layer swaps and support for more refined parallelisms like Expert Parallelism (EP). It builds on [`AutoConfig.from_pretrained`] to automatically load custom implementations like Nemotron Nano V3.
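Because the interface mirrors `transformers`, migrating an existing script can be as simple as changing the import. A minimal single-process sketch, assuming `from_pretrained` accepts the same checkpoint id as its `transformers` counterpart:

```py
# Hypothetical drop-in swap: replace transformers.AutoModelForCausalLM with
# NeMoAutoModelForCausalLM; the checkpoint id and call stay the same.
# from transformers import AutoModelForCausalLM   # before
from nemo_automodel import NeMoAutoModelForCausalLM  # after

model = NeMoAutoModelForCausalLM.from_pretrained(
    "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16"
)
```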