```shell
# Install Megatron Core
uv pip install megatron-core

# Distributed training example (2 GPUs, mock data)
torchrun --nproc_per_node=2 examples/run_simple_mcore_train_loop.py
```
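The example above runs on a single node. As a sketch, the same script can be launched across multiple nodes with torchrun's standard rendezvous flags (`--nnodes`, `--node_rank`, `--master_addr`, `--master_port`); the hostname and port shown here are placeholders you would replace with your own cluster's values:

```shell
# Hypothetical 2-node launch (run once per node, with node_rank 0 and 1).
# MASTER_NODE_IP is a placeholder for the rank-0 node's address.
torchrun --nproc_per_node=2 \
         --nnodes=2 \
         --node_rank=0 \
         --master_addr=MASTER_NODE_IP \
         --master_port=29500 \
         examples/run_simple_mcore_train_loop.py
```

The total world size is `nnodes * nproc_per_node`, so this launch would train across 4 GPUs.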
Megatron Core is an open-source PyTorch-based library that contains GPU-optimized techniques and cutting-edge system-level optimizations. It abstracts them into composable, modular APIs, giving developers and model researchers full flexibility to train custom transformers at scale on NVIDIA accelerated computing infrastructure.
Examples:
Documentation:
For complete installation instructions, performance benchmarks, and ecosystem information, see the main README.