
CPU-GPU Expert Scheduling Tutorial

This tutorial demonstrates how to use the CPU-GPU expert scheduling feature in KTransformers with SGLang. This feature introduces a flexible GPU expert mask system that allows intelligent placement of MoE experts across CPU and GPU, optimizing inference performance based on workload patterns.

Table of Contents

  • Hardware Requirements
  • Prerequisites
  • Step 1: Download Model Weights
  • Step 2: Launch Server with Expert Scheduling
  • Step 3: Send Inference Requests
  • Performance
  • Troubleshooting
  • Additional Resources

Hardware Requirements

Minimum Configuration:

  • GPU: NVIDIA RTX 4090 24 GB (or an equivalent GPU with at least 24 GB of VRAM available)
  • CPU: x86 CPU with AVX512 support (e.g., Intel Sapphire Rapids, AMD EPYC)
  • RAM: At least 256 GB of system memory
  • Storage: Sufficient space for model weights

Tested Configuration:

  • GPU: 4 x NVIDIA GeForce RTX 4090 (24 GB)
  • CPU: Intel Xeon Gold 6454S
  • RAM: 512 GB DDR5
  • OS: Linux (Ubuntu 20.04+ recommended)

Prerequisites

Before starting, ensure you have:

  1. SGLang installed

    Install the kvcache-ai fork of SGLang (choose one):

    bash
    # Option A: One-click install (from ktransformers root)
    ./install.sh
    
    # Option B: pip install
    pip install sglang-kt
    
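    Either way, you can sanity-check the install from Python afterwards (a quick check, assuming the package exposes a standard __version__ attribute):

    bash
    python -c "import sglang; print(sglang.__version__)"
    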
  2. KTransformers installed

    bash
    git clone https://github.com/kvcache-ai/ktransformers.git
    cd ktransformers/kt-kernel
    bash ./install.sh
    

    After installation, verify the CLI is working:

    bash
    kt version
    
  3. CUDA toolkit - CUDA 12.0+ recommended

  4. Hugging Face CLI - For downloading models:

    bash
    pip install -U huggingface-hub
    

Step 1: Download Model Weights

Download your preferred MoE model weights. This feature supports various MoE models including:

  • Qwen3-Next-80B-A3B-Instruct-FP8

    bash
    huggingface-cli download Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 --local-dir /path/to/qwen3-next-80b
    

Step 2: Launch Server with Expert Scheduling

Basic Usage

The simplest way to start the server with expert scheduling:

bash
python -m sglang.launch_server \
    --model /path/to/model \
    --kt-num-gpu-experts 8 \
    --kt-expert-placement-strategy uniform
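
Once the server has finished loading, you can confirm it is reachable before sending requests (assuming SGLang's standard /health endpoint and the default port 30000):

bash
curl http://localhost:30000/health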

Expert Placement Strategies

The system provides four expert placement strategies:

| Strategy | Description | Use Case |
|---|---|---|
| uniform | Distributes GPU experts evenly across all MoE layers | Default; no prior statistics needed |
| frequency | Places the most frequently activated experts on GPU | Best performance when activation statistics are available |
| front-loading | Fills GPU experts from the first layer onwards | Testing or specific workload patterns |
| random | Randomly selects experts with a fixed seed (42) | Baseline comparison |

Using the Frequency Strategy (recommended for best performance):

bash
python -m sglang.launch_server \
    --model /path/to/model \
    --kt-num-gpu-experts 8 \
    --kt-expert-placement-strategy frequency \
    --init-expert-location /path/to/activation_stats.pt

Using Dynamic Expert Update:

bash
python -m sglang.launch_server \
    --model /path/to/model \
    --kt-num-gpu-experts 8 \
    --kt-expert-placement-strategy frequency \
    --init-expert-location /path/to/activation_stats.pt \
    --kt-enable-dynamic-expert-update \
    --kt-gpu-prefill-token-threshold 512

Key Parameters

| Parameter | Description |
|---|---|
| --kt-num-gpu-experts | Number of GPU experts per MoE layer. Internally multiplied by the number of MoE layers to get the total number of GPU experts. Ignored if --kt-gpu-experts-ratio is set. |
| --kt-gpu-experts-ratio | Ratio of total experts to place on GPU (0.0-1.0). If set, overrides --kt-num-gpu-experts. Example: 0.1 means 10% of all experts across all layers will be on GPU. |
| --kt-expert-placement-strategy | Expert placement strategy: frequency, uniform, front-loading, or random. Default: uniform. |
| --init-expert-location | Path to an activation statistics file (.pt) for the frequency strategy. |
| --kt-enable-dynamic-expert-update | Enable dynamic expert update during inference. |
| --kt-gpu-prefill-token-threshold | Token threshold for triggering dynamic expert redistribution during prefill. |
| --record-kt-gpu-expert-distribution | Enable recording of GPU expert distribution for analysis. |
| --expert-distribution-recorder-mode | Recording mode: stat (default), stat_approx, per_pass, or per_token. |
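
For example, you can combine ratio-based placement with distribution recording using the flags above (the path and values are illustrative, not tuned recommendations):

bash
python -m sglang.launch_server \
    --model /path/to/model \
    --kt-gpu-experts-ratio 0.2 \
    --kt-expert-placement-strategy uniform \
    --record-kt-gpu-expert-distribution \
    --expert-distribution-recorder-mode stat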

Step 3: Send Inference Requests

Once the server is running (default: http://localhost:30000), you can interact with the model in several ways:

Option A: Interactive Chat with KT CLI

The easiest way to chat with the model:

bash
kt chat

This opens an interactive terminal chat session. Type your messages and press Enter to send. Use Ctrl+C to exit.

Option B: OpenAI-Compatible API

The server exposes an OpenAI-compatible API at http://localhost:30000/v1.

curl example (streaming):

bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "model-name",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
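
If you are unsure what to pass in the "model" field, the server's model list can be queried first (assuming the standard OpenAI-compatible /v1/models endpoint is exposed):

bash
curl http://localhost:30000/v1/models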

Performance

Throughput (tokens/s)

The following throughput figures were measured on Qwen3-Next-80B-A3B-Instruct-FP8 with 4 x RTX 4090 GPUs and an Intel Xeon Gold 6454S, at tensor parallel size 4, using the ShareGPT dataset:

| GPU Expert Ratio | random | uniform | front-loading | frequency | dynamic-expert-update |
|---|---|---|---|---|---|
| 0% | 53.01 | 52.96 | 54.18 | 52.72 | 53.37 |
| 10% | 56.63 | 56.57 | 57.18 | 58.60 | 70.22 |
| 20% | 58.75 | 60.28 | 58.82 | 61.92 | 74.73 |
| 30% | 62.86 | 62.08 | 63.87 | 66.50 | 75.55 |
| 40% | 66.81 | 66.82 | 67.45 | 72.78 | 80.98 |
| 50% | 70.38 | 65.25 | 73.65 | 76.19 | 81.17 |
| 60% | 71.33 | 72.80 | 77.95 | 82.33 | 82.30 |
| 70% | 74.40 | 76.17 | 81.59 | 89.37 | 88.70 |
| 80% | 79.71 | 79.20 | 89.20 | 100.67 | 92.31 |
| 90% | 88.82 | 81.06 | 98.14 | 107.15 | 95.04 |
| 100% | 112.61 | 112.32 | 111.82 | 114.26 | 112.99 |

The frequency and dynamic-expert-update strategies show significant improvements over the baseline strategies, especially at low-to-mid GPU expert ratios: at a 10% ratio, for example, dynamic-expert-update reaches 70.22 tokens/s versus 56.57 tokens/s for uniform, roughly a 24% gain.

Troubleshooting

OOM (Out of Memory) Issues

If you encounter OOM, adjust these parameters when launching the server:

| Parameter | VRAM Impact |
|---|---|
| --kt-num-gpu-experts / --kt-gpu-experts-ratio | Reduces expert weight VRAM usage |
| --chunked-prefill-size | Reduces the extra VRAM allocated during prefill |
| --max-total-tokens | Reduces KV cache VRAM usage |
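
A reduced-memory launch might combine these knobs as follows (the specific values are illustrative starting points, not tuned recommendations):

bash
python -m sglang.launch_server \
    --model /path/to/model \
    --kt-num-gpu-experts 4 \
    --kt-expert-placement-strategy uniform \
    --chunked-prefill-size 2048 \
    --max-total-tokens 8192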

Dynamic Expert Update Not Triggering

Ensure all conditions are met:

  1. --kt-enable-dynamic-expert-update is enabled
  2. --kt-gpu-prefill-token-threshold is set
  3. The prefill length is >= the threshold value (a quick test is sketched below)
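
A quick way to exercise condition 3 is to send a prompt clearly longer than the threshold; a minimal sketch, assuming a 512-token threshold (the repeated filler text exists only to pad the prompt length):

bash
# Build a prompt well above 512 tokens, then send it to the running server
PROMPT=$(printf 'Summarize the history of computing. %.0s' {1..200})
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"model-name\", \"messages\": [{\"role\": \"user\", \"content\": \"$PROMPT\"}]}"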

Statistics Recording

To save expert distribution statistics to a custom path, set the environment variable:

bash
export SGLANG_EXPERT_DISTRIBUTION_RECORDER_DIR=/path/to/output
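
A possible end-to-end flow is to record statistics during a representative workload with one launch, then feed them back for frequency-based placement in the next; the statistics file name under the output directory is illustrative:

bash
# 1. Record expert activations while serving a representative workload
export SGLANG_EXPERT_DISTRIBUTION_RECORDER_DIR=/path/to/output
python -m sglang.launch_server \
    --model /path/to/model \
    --kt-num-gpu-experts 8 \
    --kt-expert-placement-strategy uniform \
    --record-kt-gpu-expert-distribution

# 2. Relaunch with frequency placement driven by the recorded statistics
python -m sglang.launch_server \
    --model /path/to/model \
    --kt-num-gpu-experts 8 \
    --kt-expert-placement-strategy frequency \
    --init-expert-location /path/to/output/activation_stats.pt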

Additional Resources