# DeepSeek V3.2 SGLang Tutorial
This tutorial demonstrates how to run DeepSeek V3.2 model inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to CPU.
Minimum Configuration:
Tested Configuration:
Before starting, ensure you have:
- kt-kernel and sglang-kt installed:

  ```bash
  pip install kt-kernel sglang-kt
  ```

  or run `./install.sh` from the ktransformers root.
- huggingface-hub, used to download the model weights:

  ```bash
  pip install huggingface-hub
  ```
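To confirm the packages are visible to your current Python environment before moving on, a quick metadata check like the following can help (a minimal sketch; the distribution names match the `pip install` commands above):

```python
# Sanity check: print the installed versions of the packages installed above.
from importlib.metadata import version, PackageNotFoundError

for dist in ("kt-kernel", "sglang-kt", "huggingface-hub"):
    try:
        print(f"{dist}: {version(dist)}")
    except PackageNotFoundError:
        print(f"{dist}: not installed")
```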
Running DeepSeek V3.2 requires downloading the model repositories from Hugging Face:
```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download DeepSeek-V3.2 (FP8 weights for GPU)
huggingface-cli download deepseek-ai/DeepSeek-V3.2 \
  --local-dir /path/to/deepseek-v3.2

# Download DeepSeek-V3.2-Speciale (if needed)
huggingface-cli download deepseek-ai/DeepSeek-V3.2-Speciale \
  --local-dir /path/to/deepseek-v3.2-speciale
```
**Note:** Replace the `/path/to/...` placeholders with your actual storage paths throughout this tutorial.
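If you prefer to script the downloads instead of using `huggingface-cli`, the `huggingface_hub` Python API can do the same thing; this is an optional sketch using the repository IDs and target paths from the commands above:

```python
# Optional: download the model repositories programmatically.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.2",
    local_dir="/path/to/deepseek-v3.2",
)

# Only if you also need the Speciale variant.
snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.2-Speciale",
    local_dir="/path/to/deepseek-v3.2-speciale",
)
```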
Convert the FP8 GPU weights to INT4 quantized CPU weights using the provided conversion script.
For a 2-NUMA system with 60 physical cores:
```bash
cd /path/to/ktransformers/kt-kernel
python scripts/convert_cpu_weights.py \
  --input-path /path/to/deepseek-v3.2 \
  --input-type fp8 \
  --output /path/to/deepseek-v3.2-INT4 \
  --quant-method int4 \
  --cpuinfer-threads 60 \
  --threadpool-count 2 \
  --no-merge-safetensor
```
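The thread settings follow the machine's topology: `--cpuinfer-threads` matches the number of physical cores and `--threadpool-count` matches the number of NUMA nodes (the same values are reused for `--kt-cpuinfer` and `--kt-threadpool-count` when launching the server below). If you are unsure of your topology, a small helper like this sketch, which parses `lscpu` output on Linux, can suggest starting values:

```python
# Sketch: derive starting values for the thread settings from lscpu (Linux only).
import subprocess

def cpu_topology():
    info = {}
    for line in subprocess.run(["lscpu"], capture_output=True, text=True).stdout.splitlines():
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    physical_cores = int(info["Core(s) per socket"]) * int(info["Socket(s)"])
    numa_nodes = int(info["NUMA node(s)"])
    return physical_cores, numa_nodes

cores, numa = cpu_topology()
print(f"--cpuinfer-threads {cores} --threadpool-count {numa}")
```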
Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.
For a single NVIDIA L20 (48 GB) GPU paired with a 2-NUMA CPU system:
```bash
python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30000 \
  --model /path/to/deepseek-v3.2 \
  --kt-weight-path /path/to/deepseek-v3.2-INT4 \
  --kt-cpuinfer 60 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 1 \
  --attention-backend triton \
  --trust-remote-code \
  --mem-fraction-static 0.98 \
  --chunked-prefill-size 4096 \
  --max-running-requests 32 \
  --max-total-tokens 40000 \
  --served-model-name DeepSeek-V3.2 \
  --enable-mixed-chunk \
  --tensor-parallel-size 1 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --kt-method AMXINT4
```
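Loading the model can take a while, so it is convenient to wait for the server to become ready before sending requests. One option is to poll the OpenAI-compatible model listing endpoint; the snippet below is a sketch that assumes the host and port used above:

```python
# Sketch: poll /v1/models until the SGLang server answers, then proceed.
import time
import urllib.error
import urllib.request

URL = "http://localhost:30000/v1/models"

for _ in range(120):  # up to ~10 minutes at 5-second intervals
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            if resp.status == 200:
                print("Server is ready:", resp.read().decode())
                break
    except (urllib.error.URLError, OSError):
        pass
    time.sleep(5)
else:
    raise RuntimeError("Server did not become ready in time")
```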
Once the server is running, you can send inference requests using the OpenAI-compatible API.
```bash
curl -s http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "DeepSeek-V3.2",
    "stream": false,
    "messages": [
      {"role": "user", "content": "hi"}
    ]
  }'
```
Example response:

```json
{
  "id": "adbb44f6aafb4b58b167e42fbbb1eed3",
  "object": "chat.completion",
  "created": 1764675126,
  "model": "DeepSeek-V3.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hi there! 👋 \n\nThanks for stopping by! How can I help you today? Feel free to ask me anything - I'm here to assist with questions, explanations, conversations, or whatever you need! 😊\n\nIs there something specific on your mind, or would you like to know more about what I can do?",
        "reasoning_content": null,
        "tool_calls": null
      },
      "logprobs": null,
      "finish_reason": "stop",
      "matched_stop": 1
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "total_tokens": 72,
    "completion_tokens": 67,
    "prompt_tokens_details": null,
    "reasoning_tokens": 0
  },
  "metadata": {
    "weight_version": "default"
  }
}
```
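The same request can also be sent from Python with the official `openai` client, which works against any OpenAI-compatible endpoint. The sketch below assumes the server from above and uses a placeholder API key (the launch command did not set `--api-key`, so the value is not checked):

```python
# Sketch: non-streaming and streaming chat completions via the OpenAI Python client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",
    api_key="not-needed",  # placeholder; no --api-key was configured on the server
)

# Single blocking response, equivalent to the curl example above.
response = client.chat.completions.create(
    model="DeepSeek-V3.2",
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)

# Token streaming for interactive use.
stream = client.chat.completions.create(
    model="DeepSeek-V3.2",
    messages=[{"role": "user", "content": "Explain expert offloading in one paragraph."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```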