SGLang Model Gateway is a high-performance model-routing gateway for large-scale LLM deployments. It centralizes worker lifecycle management, balances traffic across heterogeneous protocols (HTTP, gRPC, OpenAI-compatible), and provides enterprise-ready control over history storage, MCP tooling, and privacy-sensitive workflows. The gateway is deeply optimized for the SGLang serving runtime, but can route to any OpenAI-compatible backend.
Key capabilities include:

- Multi-model routing: `--enable-igw` dynamically instantiates multiple router stacks (HTTP regular/PD, gRPC) and applies per-model policies for multi-tenant deployments.
- In-router agentic features: `/v1/responses`, a native MCP client (STDIO/HTTP/SSE/Streamable), and history storage all operate within the router boundary.
- Worker lifecycle management: the gateway queries worker info endpoints (`/get_server_info`, `/get_model_info`), tracks load, and registers/removes workers in the shared registry.
- Async onboarding: worker registration returns a job handle (`/workers/{worker_id}`) so clients can track onboarding progress.
- Broad API surface: `/generate`, `/v1/chat/completions`, `/v1/completions`, `/v1/responses`, `/v1/embeddings`, `/v1/rerank`, `/v1/classify`, `/v1/tokenize`, `/v1/detokenize`, and associated admin endpoints.
- Unified history storage: `/v1/responses` agentic flows, MCP sessions, and conversation APIs share the same storage layer, enabling compliance for regulated workloads.

Pre-built Docker images are available on Docker Hub with multi-architecture support (x86_64 and ARM64):
docker pull lmsysorg/sgl-model-gateway:latest
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
rustc --version
cargo --version
Make sure Python with pip and virtualenv tooling is available.
cd sgl-model-gateway
cargo build --release
pip install maturin
# Fast development mode
cd sgl-model-gateway/bindings/python
maturin develop
# Production build
maturin build --release --out dist --features vendored-openssl
pip install --force-reinstall dist/*.whl
# Rust binary
./target/release/sgl-model-gateway \
--worker-urls http://worker1:8000 http://worker2:8000 \
--policy cache_aware
# Python launcher
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8000 \
--policy cache_aware
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--reasoning-parser deepseek-r1 \
--tool-call-parser json \
--host 0.0.0.0 --port 8080
Launch the router and a fleet of SGLang workers in one process:
python -m sglang_router.launch_server \
--model meta-llama/Meta-Llama-3.1-8B-Instruct \
--dp-size 4 \
--host 0.0.0.0 \
--port 30000
Comprehensive example with router arguments (prefixed with --router-):
python -m sglang_router.launch_server \
--host 0.0.0.0 \
--port 8080 \
--model meta-llama/Llama-3.1-8B-Instruct \
--tp-size 1 \
--dp-size 8 \
--grpc-mode \
--log-level debug \
--router-prometheus-port 10001 \
--router-tool-call-parser llama \
--router-model-path meta-llama/Llama-3.1-8B-Instruct \
--router-policy round_robin \
--router-log-level debug
Run workers independently and point the router at their HTTP endpoints:
# Worker nodes
python -m sglang.launch_server --model meta-llama/Meta-Llama-3.1-8B-Instruct --port 8000
python -m sglang.launch_server --model meta-llama/Meta-Llama-3.1-8B-Instruct --port 8001
# Router node
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--policy cache_aware \
--host 0.0.0.0 --port 30000
Use SRT gRPC workers to unlock the highest throughput and access native reasoning/tool pipelines:
# Workers expose gRPC endpoints
python -m sglang.launch_server \
--model meta-llama/Llama-3.1-8B-Instruct \
--grpc-mode \
--port 20000
# Router
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--reasoning-parser deepseek-r1 \
--tool-call-parser json \
--host 0.0.0.0 --port 8080
The gRPC router supports both regular HTTP-equivalent serving and PD (prefill/decode) serving. Provide --tokenizer-path or --model-path (HuggingFace ID or local directory) whenever connection mode resolves to gRPC.
Split prefill and decode workers for PD-aware caching and balancing:
python -m sglang_router.launch_router \
--pd-disaggregation \
--prefill http://prefill1:30001 9001 \
--decode http://decode1:30011 \
--prefill-policy cache_aware \
--decode-policy power_of_two
Prefill entries accept an optional bootstrap port. PD mode merges prefill metadata with decode outputs and streams results back to the client.
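The optional bootstrap port convention can be illustrated with a small helper. `parse_prefill_entry` is a hypothetical function, not part of the gateway; it only sketches how a `--prefill URL [BOOTSTRAP_PORT]` entry might be interpreted:

```python
def parse_prefill_entry(args):
    """Sketch: interpret a `--prefill URL [BOOTSTRAP_PORT]` entry.

    The real parsing happens inside the Rust gateway; this only shows
    the optional-bootstrap-port convention described above.
    """
    url = args[0]
    bootstrap_port = int(args[1]) if len(args) > 1 else None
    return {"url": url, "bootstrap_port": bootstrap_port}

print(parse_prefill_entry(["http://prefill1:30001", "9001"]))
print(parse_prefill_entry(["http://decode1:30011"]))
```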
Proxy OpenAI-compatible endpoints while keeping history and MCP sessions local:
python -m sglang_router.launch_router \
--backend openai \
--worker-urls https://api.openai.com \
--history-backend memory
OpenAI backend mode expects exactly one --worker-urls entry per router instance.
Enable IGW mode to route multiple models through a single router:
./target/release/sgl-model-gateway \
--enable-igw \
--policy cache_aware \
--max-concurrent-requests 512
# Register workers dynamically
curl -X POST http://localhost:30000/workers \
-H "Content-Type: application/json" \
-d '{
"url": "http://worker-a:8000",
"model_id": "mistral",
"priority": 10,
"labels": {"tier": "gold"}
}'
The gateway provides HTTP endpoints for text tokenization with batch support, designed to mirror the SGLang Python tokenization API.
<table style={{width: "100%", borderCollapse: "collapse", tableLayout: "fixed"}}> <colgroup> <col style={{width: "34%"}} /> <col style={{width: "33%"}} /> <col style={{width: "33%"}} /> </colgroup> <thead> <tr style={{borderBottom: "2px solid #d55816"}}> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Method</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.05)"}}>Path</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Description</th> </tr> </thead> <tbody> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`POST`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/v1/tokenize`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Tokenize text to token IDs (single or batch)</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`POST`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/v1/detokenize`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Convert token IDs back to text (single or batch)</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`POST`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/v1/tokenizers`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Register a new tokenizer (async, returns job status)</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`GET`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/v1/tokenizers`</td> <td style={{padding: "9px 
12px", backgroundColor: "rgba(255,255,255,0.02)"}}>List all registered tokenizers</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`GET`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/v1/tokenizers/{id}`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Get tokenizer info by UUID</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`GET`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/v1/tokenizers/{id}/status`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Check async tokenizer loading status</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`DELETE`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/v1/tokenizers/{id}`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Remove a tokenizer from the registry</td> </tr> </tbody> </table>

{
"model": "meta-llama/Llama-3.1-8B-Instruct",
"prompt": "Hello, world!"
}
{
"model": "meta-llama/Llama-3.1-8B-Instruct",
"prompt": ["Hello", "World", "How are you?"]
}
{
"tokens": [15339, 11, 1917, 0],
"count": 4,
"char_count": 13
}
{
"model": "meta-llama/Llama-3.1-8B-Instruct",
"tokens": [15339, 11, 1917, 0],
"skip_special_tokens": true
}
{
"text": "Hello, world!"
}
curl -X POST http://localhost:30000/v1/tokenizers \
-H "Content-Type: application/json" \
-d '{"name": "llama3", "source": "meta-llama/Llama-3.1-8B-Instruct"}'
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"status": "pending",
"message": "Tokenizer registration queued"
}
Check status:
curl http://localhost:30000/v1/tokenizers/550e8400-e29b-41d4-a716-446655440000/status
The gateway provides admin endpoints for parsing reasoning content and function calls from LLM outputs.
<table style={{width: "100%", borderCollapse: "collapse", tableLayout: "fixed"}}> <colgroup> <col style={{width: "34%"}} /> <col style={{width: "33%"}} /> <col style={{width: "33%"}} /> </colgroup> <thead> <tr style={{borderBottom: "2px solid #d55816"}}> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Method</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.05)"}}>Path</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Description</th> </tr> </thead> <tbody> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`POST`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/parse/reasoning`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Separate reasoning (`<think>`) from normal text</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`POST`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`/parse/function_call`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Parse function/tool calls from text</td> </tr> </tbody> </table>

{
"text": "<think>Let me analyze this step by step...</think>The answer is 42.",
"parser": "deepseek-r1"
}
{
"normal_text": "The answer is 42.",
"reasoning_text": "Let me analyze this step by step..."
}
{
"text": "{\"name\": \"get_weather\", \"arguments\": {\"city\": \"NYC\"}}",
"parser": "json"
}
The /v1/classify endpoint provides text classification using sequence classification models (e.g., Qwen2ForSequenceClassification, BertForSequenceClassification).
curl http://localhost:30000/v1/classify \
-H "Content-Type: application/json" \
-d '{
"model": "jason9693/Qwen2.5-1.5B-apeach",
"input": "I love this product!"
}'
{
"id": "classify-a1b2c3d4-5678-90ab-cdef-1234567890ab",
"object": "list",
"created": 1767034308,
"model": "jason9693/Qwen2.5-1.5B-apeach",
"data": [
{
"index": 0,
"label": "positive",
"probs": [0.12, 0.88],
"num_classes": 2
}
],
"usage": {
"prompt_tokens": 6,
"completion_tokens": 0,
"total_tokens": 6
}
}
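The label selection in the response above can be sketched as argmax over the returned probabilities, with a fallback to generic names when no label mapping exists. `label_for` is a hypothetical helper, not a gateway API:

```python
def label_for(probs, id2label=None):
    # Pick the argmax class; fall back to generic LABEL_i names when
    # no id2label mapping is available (illustrative sketch only).
    idx = max(range(len(probs)), key=probs.__getitem__)
    if id2label:
        return id2label[idx]
    return f"LABEL_{idx}"

print(label_for([0.12, 0.88], {0: "negative", 1: "positive"}))  # positive
print(label_for([0.7, 0.3]))  # LABEL_0
```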
Labels are taken from the model configuration's `id2label` field; models without this mapping use generic labels (`LABEL_0`, `LABEL_1`, etc.).
curl -X POST http://localhost:30000/workers \
-H "Content-Type: application/json" \
-d '{"url":"grpc://0.0.0.0:31000","worker_type":"regular"}'
curl http://localhost:30000/workers
Response:
{
"workers": [
{
"id": "2f3a0c3e-3a7b-4c3f-8c70-1b7d4c3a6e1f",
"url": "http://0.0.0.0:31378",
"model_id": "mistral",
"priority": 50,
"cost": 1.0,
"worker_type": "regular",
"is_healthy": true,
"load": 0,
"connection_mode": "Http"
}
],
"total": 1,
"stats": {
"prefill_count": 0,
"decode_count": 0,
"regular_count": 1
}
}
Tune the cache-aware policy thresholds:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--policy cache_aware \
--cache-threshold 0.5 \
--balance-abs-threshold 32 \
--balance-rel-threshold 1.5 \
--eviction-interval-secs 120 \
--max-tree-size 67108864
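The interaction of these thresholds can be sketched in a few lines. This is a simplified, hypothetical model of cache-aware selection (the real policy maintains a radix tree per worker); `pick_worker` and its inputs are illustrative names:

```python
def pick_worker(workers, match_rates, cache_threshold=0.5,
                abs_thresh=32, rel_thresh=1.5):
    """Sketch: fall back to least-loaded routing when load is imbalanced,
    otherwise prefer the worker with the best prefix-cache match above
    --cache-threshold. Simplified from the real radix-tree policy."""
    loads = [w["load"] for w in workers]
    lo, hi = min(loads), max(loads)
    imbalanced = (hi - lo) > abs_thresh and hi > rel_thresh * max(lo, 1)
    if imbalanced:
        return min(workers, key=lambda w: w["load"])["url"]
    best = max(workers, key=lambda w: match_rates.get(w["url"], 0.0))
    if match_rates.get(best["url"], 0.0) >= cache_threshold:
        return best["url"]
    return min(workers, key=lambda w: w["load"])["url"]

workers = [{"url": "http://w1:8000", "load": 3},
           {"url": "http://w2:8000", "load": 5}]
# Load is balanced, and w2 has a strong cached prefix, so w2 wins.
print(pick_worker(workers, {"http://w2:8000": 0.9}))
```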
Configure exponential backoff retries:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--retry-max-retries 5 \
--retry-initial-backoff-ms 50 \
--retry-max-backoff-ms 30000 \
--retry-backoff-multiplier 1.5 \
--retry-jitter-factor 0.2
Retryable Status Codes: 408, 429, 500, 502, 503, 504
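The backoff schedule produced by these flags can be sketched as exponential growth capped at the maximum, with multiplicative jitter. `backoff_ms` is an illustrative function mirroring the flag defaults above, not the gateway's internal implementation:

```python
import random

def backoff_ms(attempt, initial=50, multiplier=1.5, max_ms=30000, jitter=0.2):
    # Exponential backoff capped at max_ms, with +/- jitter applied
    # multiplicatively (sketch of the retry flags shown above).
    base = min(initial * multiplier ** attempt, max_ms)
    return base * (1 + random.uniform(-jitter, jitter))

for attempt in range(5):
    print(f"retry {attempt}: ~{backoff_ms(attempt):.0f} ms")
```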
Per-worker circuit breakers prevent cascading failures:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--cb-failure-threshold 5 \
--cb-success-threshold 2 \
--cb-timeout-duration-secs 30 \
--cb-window-duration-secs 60
Circuit Breaker States: Closed (normal operation), Open (requests short-circuited after failures reach `--cb-failure-threshold`), and Half-Open (after `--cb-timeout-duration-secs`, probe requests are allowed; `--cb-success-threshold` consecutive successes close the circuit again).
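The Closed/Open/Half-Open cycle can be sketched as a small state machine. The class below is illustrative only, assuming the threshold semantics of the flags above; it is not the gateway's implementation:

```python
class CircuitBreaker:
    """Sketch of the Closed -> Open -> Half-Open cycle using the assumed
    semantics of --cb-failure-threshold / --cb-success-threshold."""
    def __init__(self, failure_threshold=5, success_threshold=2):
        self.state = "closed"
        self.failures = 0
        self.successes = 0
        self.failure_threshold = failure_threshold
        self.success_threshold = success_threshold

    def record(self, ok):
        if ok:
            self.failures = 0
            if self.state == "half_open":
                self.successes += 1
                if self.successes >= self.success_threshold:
                    self.state, self.successes = "closed", 0
        else:
            self.failures += 1
            # Any failure while probing, or too many consecutive
            # failures while closed, opens the circuit.
            if self.state == "half_open" or self.failures >= self.failure_threshold:
                self.state, self.failures = "open", 0

    def on_timeout(self):
        # After --cb-timeout-duration-secs elapses, allow probes again.
        if self.state == "open":
            self.state, self.successes = "half_open", 0

cb = CircuitBreaker()
for _ in range(5):
    cb.record(ok=False)
print(cb.state)  # open
cb.on_timeout()
cb.record(ok=True)
cb.record(ok=True)
print(cb.state)  # closed
```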
Configure concurrency limits and request queueing:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--max-concurrent-requests 256 \
--rate-limit-tokens-per-second 512 \
--queue-size 128 \
--queue-timeout-secs 30
Requests beyond the concurrency limit wait in a FIFO queue. Returns:
- `429 Too Many Requests` when the queue is full
- `408 Request Timeout` when the queue timeout expires

Configure worker health checking:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--health-check-interval-secs 30 \
--health-check-timeout-secs 10 \
--health-success-threshold 2 \
--health-failure-threshold 3 \
--health-check-endpoint /health
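The success/failure thresholds give health checks hysteresis: a worker only flips state after enough consecutive results in one direction. The tracker below is a hypothetical sketch of that behavior, not the gateway's code:

```python
class HealthTracker:
    # Sketch of --health-success-threshold / --health-failure-threshold:
    # state changes only after N consecutive results in one direction.
    def __init__(self, success_threshold=2, failure_threshold=3):
        self.healthy = True
        self.consec_ok = 0
        self.consec_fail = 0
        self.success_threshold = success_threshold
        self.failure_threshold = failure_threshold

    def observe(self, ok):
        if ok:
            self.consec_ok += 1
            self.consec_fail = 0
            if not self.healthy and self.consec_ok >= self.success_threshold:
                self.healthy = True
        else:
            self.consec_fail += 1
            self.consec_ok = 0
            if self.healthy and self.consec_fail >= self.failure_threshold:
                self.healthy = False
        return self.healthy

t = HealthTracker()
# Three failures mark the worker unhealthy; two successes recover it.
print([t.observe(ok) for ok in [False, False, False, True, True]])
```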
The gateway includes built-in reasoning parsers for models that use Chain-of-Thought (CoT) reasoning with explicit thinking blocks.
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path deepseek-ai/DeepSeek-R1 \
--reasoning-parser deepseek-r1
When a reasoning parser is configured, the gRPC router automatically separates `<think>` reasoning content from the final answer in model outputs.
The gateway supports parsing function/tool calls from LLM outputs in multiple formats.
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--tool-call-parser json
The gateway supports multiple tokenizer backends:
- HuggingFace model IDs (resolved via `--model-path`)
- Local `tokenizer.json` files or tokenizer directories (via `--tokenizer-path`)

# HuggingFace model
--model-path meta-llama/Llama-3.1-8B-Instruct
# Local tokenizer
--tokenizer-path /path/to/tokenizer.json
# With chat template override
--chat-template /path/to/template.jinja
Two-level caching for optimal performance:
<table style={{width: "100%", borderCollapse: "collapse", tableLayout: "fixed"}}> <colgroup> <col style={{width: "34%"}} /> <col style={{width: "33%"}} /> <col style={{width: "33%"}} /> </colgroup> <thead> <tr style={{borderBottom: "2px solid #d55816"}}> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Cache</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.05)"}}>Type</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Description</th> </tr> </thead> <tbody> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>L0</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>Exact match</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Whole-string caching for repeated prompts</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>L1</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>Prefix match</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Prefix boundary matching for incremental prompts</td> </tr> </tbody> </table>

Enable both cache levels:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--enable-l0-cache \
--l0-max-entries 10000 \
--enable-l1-cache \
--l1-max-memory 52428800 # 50MB
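The L0/L1 layering can be sketched as an exact-match dictionary in front of a longest-prefix lookup. This is a simplified illustration (the real L1 uses a trie with memory accounting); the class and its methods are hypothetical:

```python
class TwoLevelCache:
    """Sketch of L0 (exact match) in front of L1 (longest cached prefix)."""
    def __init__(self):
        self.l0 = {}  # exact prompt text -> token ids
        self.l1 = {}  # cached prefix text -> token ids for that prefix

    def put(self, text, tokens):
        self.l0[text] = tokens
        self.l1[text] = tokens

    def get(self, text):
        if text in self.l0:                      # L0: exact hit
            return text, self.l0[text]
        best = ""
        for prefix in self.l1:                   # L1: longest cached prefix
            if text.startswith(prefix) and len(prefix) > len(best):
                best = prefix
        return (best, self.l1[best]) if best else (None, None)

cache = TwoLevelCache()
cache.put("Hello, wor", [15339, 11, 1917])
# Exact miss, but the cached prefix "Hello, wor" covers part of the prompt.
print(cache.get("Hello, world!"))
```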
The gateway provides native Model Context Protocol (MCP) client integration for tool execution.
python -m sglang_router.launch_router \
--mcp-config-path /path/to/mcp-config.yaml \
--worker-urls http://worker1:8000
servers:
- name: "filesystem"
command: "npx"
args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
protocol: "stdio"
required: false
- name: "github"
url: "https://api.github.com/mcp"
token: "ghp_xxxxx"
protocol: "sse"
required: false
- name: "custom-tools"
url: "https://tools.example.com/mcp"
protocol: "streamable"
required: true
pool:
max_connections: 100
idle_timeout: 300
proxy:
http: "http://proxy.internal:8080"
https: "https://proxy.internal:8443"
no_proxy: "localhost,127.0.0.1,*.internal"
inventory:
enable_refresh: true
tool_ttl: 300
refresh_interval: 300
Enable automatic worker discovery via Kubernetes pod selectors:
python -m sglang_router.launch_router \
--service-discovery \
--selector app=sglang-worker role=inference \
--service-discovery-namespace production \
--service-discovery-port 8000
For PD deployments, use separate prefill/decode selectors:
python -m sglang_router.launch_router \
--pd-disaggregation \
--prefill-selector app=sglang component=prefill \
--decode-selector app=sglang component=decode \
--service-discovery
Prefill pods can expose bootstrap ports via the sglang.ai/bootstrap-port annotation. RBAC must allow get, list, and watch on pods.
Configure the Oracle ATP history backend via environment variables:
# Connection descriptor
export ATP_DSN="(description=(address=(protocol=tcps)(port=1522)(host=adb.region.oraclecloud.com))(connect_data=(service_name=service_name)))"
# Or TNS alias (requires wallet)
export ATP_TNS_ALIAS="sglroutertestatp_high"
export ATP_WALLET_PATH="/path/to/wallet"
# Credentials
export ATP_USER="admin"
export ATP_PASSWORD="secret"
export ATP_POOL_MIN=4
export ATP_POOL_MAX=32
python -m sglang_router.launch_router \
--backend openai \
--worker-urls https://api.openai.com \
--history-backend oracle
export POSTGRES_DB_URL="postgres://user:password@host:5432/dbname"
python -m sglang_router.launch_router \
--backend openai \
--worker-urls https://api.openai.com \
--history-backend postgres
export REDIS_URL="redis://localhost:6379"
export REDIS_POOL_MAX=16
export REDIS_RETENTION_DAYS=30
python -m sglang_router.launch_router \
--backend openai \
--worker-urls https://api.openai.com \
--history-backend redis \
--redis-retention-days 30
Use --redis-retention-days -1 for persistent storage (default is 30 days).
The gateway supports WebAssembly (WASM) middleware modules for custom request/response processing. This enables organization-specific logic for authentication, rate limiting, billing, logging, and more—without modifying or recompiling the gateway.
WASM middleware runs in a sandboxed environment with memory isolation, no network/filesystem access, and configurable resource limits.
<table style={{width: "100%", borderCollapse: "collapse", tableLayout: "fixed"}}> <colgroup> <col style={{width: "34%"}} /> <col style={{width: "33%"}} /> <col style={{width: "33%"}} /> </colgroup> <thead> <tr style={{borderBottom: "2px solid #d55816"}}> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Attach Point</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.05)"}}>When Executed</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Use Cases</th> </tr> </thead> <tbody> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`OnRequest`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>Before forwarding to workers</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Auth, rate limiting, request modification</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`OnResponse`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>After receiving worker response</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Logging, response modification, error handling</td> </tr> </tbody> </table> <table style={{width: "100%", borderCollapse: "collapse", tableLayout: "fixed"}}> <colgroup> <col style={{width: "50%"}} /> <col style={{width: "50%"}} /> </colgroup> <thead> <tr style={{borderBottom: "2px solid #d55816"}}> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Action</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: 
"rgba(255,255,255,0.05)"}}>Description</th> </tr> </thead> <tbody> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`Continue`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>Proceed without modification</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`Reject(status)`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>Reject request with HTTP status code</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`Modify(...)`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>Modify headers, body, or status</td> </tr> </tbody> </table>

Complete working examples are available in `examples/wasm/`.
The interface definition is located at src/wasm/interface.
# Prerequisites
rustup target add wasm32-wasip2
cargo install wasm-tools
# Build
cargo build --target wasm32-wasip2 --release
# Convert to component format
wasm-tools component new \
target/wasm32-wasip2/release/my_middleware.wasm \
-o my_middleware.component.wasm
# Enable WASM support
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--enable-wasm
# Upload module
curl -X POST http://localhost:30000/wasm \
-H "Content-Type: application/json" \
-d '{
"modules": [{
"name": "auth-middleware",
"file_path": "/absolute/path/to/auth.component.wasm",
"module_type": "Middleware",
"attach_points": [{"Middleware": "OnRequest"}]
}]
}'
# List modules
curl http://localhost:30000/wasm
# Remove module
curl -X DELETE http://localhost:30000/wasm/{module_uuid}
Note: Rate limiting state is per-worker thread and not shared across gateway replicas. For production, consider implementing rate limiting at a shared layer (e.g., Redis).
SGLang Model Gateway provides official language bindings for Python and Go, enabling integration with different technology stacks and organizational requirements.
The Python bindings provide a PyO3-based wrapper around the Rust gateway library. This is a straightforward binding that calls the gateway server startup from Python.
# From PyPI
pip install sglang-router
# Development build
cd sgl-model-gateway/bindings/python
pip install maturin && maturin develop --features vendored-openssl
The Python bindings are used throughout this documentation. See the Quick Start and Deployment Modes sections for detailed examples.
Key components:
- `RouterArgs` dataclass with 50+ configuration options
- `Router.from_args()` for programmatic startup
- CLI entry points: `smg launch`, `smg server`, and `python -m sglang_router.launch_router`

The Go bindings provide a high-performance gRPC client library for organizations with Go-based infrastructure.
+-------------------------------------------+
| High-Level Go API |
| (client.go - OpenAI-style interface) |
+-------------------------------------------+
| gRPC Layer |
+-------------------------------------------+
| Rust FFI Layer |
| (Tokenization, Parsing, Conversion) |
+-------------------------------------------+
Key Features: an OpenAI-style client API, streaming support, and native tokenization/parsing via the Rust FFI layer.
# Build the FFI library first
cd sgl-model-gateway/bindings/golang
make build && make lib
# Then use in your Go project
go get github.com/sgl-project/sgl-go-sdk
Requirements: Go 1.24+, Rust toolchain
Complete working examples are available in bindings/golang/examples/:
# Run examples
cd sgl-model-gateway/bindings/golang/examples/simple && ./run.sh
cd sgl-model-gateway/bindings/golang/examples/streaming && ./run.sh
cd sgl-model-gateway/bindings/golang/examples/oai_server && ./run.sh
cd sgl-model-gateway/bindings/golang
# Unit tests
go test -v ./...
# Integration tests (requires running SGLang server)
export SGL_GRPC_ENDPOINT=grpc://localhost:20000
export SGL_TOKENIZER_PATH=/path/to/tokenizer
go test -tags=integration -v ./...
When to Use Python: Launching and managing the gateway server, service discovery, PD disaggregation.
When to Use Go: Building custom client applications, integration with Go microservices, OpenAI-compatible proxy servers
python -m sglang_router.launch_router \
--api-key "your-router-api-key" \
--worker-urls http://worker1:8000
Clients must supply Authorization: Bearer <key> for protected endpoints.
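The bearer check clients must pass can be sketched in a few lines. `authorize` is an illustrative helper only; the real check happens inside the router:

```python
def authorize(headers, api_key):
    # Verify the `Authorization: Bearer <key>` scheme the gateway
    # expects on protected endpoints (sketch, not the router's code).
    value = headers.get("Authorization", "")
    return value == f"Bearer {api_key}"

print(authorize({"Authorization": "Bearer your-router-api-key"},
                "your-router-api-key"))  # True
print(authorize({}, "your-router-api-key"))  # False
```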
# Add worker with explicit key
curl -H "Authorization: Bearer router-key" \
-X POST http://localhost:8080/workers \
-H "Content-Type: application/json" \
-d '{"url":"http://worker:8000","api_key":"worker-key"}'
Enable TLS to serve the gateway over HTTPS:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--tls-cert-path /path/to/server.crt \
--tls-key-path /path/to/server.key
Both parameters must be provided together. The gateway uses rustls with the ring crypto provider for TLS termination. If TLS is not configured, the gateway falls back to plain HTTP.
Enable mutual TLS (mTLS) for secure communication with workers in HTTP mode:
python -m sglang_router.launch_router \
--worker-urls https://worker1:8443 https://worker2:8443 \
--client-cert-path /path/to/client.crt \
--client-key-path /path/to/client.key \
--ca-cert-path /path/to/ca.crt
Key Points:
- Worker mTLS requires `https://` worker URLs plus the `--client-cert-path`, `--client-key-path`, and `--ca-cert-path` flags

Gateway HTTPS + Worker mTLS + API key authentication:
python -m sglang_router.launch_router \
--worker-urls https://worker1:8443 https://worker2:8443 \
--tls-cert-path /etc/certs/server.crt \
--tls-key-path /etc/certs/server.key \
--client-cert-path /etc/certs/client.crt \
--client-key-path /etc/certs/client.key \
--ca-cert-path /etc/certs/ca.crt \
--api-key "secure-api-key" \
--policy cache_aware
Enable with --prometheus-host/--prometheus-port (defaults to 0.0.0.0:29000).
Latency histogram buckets: 1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 2.5s, 5s, 10s, 15s, 30s, 45s, 60s, 90s, 120s, 180s, 240s
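A small hypothetical helper shows how an observed latency maps onto these cumulative buckets (observations above the largest edge fall into the implicit +Inf bucket):

```python
# Bucket upper bounds in seconds, matching the list above.
BUCKETS_S = [0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5,
             1, 2.5, 5, 10, 15, 30, 45, 60, 90, 120, 180, 240]

def bucket_for(latency_s):
    # Return the first bucket upper bound covering the observation,
    # or None for the +Inf overflow bucket (illustrative sketch).
    for edge in BUCKETS_S:
        if latency_s <= edge:
            return edge
    return None

print(bucket_for(0.7))  # 1
print(bucket_for(300))  # None (+Inf bucket)
```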
Enable distributed tracing with OTLP export:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--enable-trace \
--otlp-traces-endpoint localhost:4317
Traces are exported with the service name `sgl-router`.
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--log-level debug \
--log-dir ./router_logs
Structured tracing with optional file sink. Log levels: debug, info, warn, error.
Customize which headers carry the request ID:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--request-id-headers x-request-id x-trace-id x-correlation-id
Responses include x-request-id header for correlation.
This section provides guidance for deploying SGLang Model Gateway in production environments.
Always enable TLS in production:
python -m sglang_router.launch_router \
--worker-urls https://worker1:8443 https://worker2:8443 \
--tls-cert-path /etc/certs/server.crt \
--tls-key-path /etc/certs/server.key \
--client-cert-path /etc/certs/client.crt \
--client-key-path /etc/certs/client.key \
--ca-cert-path /etc/certs/ca.crt \
--api-key "${ROUTER_API_KEY}"
Security Checklist:
- Set `--api-key` to protect router endpoints

Scaling Strategy:
The gateway supports running multiple replicas behind a load balancer for high availability. However, there are important considerations:
<table style={{width: "100%", borderCollapse: "collapse", tableLayout: "fixed"}}> <colgroup> <col style={{width: "34%"}} /> <col style={{width: "33%"}} /> <col style={{width: "33%"}} /> </colgroup> <thead> <tr style={{borderBottom: "2px solid #d55816"}}> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Component</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.05)"}}>Shared Across Replicas</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Impact</th> </tr> </thead> <tbody> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>Worker Registry</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>No (independent)</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Each replica discovers workers independently</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>Radix Cache Tree</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>No (independent)</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Cache hits may decrease by 10-20%</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>Circuit Breaker State</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>No (independent)</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Each replica tracks failures independently</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>Rate Limiting</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>No 
(independent)</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Limits apply per-replica, not globally</td> </tr> </tbody> </table>

Recommendations:
Prefer horizontal scaling over vertical scaling: Deploy multiple smaller gateway replicas rather than one large instance with excessive CPU and memory.
Use Kubernetes Service Discovery: Let the gateway automatically discover and manage workers:
python -m sglang_router.launch_router \
--service-discovery \
--selector app=sglang-worker \
--service-discovery-namespace production
Accept cache efficiency trade-off: With multiple replicas, the cache-aware routing policy's radix tree is not synchronized across replicas, so per-replica cache hit rates may decrease (typically 10-20%).
Configure session affinity (optional): If cache efficiency is critical, configure your load balancer for session affinity based on a consistent hash of the request (e.g., user ID or API key).
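Hash-based affinity can be sketched as follows (a simple modulo hash, not a full consistent-hash ring; `replica_for` is a hypothetical helper for a load balancer plugin, not part of the gateway):

```python
import hashlib

def replica_for(key, replicas):
    """Sketch: hash a stable request attribute (user ID, API key) so the
    same client always lands on the same gateway replica, preserving that
    replica's radix-cache locality."""
    digest = hashlib.sha256(key.encode()).digest()
    idx = int.from_bytes(digest[:8], "big") % len(replicas)
    return replicas[idx]

replicas = ["gateway-1", "gateway-2", "gateway-3"]
# The same key always maps to the same replica.
print(replica_for("user-42", replicas))
print(replica_for("user-42", replicas))
```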
Example HA Architecture:
+-------------------+
| Load Balancer |
| (L4/L7) |
+---------+---------+
|
+-------------------+-------------------+
| | |
v v v
+-----------+ +-----------+ +-----------+
| Gateway | | Gateway | | Gateway |
| Replica 1 | | Replica 2 | | Replica 3 |
+-----+-----+ +-----+-----+ +-----+-----+
| | |
+-------------------+-------------------+
|
+-------------------+-------------------+
| | |
v v v
+-----------+ +-----------+ +-----------+
| Worker | | Worker | | Worker |
| Pod 1 | | Pod 2 | | Pod N |
+-----------+ +-----------+ +-----------+
Use gRPC mode for high throughput:
gRPC mode provides the highest performance for SGLang workers:
# Start workers in gRPC mode
python -m sglang.launch_server \
--model meta-llama/Llama-3.1-8B-Instruct \
--grpc-mode \
--port 20000
# Configure gateway for gRPC
python -m sglang_router.launch_router \
--worker-urls grpc://worker1:20000 grpc://worker2:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--policy cache_aware
Performance Benefits of gRPC: avoids per-request HTTP parsing overhead and exposes the native reasoning and tool-call pipelines.
Tuning Recommendations:
<table style={{width: "100%", borderCollapse: "collapse", tableLayout: "fixed"}}> <colgroup> <col style={{width: "34%"}} /> <col style={{width: "33%"}} /> <col style={{width: "33%"}} /> </colgroup> <thead> <tr style={{borderBottom: "2px solid #d55816"}}> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Parameter</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.05)"}}>Recommendation</th> <th style={{textAlign: "left", padding: "10px 12px", fontWeight: 700, whiteSpace: "nowrap", backgroundColor: "rgba(255,255,255,0.02)"}}>Reason</th> </tr> </thead> <tbody> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`--policy`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>`cache_aware`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Best for repeated prompts, ~30% latency reduction</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`--max-concurrent-requests`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>2-4x worker count</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Prevent overload while maximizing throughput</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`--queue-size`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>2x max-concurrent</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Buffer for burst traffic</td> </tr> <tr> <td style={{padding: "9px 12px", fontWeight: 500, backgroundColor: "rgba(255,255,255,0.02)"}}>`--request-timeout-secs`</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.05)"}}>Based on max 
generation length</td> <td style={{padding: "9px 12px", backgroundColor: "rgba(255,255,255,0.02)"}}>Prevent stuck requests</td> </tr> </tbody> </table>

Pod Labeling for Service Discovery:
For the gateway to discover workers automatically, label your worker pods consistently:
```yaml
# Worker Deployment (Regular Mode)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sglang-worker
  namespace: production
spec:
  replicas: 4
  selector:
    matchLabels:
      app: sglang-worker
      component: inference
  template:
    metadata:
      labels:
        app: sglang-worker
        component: inference
        model: llama-3-8b
    spec:
      containers:
        - name: worker
          image: lmsysorg/sglang:latest
          ports:
            - containerPort: 8000
              name: http
            - containerPort: 20000
              name: grpc
```
Gateway configuration for discovery:
```bash
python -m sglang_router.launch_router \
  --service-discovery \
  --selector app=sglang-worker component=inference \
  --service-discovery-namespace production \
  --service-discovery-port 8000
```
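Under the hood, service discovery amounts to matching each pod's labels against the configured selector and deriving a worker URL from the pod IP and the discovery port. A minimal sketch of that matching logic (the pod dictionaries and `build_worker_urls` helper are illustrative, not the gateway's actual internals):

```python
def matches_selector(labels: dict, selector: dict) -> bool:
    """A pod matches when every selector key/value pair appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def build_worker_urls(pods: list, selector: dict, port: int) -> list:
    """Derive worker URLs for every running pod that satisfies the selector."""
    return [
        f"http://{pod['ip']}:{port}"
        for pod in pods
        if pod["phase"] == "Running" and matches_selector(pod["labels"], selector)
    ]

pods = [
    {"ip": "10.0.0.5", "phase": "Running",
     "labels": {"app": "sglang-worker", "component": "inference"}},
    {"ip": "10.0.0.6", "phase": "Pending",
     "labels": {"app": "sglang-worker", "component": "inference"}},
    {"ip": "10.0.0.7", "phase": "Running",
     "labels": {"app": "other-service"}},
]
selector = {"app": "sglang-worker", "component": "inference"}
print(build_worker_urls(pods, selector, 8000))  # only the first pod qualifies
```

Extra labels on a pod (such as `model: llama-3-8b` above) do not prevent a match; the selector only requires its own key/value pairs to be present.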
PD (Prefill/Decode) Mode Labeling:
```yaml
# Prefill Worker
metadata:
  labels:
    app: sglang-worker
    component: prefill
  annotations:
    sglang.ai/bootstrap-port: "9001"

# Decode Worker
metadata:
  labels:
    app: sglang-worker
    component: decode
```
Gateway configuration for PD discovery:
```bash
python -m sglang_router.launch_router \
  --service-discovery \
  --pd-disaggregation \
  --prefill-selector app=sglang-worker component=prefill \
  --decode-selector app=sglang-worker component=decode \
  --service-discovery-namespace production
```
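In PD mode the same discovery loop splits matching pods into prefill and decode pools based on the `component` label, and reads the prefill bootstrap port from the `sglang.ai/bootstrap-port` annotation. A hedged sketch of that classification step (the data shapes and function name are illustrative):

```python
def classify_pd_pods(pods: list, port: int = 8000):
    """Split discovered pods into prefill and decode pools.

    Prefill entries carry the bootstrap port parsed from the
    `sglang.ai/bootstrap-port` annotation; decode entries are plain URLs.
    """
    prefill, decode = [], []
    for pod in pods:
        labels = pod.get("labels", {})
        if labels.get("app") != "sglang-worker":
            continue
        url = f"http://{pod['ip']}:{port}"
        if labels.get("component") == "prefill":
            bootstrap = pod.get("annotations", {}).get("sglang.ai/bootstrap-port")
            prefill.append((url, int(bootstrap) if bootstrap else None))
        elif labels.get("component") == "decode":
            decode.append(url)
    return prefill, decode

pods = [
    {"ip": "10.0.1.2", "labels": {"app": "sglang-worker", "component": "prefill"},
     "annotations": {"sglang.ai/bootstrap-port": "9001"}},
    {"ip": "10.0.1.3", "labels": {"app": "sglang-worker", "component": "decode"}},
]
print(classify_pd_pods(pods))
```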
RBAC Requirements:
The gateway needs permissions to watch pods:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sglang-gateway
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sglang-gateway
  namespace: production
subjects:
  - kind: ServiceAccount
    name: sglang-gateway
    namespace: production
roleRef:
  kind: Role
  name: sglang-gateway
  apiGroup: rbac.authorization.k8s.io
```
Configure Prometheus to scrape the gateway metrics endpoint (default: :29000/metrics).
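A minimal scrape configuration, assuming the gateway is reachable at `gateway:29000` (adjust the target, job name, and interval to your deployment):

```yaml
scrape_configs:
  - job_name: sglang-gateway
    metrics_path: /metrics
    scrape_interval: 15s
    static_configs:
      - targets: ["gateway:29000"]
```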
Essential Dashboards:
1. Request Rate and Latency:
```promql
# Request rate by endpoint
sum(rate(smg_http_requests_total[5m])) by (path, method)

# P50 latency
histogram_quantile(0.50, sum(rate(smg_http_request_duration_seconds_bucket[5m])) by (le))

# P99 latency
histogram_quantile(0.99, sum(rate(smg_http_request_duration_seconds_bucket[5m])) by (le))

# Error rate
sum(rate(smg_http_responses_total{status=~"5.."}[5m])) / sum(rate(smg_http_responses_total[5m]))
```
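`histogram_quantile` estimates a quantile from cumulative bucket counts by linearly interpolating within the bucket that crosses the target rank. The arithmetic can be reproduced in a few lines; this sketch mirrors the Prometheus `_bucket{le=...}` shape but is a simplified illustration, not the Prometheus implementation:

```python
def histogram_quantile(q: float, buckets: list) -> float:
    """Estimate quantile q from a sorted list of (upper_bound, cumulative_count)."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            # Linear interpolation inside the bucket that crosses the rank.
            fraction = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * fraction
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# 100 requests: 50 finished under 0.1s, 90 under 0.5s, all under 2s.
buckets = [(0.1, 50), (0.5, 90), (2.0, 100)]
print(histogram_quantile(0.50, buckets))  # 0.1
print(histogram_quantile(0.99, buckets))  # 1.85
```

This is also why bucket boundaries matter: a quantile that lands in a wide bucket (here, 0.5s to 2s) is interpolated coarsely.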
2. Worker Health:
```promql
# Healthy workers
sum(smg_worker_pool_size)

# Active connections per worker
smg_worker_connections_active

# Worker health check failures
sum(rate(smg_worker_health_checks_total{result="failure"}[5m])) by (worker_id)
```
3. Circuit Breaker Status:
```promql
# Circuit breaker states (0=closed, 1=open, 2=half-open)
smg_worker_cb_state

# Circuit breaker transitions
sum(rate(smg_worker_cb_transitions_total[5m])) by (worker_id, from_state, to_state)

# Workers with open circuits
count(smg_worker_cb_state == 1)
```
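The state encoding above (0 = closed, 1 = open, 2 = half-open) follows the standard circuit-breaker pattern: consecutive failures open the circuit, a cooldown moves it to half-open, and a successful probe closes it again. A minimal sketch of that state machine (thresholds and method names are illustrative, not the gateway's internals):

```python
import time

CLOSED, OPEN, HALF_OPEN = 0, 1, 2

class CircuitBreaker:
    def __init__(self, failure_threshold=5, cooldown_secs=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_secs = cooldown_secs
        self.state = CLOSED
        self.failures = 0
        self.opened_at = 0.0

    def allow_request(self) -> bool:
        if self.state == OPEN:
            if time.monotonic() - self.opened_at >= self.cooldown_secs:
                self.state = HALF_OPEN  # cooldown elapsed: let a probe through
                return True
            return False  # short-circuit: worker is not tried at all
        return True  # CLOSED and HALF_OPEN both admit traffic

    def record_success(self):
        self.state = CLOSED
        self.failures = 0

    def record_failure(self):
        self.failures += 1
        if self.state == HALF_OPEN or self.failures >= self.failure_threshold:
            self.state = OPEN
            self.opened_at = time.monotonic()

cb = CircuitBreaker(failure_threshold=3)
for _ in range(3):
    cb.record_failure()
print(cb.state)            # 1 (open)
print(cb.allow_request())  # False until the cooldown elapses
```

A failed probe in the half-open state re-opens the circuit immediately, which is what the `from_state`/`to_state` transition counter surfaces.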
4. Inference Performance (gRPC mode):
```promql
# Time to first token (P50)
histogram_quantile(0.50, sum(rate(smg_router_ttft_seconds_bucket[5m])) by (le, model))

# Time per output token (P99)
histogram_quantile(0.99, sum(rate(smg_router_tpot_seconds_bucket[5m])) by (le, model))

# Token throughput
sum(rate(smg_router_tokens_total[5m])) by (model, direction)

# Generation duration P95
histogram_quantile(0.95, sum(rate(smg_router_generation_duration_seconds_bucket[5m])) by (le))
```
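TTFT and TPOT derive from per-token timestamps: TTFT is the gap between request start and the first token, and TPOT is the mean inter-token gap across the remaining tokens. A small sketch of that arithmetic (the timestamps and helper name are illustrative):

```python
def ttft_and_tpot(request_start: float, token_times: list):
    """Compute time-to-first-token and time-per-output-token, in seconds."""
    ttft = token_times[0] - request_start
    if len(token_times) < 2:
        return ttft, None  # TPOT is undefined with a single output token
    # Mean gap between consecutive tokens after the first.
    tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    return ttft, tpot

# Request starts at t=0.0; first token at 0.25s, then one token every 20 ms.
times = [0.25 + 0.02 * i for i in range(10)]
ttft, tpot = ttft_and_tpot(0.0, times)
print(f"TTFT={ttft:.3f}s TPOT={tpot:.3f}s")  # TTFT=0.250s TPOT=0.020s
```

TTFT is dominated by queueing plus prefill, while TPOT tracks decode speed, which is why the two are tracked as separate histograms.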
5. Rate Limiting and Queuing:
```promql
# Rate limit rejections
sum(rate(smg_http_rate_limit_total{decision="rejected"}[5m]))

# Queue depth (if using concurrency limiting)
smg_worker_requests_active

# Retry attempts
sum(rate(smg_worker_retries_total[5m])) by (worker_id)

# Exhausted retries (failures after all retries)
sum(rate(smg_worker_retries_exhausted_total[5m]))
```
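The rejection counter above reflects gateway-side rate limiting. A token-bucket sketch illustrates where `decision="rejected"` samples come from: each request consumes a token, the bucket refills at a fixed rate up to a burst cap, and requests that find the bucket empty are rejected (the class and parameters here are illustrative, not the gateway's implementation):

```python
class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate = rate          # tokens added per second
        self.capacity = burst     # maximum bucket size (burst allowance)
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # would be counted as decision="rejected"

bucket = TokenBucket(rate=10.0, burst=5)
# A burst of 8 requests at t=0: the first 5 pass, the rest are rejected.
decisions = [bucket.allow(0.0) for _ in range(8)]
print(decisions.count(True), decisions.count(False))  # 5 3
```

A sustained rejection rate means clients exceed the steady-state rate, not just the burst cap, and is a signal to add capacity or back-pressure callers.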
6. MCP Tool Execution:
```promql
# Tool call rate
sum(rate(smg_mcp_tool_calls_total[5m])) by (server, tool)

# Tool latency P95
histogram_quantile(0.95, sum(rate(smg_mcp_tool_duration_seconds_bucket[5m])) by (le, tool))

# Active MCP server connections
smg_mcp_servers_active
```
Alerting Rules Example:
```yaml
groups:
  - name: sglang-gateway
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(smg_http_responses_total{status=~"5.."}[5m]))
            / sum(rate(smg_http_responses_total[5m])) > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High error rate on SGLang Gateway"
      - alert: CircuitBreakerOpen
        expr: count(smg_worker_cb_state == 1) > 0
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Worker circuit breaker is open"
      - alert: HighLatency
        expr: |
          histogram_quantile(0.99, sum(rate(smg_http_request_duration_seconds_bucket[5m])) by (le)) > 30
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "P99 latency exceeds 30 seconds"
      - alert: NoHealthyWorkers
        expr: sum(smg_worker_pool_size) == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "No healthy workers available"
```
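The `for:` clause means an alert only fires after its expression has held continuously for the given duration; a breach that clears in between resets the clock. A small sketch of that pending-to-firing logic (the sample series and function are illustrative):

```python
def alert_state(samples: list, threshold: float, for_secs: float) -> str:
    """Return 'firing' if value > threshold continuously for `for_secs`,
    'pending' if currently above but not yet long enough, else 'inactive'.

    `samples` is a list of (timestamp, value) pairs in ascending order.
    """
    breach_start = None
    state = "inactive"
    for ts, value in samples:
        if value > threshold:
            breach_start = ts if breach_start is None else breach_start
            state = "firing" if ts - breach_start >= for_secs else "pending"
        else:
            breach_start = None  # breach cleared: the for-clock resets
            state = "inactive"
    return state

# Error ratio sampled every 60s against the 5% threshold, with for: 5m.
samples = [(0, 0.01), (60, 0.08), (120, 0.09), (180, 0.07),
           (240, 0.08), (300, 0.09), (360, 0.08)]
print(alert_state(samples, threshold=0.05, for_secs=300))  # firing
```

Choosing `for:` is a trade-off: longer windows suppress flapping at the cost of slower notification, which is why `NoHealthyWorkers` uses 1m while the latency alert uses 5m.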
Troubleshooting:

- Workers never become healthy: Increase `--worker-startup-timeout-secs` or ensure health probes respond before router startup.
- Uneven load across workers: Inspect `smg_router_requests_total` by worker and tune the cache-aware thresholds (`--balance-*`, `--cache-threshold`).
- Circuit breakers trip too frequently: Increase `--cb-failure-threshold` or extend the timeout/window durations. Consider temporarily disabling retries.
- Requests rejected under load: Increase `--queue-size` or reduce client concurrency. Ensure `--max-concurrent-requests` matches downstream capacity.
- Memory growth from the cache tree: Reduce `--max-tree-size` or lower `--eviction-interval-secs` for more aggressive cache pruning.
To dig deeper, enable debug logging while reproducing the issue:

```bash
python -m sglang_router.launch_router \
  --worker-urls http://worker1:8000 \
  --log-level debug \
  --log-dir ./router_logs
```
- gRPC routing fails: Ensure workers are started with `--grpc-mode` and verify that `--model-path` or `--tokenizer-path` is provided to the router.
- Tokenizer fails to load: Check HuggingFace Hub credentials (the `HF_TOKEN` environment variable) for private models, and verify that local paths are accessible.
SGLang Model Gateway continues to evolve alongside the SGLang runtime. Keep CLI flags, integrations, and documentation aligned when adopting new features or contributing improvements.