import { ZImageTurboDeployment } from '/src/snippets/diffusion/zimage-turbo-deployment.jsx';
Z-Image is a powerful and highly efficient image generation model family with 6B parameters, developed by Tongyi-MAI. It adopts a Scalable Single-Stream DiT (S3-DiT) architecture, where text, visual semantic tokens, and image VAE tokens are concatenated at the sequence level to serve as a unified input stream, maximizing parameter efficiency compared to dual-stream approaches.
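The single-stream idea can be illustrated with toy shape arithmetic (a purely illustrative sketch, not the actual model code; all dimensions are made up):

```python
HIDDEN = 64  # hypothetical hidden dimension

# Toy token streams for one sample: lists of hidden-state vectors.
text_tokens = [[0.0] * HIDDEN for _ in range(77)]      # text encoder output
semantic_tokens = [[0.0] * HIDDEN for _ in range(32)]  # visual semantic tokens
vae_tokens = [[0.0] * HIDDEN for _ in range(256)]      # image VAE latent tokens

# S3-DiT-style single stream: concatenate along the sequence axis so one
# shared set of transformer weights attends over all modalities jointly,
# instead of keeping separate text and image branches as dual-stream DiTs do.
stream = text_tokens + semantic_tokens + vae_tokens
print(len(stream))  # one unified input sequence of 365 tokens
```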
Z-Image-Turbo is a distilled version of Z-Image that matches or exceeds leading competitors with only 8 NFEs (Number of Function Evaluations). It is powered by two core techniques: Decoupled-DMD (few-step distillation) and DMDR (fusing DMD with Reinforcement Learning).
For the full list of key features and more details, please refer to the Z-Image-Turbo HuggingFace page, the GitHub repository, and the technical report (arXiv).
SGLang-diffusion offers multiple installation methods; choose the one that best fits your hardware platform and requirements. Please refer to the official SGLang-diffusion installation guide for instructions.
This section provides deployment configurations optimized for different hardware platforms and use cases.
Z-Image-Turbo is optimized for high-quality image generation with only 8 inference steps. The recommended launch configurations vary by hardware.
Interactive Command Generator: Use the configuration selector below to automatically generate the appropriate deployment command for your hardware platform.
<ZImageTurboDeployment />

All currently supported optimization options are listed here:
- `--vae-path`: Path to a custom VAE model or HuggingFace model ID (e.g., `fal/FLUX.2-Tiny-AutoEncoder`). If not specified, the VAE is loaded from the main model path.
- `--num-gpus`: Number of GPUs to use.
- `--tp-size`: Tensor parallelism size (applies only to the text encoder; keep it at 1 when text encoder offload is enabled, since layer-wise offload plus prefetch is faster).
- `--sp-degree`: Sequence parallelism size (typically should match the number of GPUs).
- `--ulysses-degree`: The degree of DeepSpeed-Ulysses-style SP in USP.
- `--ring-degree`: The degree of ring-attention-style SP in USP.

AMD ROCm Notes: Requires SGLang >= v0.5.8.
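As a sketch, a multi-GPU launch might combine the parallelism flags above as follows (the values shown are illustrative for a 4-GPU node, not tuned recommendations; note that the Ulysses and ring degrees multiply to the sequence-parallel degree):

```shell
sglang serve --model-path Tongyi-MAI/Z-Image-Turbo \
  --num-gpus 4 \
  --sp-degree 4 \
  --ulysses-degree 4 --ring-degree 1 \
  --port 30000
```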
For complete API documentation, please refer to the official API usage guide.
```python
import base64

from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:30000/v1")

response = client.images.generate(
    model="Tongyi-MAI/Z-Image-Turbo",
    prompt="A logo With Bold Large text: SGL Diffusion",
    n=1,
    response_format="b64_json",
)

# Save the generated image
image_bytes = base64.b64decode(response.data[0].b64_json)
with open("output.png", "wb") as f:
    f.write(image_bytes)
```
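When requesting more than one image (`n > 1`), the decode-and-save step generalizes naturally. `save_b64_images` below is a hypothetical helper, not part of the OpenAI SDK:

```python
import base64
from pathlib import Path


def save_b64_images(b64_strings, prefix="output"):
    """Decode base64-encoded images and write them as numbered PNG files."""
    paths = []
    for i, b64 in enumerate(b64_strings):
        path = Path(f"{prefix}_{i}.png")
        path.write_bytes(base64.b64decode(b64))
        paths.append(path)
    return paths


# Usage with a response object:
# save_b64_images([img.b64_json for img in response.data])
```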
SGLang integrates Cache-DiT, a caching acceleration engine for Diffusion Transformers (DiT), to achieve up to a 7.4x inference speedup with minimal quality loss. Set `SGLANG_CACHE_DIT_ENABLED=true` to enable it. For more details, please refer to the SGLang Cache-DiT documentation.
Basic Usage
```shell
SGLANG_CACHE_DIT_ENABLED=true sglang serve --model-path Tongyi-MAI/Z-Image-Turbo
```
Advanced Usage
Combined Configuration Example:
```shell
SGLANG_CACHE_DIT_ENABLED=true \
SGLANG_CACHE_DIT_FN=2 \
SGLANG_CACHE_DIT_BN=1 \
SGLANG_CACHE_DIT_WARMUP=4 \
SGLANG_CACHE_DIT_RDT=0.4 \
SGLANG_CACHE_DIT_MC=4 \
SGLANG_CACHE_DIT_TAYLORSEER=true \
SGLANG_CACHE_DIT_TS_ORDER=2 \
sglang serve --model-path Tongyi-MAI/Z-Image-Turbo
```
- `--dit-cpu-offload`: Use CPU offload for DiT inference. Enable this if you run out of memory.
- `--text-encoder-cpu-offload`: Use CPU offload for text encoder inference.
- `--vae-cpu-offload`: Use CPU offload for the VAE.
- `--pin-cpu-memory`: Pin memory for CPU offload. Add this only as a temporary workaround if you hit "CUDA error: invalid argument".

Test Environment:
Server Command:
```shell
sglang serve --model-path Tongyi-MAI/Z-Image-Turbo \
  --ulysses-degree=1 --ring-degree=1 --port 30000
```
Benchmark Command:
```shell
python3 -m sglang.multimodal_gen.benchmarks.bench_serving \
  --backend sglang-image --dataset vbench --task text-to-image \
  --num-prompts 1 --max-concurrency 1
```
Result:
```
================= Serving Benchmark Result =================
Task: text-to-image
Model: Tongyi-MAI/Z-Image-Turbo
Dataset: vbench
--------------------------------------------------
Benchmark duration (s):       1.84
Request rate:                 inf
Max request concurrency:      1
Successful requests:          1/1
--------------------------------------------------
Request throughput (req/s):   0.54
Latency Mean (s):             1.8435
Latency Median (s):           1.8435
Latency P99 (s):              1.8435
--------------------------------------------------
Peak Memory Max (MB):         30689.20
Peak Memory Mean (MB):        30689.20
Peak Memory Median (MB):      30689.20
============================================================
```
Benchmark Command:
```shell
python3 -m sglang.multimodal_gen.benchmarks.bench_serving \
  --backend sglang-image --dataset vbench --task text-to-image \
  --num-prompts 20 --max-concurrency 20
```
Result:
```
================= Serving Benchmark Result =================
Task: text-to-image
Model: Tongyi-MAI/Z-Image-Turbo
Dataset: vbench
--------------------------------------------------
Benchmark duration (s):       35.32
Request rate:                 inf
Max request concurrency:      20
Successful requests:          20/20
--------------------------------------------------
Request throughput (req/s):   0.57
Latency Mean (s):             18.5672
Latency Median (s):           18.5573
Latency P99 (s):              34.9880
--------------------------------------------------
Peak Memory Max (MB):         30689.26
Peak Memory Mean (MB):        30689.21
Peak Memory Median (MB):      30689.21
============================================================
```