docs/source/en/model_doc/glm_moe_dsa.md
This model was released on 2026-02-17 and added to Hugging Face Transformers on 2026-02-09.
GlmMoeDsa (GLM-5) is a 744B-parameter mixture-of-experts model with 40B active parameters per token, using DeepSeek Sparse Attention (DSA) for efficient 200K-token context handling. It was trained entirely on Huawei Ascend chips and matches frontier-level performance on reasoning and long-context benchmarks.
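To check which of these architecture hyperparameters a given checkpoint actually uses, you can load just its configuration. The snippet below is a minimal sketch that only assumes the `zai-org/GLM-5` checkpoint used in the examples that follow.

```python
from transformers import AutoConfig

# Download only the configuration (no weights) and print it to inspect the
# MoE and sparse-attention settings of this checkpoint.
config = AutoConfig.from_pretrained("zai-org/GLM-5")
print(config)
```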
The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModelForCausalLM`] class.
```python
from transformers import pipeline

# High-level API: the pipeline handles tokenization, generation, and decoding.
pipe = pipeline(
    task="text-generation",
    model="zai-org/GLM-5",
)
pipe("The theory of relativity states that")
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-5")
model = AutoModelForCausalLM.from_pretrained(
    "zai-org/GLM-5",
    device_map="auto",
)

# Tokenize the prompt, move it to the model's device, and generate a completion.
inputs = tokenizer("The theory of relativity states that", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
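For chat-style prompts, the prompt string can be built with the tokenizer's chat template before calling `generate`. The sketch below assumes the `zai-org/GLM-5` checkpoint ships a chat template, which this page does not confirm.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-5")

# Format a chat-style message list into a single prompt string
# (assumes the checkpoint provides a chat template).
messages = [{"role": "user", "content": "Explain the theory of relativity in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```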
[[autodoc]] GlmMoeDsaConfig

[[autodoc]] GlmMoeDsaModel
    - forward

[[autodoc]] GlmMoeDsaForCausalLM
    - forward