# Glm4Moe
This model was released on 2025-07-28 and added to Hugging Face Transformers on 2025-07-21.
The GLM-4.7, GLM-4.6, and GLM-4.5 language models use this class. The implementation in Transformers does not include an MTP (multi-token prediction) layer.
GLM-4.7, your new coding partner, comes with the following features:
More generally, you will also see significant improvements in many other scenarios such as chat, creative writing, and role-play.
**Interleaved Thinking & Preserved Thinking**
GLM-4.7 further enhances Interleaved Thinking (a feature introduced in GLM-4.5) and introduces Preserved Thinking and Turn-level Thinking. By thinking between actions and staying consistent across turns, it makes complex tasks more stable and more controllable.
More details: https://docs.z.ai/guides/capabilities/thinking-mode
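When a thinking-mode model responds, downstream code often needs to separate the reasoning span from the final answer. The sketch below assumes the model wraps its reasoning in `<think>...</think>` tags; the exact tag format is checkpoint-specific, so verify it against the tokenizer's chat template.

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split generated text into (thinking, answer), assuming <think>...</think> tags."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No thinking span found, e.g. output produced in non-thinking mode.
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

thinking, answer = split_thinking("<think>12 * 12 = 144.</think>The answer is 144.")
print(answer)  # The answer is 144.
```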
For more eval results, showcases, and technical details, please visit the GLM-4.7 technical blog.
Compared with GLM-4.5, GLM-4.6 brings several key improvements.
We evaluated GLM-4.6 across eight public benchmarks covering agents, reasoning, and coding. Results show clear gains over GLM-4.5, with GLM-4.6 also holding competitive advantages over leading domestic and international models such as DeepSeek-V3.1-Terminus and Claude Sonnet 4.
For more eval results, showcases, and technical details, please visit the GLM-4.6 technical blog.
The GLM-4.5 series models are foundation models designed for intelligent agents; the MoE variants are documented here as Glm4Moe.
GLM-4.5 has 355 billion total parameters with 32 billion active parameters, while GLM-4.5-Air adopts a more compact design with 106 billion total parameters and 12 billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.
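As a minimal sketch of switching between the two modes when building prompts: the checkpoint name and the `enable_thinking` switch below are assumptions, so check the checkpoint's chat template for the exact control it exposes.

```python
from transformers import AutoTokenizer

# Assumed checkpoint name, for illustration only.
tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-4.5-Air")
messages = [{"role": "user", "content": "Solve 37 * 43 step by step."}]

# Thinking mode: the default rendering of the chat template.
thinking_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Non-thinking mode: `enable_thinking=False` is an assumed template switch;
# extra kwargs are forwarded to the Jinja template and ignored if unused.
direct_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
```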
We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of 63.2, placing 3rd among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at 59.8 while maintaining superior efficiency.
For more eval results, showcases, and technical details, please visit our technical report or technical blog.
The model code, tool parser, and reasoning parser can be found in the transformers, vLLM, and SGLang implementations.
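A minimal end-to-end generation sketch with the Transformers API follows; the checkpoint name and generation settings are illustrative assumptions rather than recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.5-Air"  # illustrative checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the dtype stored in the checkpoint
    device_map="auto",   # spread the MoE weights across available devices
)

messages = [{"role": "user", "content": "Write a haiku about mixture-of-experts models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```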
## Glm4MoeConfig

[[autodoc]] Glm4MoeConfig
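A randomly initialized model can be built directly from the configuration. The override values below are arbitrary and far smaller than any released checkpoint; they are only meant to sketch the config-to-model flow without exhausting memory.

```python
from transformers import Glm4MoeConfig, Glm4MoeModel

# Deliberately tiny, arbitrary hyperparameters for illustration only.
config = Glm4MoeConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=2,
)

# Randomly initialized model with the above architecture.
model = Glm4MoeModel(config)
print(sum(p.numel() for p in model.parameters()))
```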
## Glm4MoeModel

[[autodoc]] Glm4MoeModel
    - forward
## Glm4MoeForCausalLM

[[autodoc]] Glm4MoeForCausalLM
    - forward