This model was released on {release_date} and added to Hugging Face Transformers on 2025-08-22.

# HunYuanMoEV1
HunYuanMoEV1 is Tencent's mixture-of-experts language model with 80B total parameters and 13B active parameters per token. It uses fine-grained expert routing with Grouped Query Attention, supports 256K context length, and offers dual-mode reasoning (fast and slow thinking).
The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModelForCausalLM`] class.
```py
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="tencent/Hunyuan-A13B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
pipe("The future of artificial intelligence is")
```

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tencent/Hunyuan-A13B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "tencent/Hunyuan-A13B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

inputs = tokenizer("The future of artificial intelligence is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
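
To try the dual-mode reasoning mentioned above, pass the mode toggle through the chat template. The sketch below is a minimal example, assuming the checkpoint's chat template reads an `enable_thinking` flag (extra keyword arguments to `apply_chat_template` are forwarded into the template, and unused ones are ignored); check the model card for the exact switch this checkpoint expects.

```py
# Minimal sketch of slow-thinking generation. `enable_thinking` is an
# assumption about this checkpoint's chat template, not a Transformers API;
# verify the toggle name against the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tencent/Hunyuan-A13B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "tencent/Hunyuan-A13B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "How many prime numbers are there below 100?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,  # assumption: True = slow thinking, False = fast answers
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```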
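
With 80B total parameters, the full-precision checkpoint exceeds the memory of most single GPUs even though only 13B parameters are active per token. The sketch below loads the model in 4-bit with bitsandbytes (assumes the `bitsandbytes` package is installed); other quantization backends plug in the same way through `quantization_config`.

```py
# Minimal 4-bit loading sketch with bitsandbytes; quantizes the weights to
# NF4 and keeps compute in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("tencent/Hunyuan-A13B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "tencent/Hunyuan-A13B-Instruct",
    device_map="auto",
    quantization_config=quantization_config,
)
```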
## HunYuanMoEV1Config

[[autodoc]] HunYuanMoEV1Config

## HunYuanMoEV1Model

[[autodoc]] HunYuanMoEV1Model
    - forward

## HunYuanMoEV1ForCausalLM

[[autodoc]] HunYuanMoEV1ForCausalLM
    - forward

## HunYuanMoEV1ForSequenceClassification

[[autodoc]] HunYuanMoEV1ForSequenceClassification
    - forward