This model was released on 2025-06-06 and added to Hugging Face Transformers on 2025-06-25.
# dots.llm1
dots.llm1 is a 142B-parameter mixture-of-experts model that activates 14B parameters per token, using top-6-of-128 routed experts plus 2 shared experts. It delivers performance on par with Qwen2.5-72B while significantly reducing training and inference costs. Notably, no synthetic data was used during pretraining.
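To make the routing concrete, here is a minimal sketch of top-k expert selection with always-on shared experts, in the spirit of the description above. The toy dimensions, plain `nn.Linear` experts, and softmax-then-top-k gating are illustrative assumptions, not the actual Dots1 implementation:

```python
import torch
import torch.nn as nn

hidden_size, n_routed, top_k, n_shared = 64, 128, 6, 2  # toy hidden size; real expert counts

gate = nn.Linear(hidden_size, n_routed, bias=False)  # router scores every routed expert
routed_experts = nn.ModuleList(nn.Linear(hidden_size, hidden_size) for _ in range(n_routed))
shared_experts = nn.ModuleList(nn.Linear(hidden_size, hidden_size) for _ in range(n_shared))

x = torch.randn(1, hidden_size)                # one token's hidden state
scores = gate(x).softmax(dim=-1)               # routing probabilities over the 128 experts
weights, indices = scores.topk(top_k, dim=-1)  # keep the top 6 experts for this token

# Only the 6 selected routed experts run, weighted by their routing scores...
out = sum(w * routed_experts[int(i)](x) for w, i in zip(weights[0], indices[0]))
# ...while the 2 shared experts process every token unconditionally.
out = out + sum(expert(x) for expert in shared_experts)
```

Per token, only the 6 selected routed experts plus the 2 shared ones run, which is how 14B of the 142B parameters are active at a time.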
The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModelForCausalLM`] class.
```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="rednote-hilab/dots.llm1.base",
)
pipe("The advantage of mixture-of-experts models is")
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rednote-hilab/dots.llm1.base")
model = AutoModelForCausalLM.from_pretrained(
    "rednote-hilab/dots.llm1.base",
    device_map="auto",  # shard the 142B parameters across available devices
)

input_ids = tokenizer("The advantage of mixture-of-experts models is", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
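At 142B parameters, the full-precision checkpoint will not fit on a single common GPU, so the weights are typically sharded (as `device_map="auto"` does above) or quantized. Below is a minimal sketch using bitsandbytes 4-bit quantization; the specific settings are illustrative assumptions, not recommendations from the model authors:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit setup; requires the bitsandbytes package.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "rednote-hilab/dots.llm1.base",
    device_map="auto",
    quantization_config=quantization_config,
)
```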
## Dots1Config

[[autodoc]] Dots1Config

## Dots1Model

[[autodoc]] Dots1Model
    - forward

## Dots1ForCausalLM

[[autodoc]] Dots1ForCausalLM
    - forward