docs/source/en/model_doc/qwen3.md
This model was released on 2025-04-29 and added to Hugging Face Transformers on 2025-03-31.
# Qwen3
Qwen3 is the dense model architecture in the Qwen3 family, available in sizes from 0.6B to 32B parameters. It supports both thinking mode (multi-step reasoning) and non-thinking mode, with seamless switching between the two. Qwen3 was trained on approximately 36T tokens covering 119 languages. See also the MoE variant Qwen3MoE.
The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModelForCausalLM`] class.
```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="Qwen/Qwen3-0.6B",
)
pipe("The key to effective reasoning is")
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B",
    device_map="auto",
)

input_ids = tokenizer("The key to effective reasoning is", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
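Switching between thinking and non-thinking mode happens through the chat template. The sketch below assumes the `Qwen/Qwen3-0.6B` chat template exposes an `enable_thinking` flag (as described on the model card); extra keyword arguments passed to `apply_chat_template` are forwarded to the template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain chain-of-thought prompting."}]

# enable_thinking=True (the default) lets the model emit a <think>...</think>
# reasoning block before the answer; set it to False for non-thinking mode.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```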
## Qwen3Config

[[autodoc]] Qwen3Config

## Qwen3Model

[[autodoc]] Qwen3Model
    - forward

## Qwen3ForCausalLM

[[autodoc]] Qwen3ForCausalLM
    - forward

## Qwen3ForSequenceClassification

[[autodoc]] Qwen3ForSequenceClassification
    - forward

## Qwen3ForTokenClassification

[[autodoc]] Qwen3ForTokenClassification
    - forward

## Qwen3ForQuestionAnswering

[[autodoc]] Qwen3ForQuestionAnswering
    - forward