This model was released on {release_date} and added to Hugging Face Transformers on 2025-08-22.
SeedOss is ByteDance Seed's 36B-parameter dense language model with a native 512K-token context length. Trained on 12T tokens, it offers flexible thinking-budget control and strong reasoning and agentic capabilities.
The examples below demonstrate how to generate text with [`Pipeline`] or the [`AutoModelForCausalLM`] class.
```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="ByteDance-Seed/Seed-OSS-36B-Base",
)
pipe("The most important factor in language model training is")
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ByteDance-Seed/Seed-OSS-36B-Base")
model = AutoModelForCausalLM.from_pretrained(
    "ByteDance-Seed/Seed-OSS-36B-Base",
    device_map="auto",
)

input_ids = tokenizer("The most important factor in language model training is", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
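The 512K context window bounds the prompt and the generated tokens together, so a long prompt leaves less room for `max_new_tokens`. As a minimal sketch, assuming 512K means 512 * 1024 = 524,288 tokens (the `cap_new_tokens` helper below is illustrative, not part of Transformers), you can clamp the generation budget so it never overruns the window:

```python
# Assumed native context length of Seed-OSS: 512K = 524,288 tokens.
CONTEXT_LEN = 512 * 1024

def cap_new_tokens(prompt_len: int, requested: int, context_len: int = CONTEXT_LEN) -> int:
    """Clamp the number of new tokens so prompt + generation fits in the context window."""
    return max(0, min(requested, context_len - prompt_len))

# e.g. pass cap_new_tokens(input_ids["input_ids"].shape[1], 50) as max_new_tokens
```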
## SeedOssConfig

[[autodoc]] SeedOssConfig

## SeedOssModel

[[autodoc]] SeedOssModel
    - forward

## SeedOssForCausalLM

[[autodoc]] SeedOssForCausalLM
    - forward

## SeedOssForSequenceClassification

[[autodoc]] SeedOssForSequenceClassification
    - forward

## SeedOssForTokenClassification

[[autodoc]] SeedOssForTokenClassification
    - forward

## SeedOssForQuestionAnswering

[[autodoc]] SeedOssForQuestionAnswering
    - forward