This model was released on 2025-12-15 and added to Hugging Face Transformers on 2026-03-02.
# NemotronH
NemotronH is a hybrid architecture combining attention and state-space layers for efficient long-context language modeling. It interleaves Mamba2 and transformer blocks, using a fixed ratio to balance expressiveness with linear-time sequence processing.
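The fixed-ratio interleaving is easy to picture. The sketch below is purely illustrative and is not the NemotronH implementation: the `mamba_per_attention` ratio and the block labels are hypothetical, and the real layer layout comes from the model's configuration rather than being computed like this.

```python
# Illustrative only: a hypothetical fixed-ratio layer layout, not the actual NemotronH code.
def build_layer_pattern(num_layers: int, mamba_per_attention: int = 7) -> list[str]:
    """Interleave Mamba2-style state-space blocks with attention blocks at a fixed ratio."""
    pattern = []
    for i in range(num_layers):
        # Every (mamba_per_attention + 1)-th block is attention; the rest are Mamba2.
        if (i + 1) % (mamba_per_attention + 1) == 0:
            pattern.append("attention")
        else:
            pattern.append("mamba2")
    return pattern

print(build_layer_pattern(16))
# blocks 8 and 16 are "attention"; the other 14 are "mamba2"
```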
The examples below demonstrate how to generate text with [`Pipeline`] or the [`AutoModelForCausalLM`] class.
```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="nvidia/Nemotron-H-8B-Reasoning-128K",
)
pipe("Plants create energy through a process known as")
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Nemotron-H-8B-Reasoning-128K")
model = AutoModelForCausalLM.from_pretrained(
    "nvidia/Nemotron-H-8B-Reasoning-128K",
    device_map="auto",
)

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## NemotronHConfig

[[autodoc]] NemotronHConfig

## NemotronHModel

[[autodoc]] NemotronHModel
    - forward

## NemotronHForCausalLM

[[autodoc]] NemotronHForCausalLM
    - forward