*This model was released on {release_date} and added to Hugging Face Transformers on 2026-02-25.*
# OLMo Hybrid
OLMo Hybrid is a hybrid-architecture model from Ai2 that combines standard transformer attention layers with Gated DeltaNet linear attention layers. By interleaving full attention with linear attention, the design aims to improve efficiency while maintaining model quality.
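The linear attention layers follow the gated delta rule from Gated DeltaNet. As a rough illustration of what each such layer computes per token, here is a naive, unfused sketch of the recurrence; the real model uses optimized kernels, and the shapes, key normalization, and gating parameterization below are illustrative assumptions rather than the exact OLMo Hybrid layer.

```python
import torch
import torch.nn.functional as F

def gated_delta_rule(q, k, v, alpha, beta):
    """Naive per-token reference for the gated delta rule.

    q, k: (seq_len, d_k); v: (seq_len, d_v)
    alpha: (seq_len,) decay gate in (0, 1); beta: (seq_len,) write strength in (0, 1)
    The fast-weight state S (d_v, d_k) is updated once per token:
        S_t = alpha_t * S_{t-1} @ (I - beta_t * k_t k_t^T) + beta_t * v_t k_t^T
        o_t = S_t @ q_t
    """
    seq_len, d_k = k.shape
    d_v = v.shape[1]
    S = torch.zeros(d_v, d_k)
    eye = torch.eye(d_k)
    outputs = []
    for t in range(seq_len):
        # Erase the old association along k_t, decay by alpha_t, then write v_t k_t^T.
        S = alpha[t] * S @ (eye - beta[t] * torch.outer(k[t], k[t])) + beta[t] * torch.outer(v[t], k[t])
        outputs.append(S @ q[t])
    return torch.stack(outputs)

# Toy usage: L2-normalized keys and random gates.
T, d_k, d_v = 8, 4, 4
out = gated_delta_rule(torch.randn(T, d_k), F.normalize(torch.randn(T, d_k), dim=-1),
                       torch.randn(T, d_v), torch.rand(T), torch.rand(T))
print(out.shape)  # torch.Size([8, 4])
```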
> [!TIP]
> For optimal performance, install the flash-linear-attention library. The model works without it via a PyTorch fallback, but FLA provides significant speedups for the linear attention layers.
The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.
<hfoptions id="usage">
<hfoption id="Pipeline">

```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="allenai/OLMo-Hybrid-7B",
    device=0,
)
result = pipe("Plants create energy through a process known as")
print(result)
```
</hfoption>
<hfoption id="AutoModel">
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "allenai/OLMo-Hybrid-7B"
)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-Hybrid-7B",
    device_map="auto",
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

</hfoption>
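<hfoption id="transformers CLI">

A minimal command-line sketch following the usual pattern in these docs; the flags mirror the Python examples above.

```bash
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model allenai/OLMo-Hybrid-7B --device 0
```

</hfoption>
</hfoptions>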
Install flash-linear-attention to enable the optimized kernels mentioned in the tip above.

```bash
pip install flash-linear-attention
```
The model uses a hybrid cache ([`OlmoHybridDynamicCache`]) that handles both the KV cache for the attention layers and the recurrent state for the linear attention layers.
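To illustrate the idea (this is a conceptual sketch, not the actual `OlmoHybridDynamicCache` implementation), a hybrid cache keeps two kinds of per-layer state: growing key/value tensors for full attention layers and a fixed-size recurrent state for linear attention layers. The `layer_types` labels below are assumptions for the sketch.

```python
import torch

class ToyHybridCache:
    """Conceptual hybrid cache: K/V tensors grow with the sequence for
    full attention layers; linear attention layers keep a constant-size state."""

    def __init__(self, layer_types):
        self.layer_types = layer_types  # e.g. ["full_attention", "linear_attention", ...]
        self.kv = {}      # layer_idx -> (keys, values), each (batch, seq_len, dim)
        self.state = {}   # layer_idx -> recurrent state, e.g. (batch, d_v, d_k)

    def update(self, layer_idx, **tensors):
        if self.layer_types[layer_idx] == "full_attention":
            k, v = tensors["key"], tensors["value"]
            if layer_idx in self.kv:
                # Append the new tokens along the sequence axis.
                old_k, old_v = self.kv[layer_idx]
                k, v = torch.cat([old_k, k], dim=1), torch.cat([old_v, v], dim=1)
            self.kv[layer_idx] = (k, v)
            return self.kv[layer_idx]
        # Linear attention: overwrite the constant-size recurrent state.
        self.state[layer_idx] = tensors["state"]
        return self.state[layer_idx]
```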
## OlmoHybridConfig

[[autodoc]] OlmoHybridConfig

## OlmoHybridModel

[[autodoc]] OlmoHybridModel
    - forward
## OlmoHybridForCausalLM

[[autodoc]] OlmoHybridForCausalLM
    - forward