docs/source/en/model_doc/afmoe.md
This model was released on {release_date} and added to Hugging Face Transformers on 2025-11-29.
AFMoE (Arcee Foundational Mixture of Experts) is a decoder-only transformer model that extends the Llama architecture with a sparse Mixture of Experts (MoE) approach. The model combines token-choice routing with shared experts and employs several architectural innovations for efficient inference and improved performance.
AFMoE introduces several key modifications to the standard transformer architecture:

- sparse MoE layers that combine token-choice routing with always-active shared experts
- a hybrid attention pattern that alternates between local and global attention layers
- Q/K normalization and output gating in every attention layer

These components are described in more detail below.
The model supports extended context lengths with RoPE embeddings, and it includes standard Transformers features such as Flash Attention 2, SDPA, gradient checkpointing, and quantization.
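As a minimal illustration of those load-time options (the attention backend and the bfloat16 dtype chosen here are assumptions, not recommended settings, and `"flash_attention_2"` requires the `flash-attn` package to be installed):

```python
import torch
from transformers import AfmoeForCausalLM

# Illustrative load-time options; use attn_implementation="sdpa" if flash-attn is unavailable.
model = AfmoeForCausalLM.from_pretrained(
    "arcee-ai/Trinity-Mini",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # or "sdpa"
    device_map="auto",
)
```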
> [!TIP]
> AFMoE is particularly well-suited for scenarios requiring efficient scaling through sparsity while maintaining strong performance. The shared experts provide a stable computation baseline while routed experts enable model capacity scaling.
The example below demonstrates how to generate text with AFMoE using [`Pipeline`] or [`AutoModel`].
```python
from transformers import pipeline

pipeline = pipeline(
    task="text-generation",
    model="arcee-ai/Trinity-Mini",
    device=0
)
output = pipeline("The key innovation in mixture of experts is")
print(output[0]["generated_text"])
```
```python
import torch
from transformers import AfmoeForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Trinity-Mini")
model = AfmoeForCausalLM.from_pretrained(
    "arcee-ai/Trinity-Mini",
    device_map="auto"
)

inputs = tokenizer("The key innovation in mixture of experts is", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
AFMoE uses token-choice routing, where each token independently selects its top-k experts based on the router logits.
Unlike standard MoE models, AFMoE also includes shared experts that are activated for every token, providing a stable computation baseline alongside the sparsely routed experts (see the sketch below).
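The snippet below is a minimal, self-contained sketch of token-choice top-k routing combined with an always-active shared expert. It is not the actual `AfmoeMoE` implementation: the softmax scoring, the renormalization of the selected weights, the single shared expert, and the plain linear "experts" are all simplifying assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
from torch import nn

def moe_forward(hidden, router, shared_expert, routed_experts, top_k=2):
    """Token-choice MoE with an always-active shared expert (illustrative only)."""
    logits = router(hidden)                             # (num_tokens, num_experts)
    probs = F.softmax(logits, dim=-1)                   # scoring function is an assumption
    weights, indices = probs.topk(top_k, dim=-1)        # each token independently picks its top-k experts
    weights = weights / weights.sum(-1, keepdim=True)   # renormalization is an assumption

    out = shared_expert(hidden)                         # shared expert runs for every token
    for slot in range(top_k):
        for e, expert in enumerate(routed_experts):
            mask = indices[:, slot] == e                # tokens that routed to expert e in this slot
            if mask.any():
                out[mask] = out[mask] + weights[mask, slot].unsqueeze(-1) * expert(hidden[mask])
    return out

# Tiny usage example with random weights.
torch.manual_seed(0)
dim, num_experts, num_tokens = 16, 4, 8
router = nn.Linear(dim, num_experts, bias=False)
shared = nn.Linear(dim, dim)
experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
print(moe_forward(torch.randn(num_tokens, dim), router, shared, experts).shape)  # torch.Size([8, 16])
```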
The hybrid attention pattern alternates between:

- sliding-window attention layers for local context
- full attention layers (every `global_attn_every_n_layers` layers) for global context (see the sketch below)

All attention layers include Q/K normalization and output gating for improved training dynamics.
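As a rough illustration of how such an alternating pattern can be derived from a `global_attn_every_n_layers`-style setting (the layer count, the value, and the exact placement of the global layers here are assumptions, not the actual `AfmoeConfig` logic):

```python
num_hidden_layers = 12
global_attn_every_n_layers = 4  # hypothetical value

# Every n-th layer attends globally; the rest use local (sliding-window) attention.
layer_types = [
    "full_attention" if (i + 1) % global_attn_every_n_layers == 0 else "sliding_attention"
    for i in range(num_hidden_layers)
]
print(layer_types)
# ['sliding_attention', 'sliding_attention', 'sliding_attention', 'full_attention', ...]
```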
[[autodoc]] AfmoeConfig
[[autodoc]] AfmoeModel
    - forward

[[autodoc]] AfmoeForCausalLM
    - forward