# Music Flamingo
This model was released on 2025-11-03 and added to Hugging Face Transformers on 2026-03-30.
Music Flamingo is a fully open large audio–language model designed for robust understanding and reasoning over music. It builds upon the Audio Flamingo 3 architecture by including Rotary Time Embeddings (RoTE), which injects temporal position information to enable the model to handle audio sequences up to 20 minutes (1200 seconds).
The model checkpoint is available at `nvidia/music-flamingo-2601-hf`.
Highlights:

- Rotary Time Embeddings (RoTE) inject temporal position information, supporting audio sequences up to 20 minutes.
- Special boundary tokens (`<|sound_bos|>` and `<|sound_eos|>`) for improved audio sequence modeling.

This model was contributed by Lasha Koroshinadze and Eric Bezzam.
> *Music Flamingo: Scaling Music Understanding in Audio Language Models*
> S. Ghosh, A. Goel, L. Koroshinadze, S. Lee, Z. Kong, J. F. Santos, R. Duraiswami, D. Manocha, W. Ping, M. Shoeybi, B. Catanzaro
> NVIDIA and University of Maryland
> Project page: https://research.nvidia.com/labs/adlr/MF/
The model supports audio-text instructions, including multi-turn interactions, all processed in batches.
➡️ audio + text instruction
```python
from transformers import AutoProcessor, MusicFlamingoForConditionalGeneration

model_id = "nvidia/music-flamingo-2601-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = MusicFlamingoForConditionalGeneration.from_pretrained(model_id, device_map="auto")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this track in full detail - tell me the genre, tempo, and key, then dive into the instruments, production style, and overall mood it creates."},
            {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/AudioSkills/resolve/main/assets/song_1.mp3"},
        ],
    }
]

inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)
inputs["input_features"] = inputs["input_features"].to(model.dtype)

outputs = model.generate(**inputs, max_new_tokens=500)

decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(decoded_outputs)
```
➡️ multi-turn:
```python
from transformers import AutoProcessor, MusicFlamingoForConditionalGeneration

model_id = "nvidia/music-flamingo-2601-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = MusicFlamingoForConditionalGeneration.from_pretrained(model_id, device_map="auto")

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Write a rich caption that blends the technical details (genre, BPM, key, chords, mix) with how the song feels emotionally and dynamically as it unfolds.",
            },
            {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/AudioSkills/resolve/main/assets/song_1.mp3"},
        ],
    },
    {
        "role": "assistant",
        "content": [{"type": "text", "text": "This energetic Eurodance anthem at 150 BPM in E major combines bright synth arpeggios with a punchy four-on-the-floor beat..."}],
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What instruments stand out the most?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)
inputs["input_features"] = inputs["input_features"].to(model.dtype)

outputs = model.generate(**inputs, max_new_tokens=500)

decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(decoded_outputs)
```
➡️ batched inference!
```python
from transformers import AutoProcessor, MusicFlamingoForConditionalGeneration

model_id = "nvidia/music-flamingo-2601-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = MusicFlamingoForConditionalGeneration.from_pretrained(model_id, device_map="auto")

conversations = [
    [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this track in full detail - tell me the genre, tempo, and key, then dive into the instruments, production style, and overall mood it creates."},
                {
                    "type": "audio",
                    "path": "https://huggingface.co/datasets/nvidia/AudioSkills/resolve/main/assets/song_1.mp3",
                },
            ],
        }
    ],
    [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Generate a structured lyric sheet from the input music.",
                },
                {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/AudioSkills/resolve/main/assets/song_2.mp3"},
            ],
        }
    ],
]

inputs = processor.apply_chat_template(
    conversations,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)
inputs["input_features"] = inputs["input_features"].to(model.dtype)

outputs = model.generate(**inputs, max_new_tokens=500)

decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(decoded_outputs)
```
➡️ Training:
```python
from transformers import AutoProcessor, MusicFlamingoForConditionalGeneration

model_id = "nvidia/music-flamingo-2601-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = MusicFlamingoForConditionalGeneration.from_pretrained(model_id, device_map="auto")
model.train()

conversation = [
    [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Break the track down like a critic - list its tempo, key, and chordal motion, then explain the textures, dynamics, and emotional impact of the performance."},
                {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/AudioSkills/resolve/main/assets/song_1.mp3"},
            ],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "This Eurodance track operates at 150 BPM in E major, with harmonic movement centering on the I-vi-IV-V family. The production features layered synth arpeggios, a four-on-the-floor kick pattern, and a mezzo-soprano lead vocal with bright timbre. Dynamically, the track builds through verses into an anthemic chorus with full synth orchestration and backing vocals, creating an uplifting, euphoric atmosphere characteristic of late 2000s dance-pop."}],
        }
    ],
    [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this song from both a technical and artistic lens: mention tempo, harmony, and instrumentation, but also mood, lyrical themes, and structure.",
                },
                {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/AudioSkills/resolve/main/assets/song_2.mp3"},
            ],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "This electronic pop track combines upbeat production with playful lyrical themes centered around late-night pizza cravings. The structure follows a verse-chorus format with recurring melodic motifs and rhythmic patterns that emphasize the celebratory, lighthearted mood of the piece."}],
        }
    ]
]

# No generation prompt here: the conversations already end with assistant answers,
# and output_labels=True produces the training labels.
inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    return_dict=True,
    output_labels=True,
).to(model.device)
inputs["input_features"] = inputs["input_features"].to(model.dtype)

loss = model(**inputs).loss
loss.backward()
```
**Audio encoder.** Whisper-style feature extractor + encoder → average-pool over time (stride 2) → LayerNorm. Produces per-frame hidden states at the post-pool rate.
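The pooling step can be sketched as follows; the shapes used here (one window, 1500 encoder frames, hidden size 1280) are illustrative assumptions, not the model's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative shapes only: 1 window, 1500 encoder frames, hidden size 1280.
hidden_states = torch.randn(1, 1500, 1280)

# Average-pool over the time axis with stride 2, then LayerNorm.
pooled = F.avg_pool1d(hidden_states.transpose(1, 2), kernel_size=2, stride=2).transpose(1, 2)
normed = nn.LayerNorm(1280)(pooled)

print(normed.shape)  # time axis halved to 750 frames
```

The stride-2 pool halves the frame rate, which is why the rest of this page speaks of "post-pool" lengths and rates.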
**Rotary Time Embeddings (RoTE).** Applied to the encoder output to inject temporal position information, enabling the model to handle audio sequences up to 20 minutes (1200 seconds). RoTE uses 2D axial rotary embeddings over the batch and time dimensions with time-based angle modulation.
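A minimal sketch of time-based rotary angles, assuming a hypothetical 25 Hz post-pool frame rate and standard RoPE-style inverse frequencies; the actual RoTE implementation is 2D axial over batch and time and may differ from this single-axis version:

```python
import torch

def rote_angles(num_frames: int, frame_rate_hz: float, dim: int, base: float = 10000.0):
    # Angles are driven by absolute time in seconds rather than frame index,
    # so positions stay meaningful over long (up to 20 min) audio.
    t = torch.arange(num_frames, dtype=torch.float32) / frame_rate_hz  # seconds
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(t, inv_freq)  # (num_frames, dim // 2)

angles = rote_angles(num_frames=750, frame_rate_hz=25.0, dim=128)
cos, sin = angles.cos(), angles.sin()  # applied to feature pairs as in standard RoPE
```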
**`MusicFlamingoMultiModalProjector`.** A small MLP that maps encoder features to the language model's hidden size.
**`MusicFlamingoForConditionalGeneration`.** A causal language model that accepts text embeddings where each audio placeholder token slot is replaced, in place, by an audio frame embedding. Uses special boundary tokens (`<|sound_bos|>` and `<|sound_eos|>`) to mark audio sequences. No sequence-length change is introduced by fusion.
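The in-place fusion can be sketched with `Tensor.masked_scatter`; the placeholder token id and the toy shapes below are assumptions for illustration only:

```python
import torch

# Hypothetical placeholder id and toy hidden size, chosen for illustration.
AUDIO_TOKEN_ID = 151646
hidden = 8

# A 5-token sequence with 3 audio placeholder slots.
input_ids = torch.tensor([[1, AUDIO_TOKEN_ID, AUDIO_TOKEN_ID, AUDIO_TOKEN_ID, 2]])
text_embeds = torch.zeros(1, 5, hidden)
audio_embeds = torch.ones(3, hidden)  # one audio frame embedding per placeholder slot

# Overwrite placeholder slots in place; the sequence length never changes.
mask = (input_ids == AUDIO_TOKEN_ID).unsqueeze(-1).expand_as(text_embeds)
fused = text_embeds.masked_scatter(mask, audio_embeds)

assert fused.shape == text_embeds.shape  # no sequence-length change
```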
The feature extractor is configured with `chunk_length` (seconds) and `sampling_rate` (Hz). The processor computes the `post_pool_len` that the encoder will output (matching the conv/pool schedule).

Important: the maximum audio length is 20 minutes. Audio longer than this will be truncated.
- The default setup processes 30-second windows at 16 kHz mono.
- The processor enforces a hard limit of 40 windows per sample, resulting in a maximum of 20 minutes of audio (40 windows × 30 seconds).
- Rotary Time Embeddings (RoTE) provide position information for sequences up to 20 minutes (1200 seconds).
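As a sketch, the per-window frame arithmetic implied by the stride-2 conv and pooling stages works out as below; the 3000-mel-frame figure assumes a Whisper-style 10 ms hop, which is an assumption here:

```python
def conv_output_len(mel_len: int) -> int:
    # Stride-2 convolution stage: (mel_len - 1) // 2 + 1
    return (mel_len - 1) // 2 + 1

def post_pool_len(mel_len: int) -> int:
    # Stride-2 pooling stage on top: (conv_output_len - 2) // 2 + 1
    return (conv_output_len(mel_len) - 2) // 2 + 1

# A 30 s window at 16 kHz with an assumed 10 ms hop gives 3000 mel frames.
frames = post_pool_len(3000)   # 750 post-pool frames per window
cap_seconds = 40 * 30          # hard cap: 40 windows x 30 s = 1200 s (20 min)
print(frames, cap_seconds)
```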
For each window:

- `mel_len` is the padded mel length.
- `conv_output_len = (mel_len - 1) // 2 + 1`
- `post_pool_len = (conv_output_len - 2) // 2 + 1`
- Per-window features are padded to the maximum `post_pool_len` across all windows.

[[autodoc]] MusicFlamingoConfig
[[autodoc]] MusicFlamingoProcessor
[[autodoc]] MusicFlamingoForConditionalGeneration - forward