# Granite Speech
This model was released on 2025-04-16 and added to Hugging Face Transformers on 2025-04-11.
The Granite Speech model, introduced in an accompanying blog post, is a multimodal language model consisting of a speech encoder, a speech projector, a large language model, and LoRA adapter(s). More details on each component of the current (Granite 3.2 Speech) architecture are given below.
**Speech Encoder**: a Conformer encoder trained with Connectionist Temporal Classification (CTC) on character-level targets over ASR corpora. The encoder uses block attention and self-conditioned CTC from the middle layer.
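You don't need to train or modify the encoder to use the model, but as a rough illustration of the CTC objective on character-level targets, here is a minimal sketch; the stand-in network, vocabulary size, and shapes are assumptions for the example, not the actual Granite Speech configuration.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the real encoder is a Conformer, not a GRU
vocab_size = 28                    # e.g. 26 letters + space + CTC blank (index 0)
feature_dim, hidden = 80, 256

encoder = nn.GRU(feature_dim, hidden, batch_first=True)  # stand-in for the Conformer
head = nn.Linear(hidden, vocab_size)
ctc = nn.CTCLoss(blank=0)

feats = torch.randn(4, 200, feature_dim)                 # (batch, time, mel features)
states, _ = encoder(feats)
log_probs = head(states).log_softmax(-1)                 # (batch, time, vocab)

targets = torch.randint(1, vocab_size, (4, 30))          # character-level target ids
loss = ctc(
    log_probs.transpose(0, 1),                           # CTCLoss expects (time, batch, vocab)
    targets,
    input_lengths=torch.full((4,), 200),
    target_lengths=torch.full((4,), 30),
)
```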
**Speech Projector**: a query transformer (Q-Former) operating on the outputs of the last encoder block. Together, the encoder and projector temporally downsample the audio features, which are then merged into the multimodal embeddings processed by the LLM.
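As a loose sketch of how query-based temporal downsampling works (the window size, query count, dimensions, and single attention layer below are illustrative assumptions; a real Q-Former stacks several blocks):

```python
import torch
import torch.nn as nn

# Illustrative sizes: 3 learned queries summarize each window of 15 frames,
# i.e. a 5x temporal downsampling of the encoder outputs
dim, window, num_queries = 1024, 15, 3
queries = nn.Parameter(torch.randn(num_queries, dim))
cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

feats = torch.randn(1, 150, dim)                    # (batch, time, dim) encoder outputs
blocks = feats.view(-1, window, dim)                # (num_windows, window, dim)
q = queries.unsqueeze(0).expand(blocks.size(0), -1, -1)
summary, _ = cross_attn(q, blocks, blocks)          # (num_windows, num_queries, dim)
downsampled = summary.reshape(1, -1, dim)
print(downsampled.shape)                            # torch.Size([1, 30, 1024])
```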
**Large Language Model**: the Granite Speech model leverages Granite LLMs, which were originally proposed in this paper.
**LoRA adapter(s)**: the Granite Speech model contains a modality-specific LoRA adapter that is enabled when audio features are provided and disabled otherwise.
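The toggle itself lives inside `GraniteSpeechForConditionalGeneration`, so you never manage it manually; conceptually it behaves like the hypothetical helper below, where `enable_adapters`/`disable_adapters` are the generic PEFT-integration methods in Transformers rather than Granite-specific APIs.

```python
# Hypothetical helper mirroring the described behavior; `llm` is assumed to be a
# PEFT-wrapped language model exposing Transformers' adapter-toggling methods
def run_llm(llm, inputs_embeds, has_audio: bool):
    if has_audio:
        llm.enable_adapters()    # audio present: the speech LoRA is active
    else:
        llm.disable_adapters()   # text only: behaves like the base Granite LLM
    return llm(inputs_embeds=inputs_embeds)
```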
Note that most of the aforementioned components are implemented generically to enable compatibility and potential integration with other model architectures in transformers.
This model was contributed by Alexander Brooks, Avihu Dekel, and George Saon.
Granite Speech is a multimodal speech-to-text model that can transcribe audio and respond to text prompts. Here's how to use it:
```python
from datasets import Audio, load_dataset
from transformers import GraniteSpeechForConditionalGeneration, GraniteSpeechProcessor

# Load model and processor
model = GraniteSpeechForConditionalGeneration.from_pretrained(
    "ibm-granite/granite-3.2-8b-speech",
    device_map="auto"
)
processor = GraniteSpeechProcessor.from_pretrained("ibm-granite/granite-3.2-8b-speech")

# Load audio from dataset (16kHz sampling rate required)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
audio = ds["audio"][0]["array"]

# Build a prompt containing the `<|audio|>` placeholder, then process text and audio together
chat = [{"role": "user", "content": "<|audio|>can you transcribe the speech into written format?"}]
prompt = processor.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = processor(prompt, audio, return_tensors="pt").to(model.device)

# Generate transcription and decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=256)
transcription = processor.batch_decode(
    generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(transcription)
```
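The 8B checkpoint is large, so you may want to load it in half precision with the standard `torch_dtype` argument of `from_pretrained` (bfloat16 availability depends on your hardware):

```python
import torch
from transformers import GraniteSpeechForConditionalGeneration

model = GraniteSpeechForConditionalGeneration.from_pretrained(
    "ibm-granite/granite-3.2-8b-speech",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```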
For instruction following with audio, include the audio directly in the chat template's conversation format:
```python
from datasets import Audio, load_dataset
from transformers import GraniteSpeechForConditionalGeneration, GraniteSpeechProcessor

model = GraniteSpeechForConditionalGeneration.from_pretrained(
    "ibm-granite/granite-3.2-8b-speech",
    device_map="auto"
)
processor = GraniteSpeechProcessor.from_pretrained("ibm-granite/granite-3.2-8b-speech")

# Load audio from dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
audio = ds["audio"][0]["array"]

# Prepare conversation with audio and text
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": audio},
            {"type": "text", "text": "Transcribe the following audio:"},
        ],
    }
]

# Apply chat template with audio - the processor handles both tokenization and audio processing
inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate transcription and decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=512)
output_text = processor.batch_decode(
    generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(output_text)
```
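Because the speech LoRA is disabled when no audio features are provided, the same checkpoint can also answer plain text prompts. A minimal sketch, reusing `model` and `processor` from above and assuming the processor accepts text-only input:

```python
# No audio in the prompt, so the modality-specific LoRA stays disabled
chat = [{"role": "user", "content": "What is one fun fact about IBM?"}]
prompt = processor.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = processor(prompt, return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```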
Process multiple audio files efficiently:
```python
from datasets import Audio, load_dataset
from transformers import GraniteSpeechForConditionalGeneration, GraniteSpeechProcessor

model = GraniteSpeechForConditionalGeneration.from_pretrained(
    "ibm-granite/granite-3.2-8b-speech",
    device_map="auto"
)
processor = GraniteSpeechProcessor.from_pretrained("ibm-granite/granite-3.2-8b-speech")

# Load multiple audio samples from dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
audio_samples = [ds["audio"][i]["array"] for i in range(3)]

# One prompt per clip, each containing the `<|audio|>` placeholder
chat = [{"role": "user", "content": "<|audio|>can you transcribe the speech into written format?"}]
prompt = processor.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
prompts = [prompt] * len(audio_samples)

# Process the batch (identical prompts, so the tokenized lengths already match)
inputs = processor(prompts, audio_samples, return_tensors="pt").to(model.device)

# Generate for all inputs and decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=256)
transcriptions = processor.batch_decode(
    generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
for i, transcription in enumerate(transcriptions):
    print(f"Audio {i+1}: {transcription}")
```
## GraniteSpeechConfig

[[autodoc]] GraniteSpeechConfig

## GraniteSpeechEncoderConfig

[[autodoc]] GraniteSpeechEncoderConfig

## GraniteSpeechProcessor

[[autodoc]] GraniteSpeechProcessor
    - __call__

## GraniteSpeechFeatureExtractor

[[autodoc]] GraniteSpeechFeatureExtractor

## GraniteSpeechForConditionalGeneration

[[autodoc]] GraniteSpeechForConditionalGeneration
    - forward
    - get_audio_features