VibeVoice-ASR

VibeVoice-ASR is a unified speech-to-text model designed to handle 60-minute long-form audio in a single pass, generating structured transcriptions containing Who (Speaker), When (Timestamps), and What (Content), with support for Customized Hotwords and over 50 languages.

Model: VibeVoice-ASR-7B

Demo: VibeVoice-ASR-Demo

Report: VibeVoice-ASR-Report

Finetuning: finetune-guide

vLLM: vLLM-asr

🔥 Key Features

  • 🕒 60-minute Single-Pass Processing: Unlike conventional ASR models that slice audio into short chunks (often losing global context), VibeVoice-ASR accepts up to 60 minutes of continuous audio as a single input within a 64K-token context. This ensures consistent speaker tracking and semantic coherence across the entire hour.

  • 👤 Customized Hotwords: Users can provide customized hotwords (e.g., specific names, technical terms, or background info) to guide the recognition process, significantly improving accuracy on domain-specific content.

  • 📝 Rich Transcription (Who, When, What): The model jointly performs ASR, diarization, and timestamping, producing a structured output that indicates who said what and when.

  • 🌍 Multilingual & Code-Switching Support: It supports over 50 languages, requires no explicit language setting, and natively handles code-switching within and across utterances. The language distribution is shown in the Language Distribution section below.
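
The structured "Who, When, What" output is straightforward to post-process. Below is a minimal sketch that parses transcript lines in a *hypothetical* `[start - end] Speaker N: text` format; the exact output format is defined by the model, so adapt the regex to what you actually receive:

```python
import re
from typing import List, NamedTuple

class Utterance(NamedTuple):
    speaker: str
    start: float  # seconds
    end: float    # seconds
    text: str

# Hypothetical line format: "[12.40 - 15.80] Speaker 1: Hello everyone."
LINE_RE = re.compile(
    r"\[(?P<start>[\d.]+)\s*-\s*(?P<end>[\d.]+)\]\s*(?P<spk>[^:]+):\s*(?P<text>.*)"
)

def parse_transcript(raw: str) -> List[Utterance]:
    """Parse a structured transcript into (speaker, start, end, text) records."""
    utterances = []
    for line in raw.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            utterances.append(Utterance(
                speaker=m["spk"].strip(),
                start=float(m["start"]),
                end=float(m["end"]),
                text=m["text"].strip(),
            ))
    return utterances

demo = (
    "[0.00 - 2.50] Speaker 1: Welcome to the meeting.\n"
    "[2.50 - 4.10] Speaker 2: Thanks for having me."
)
for u in parse_transcript(demo):
    print(u.speaker, u.start, u.end, u.text)
```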

🏗️ Model Architecture


Demo

<div align="center" id="vibevoice-asr">

https://github.com/user-attachments/assets/acde5602-dc17-4314-9e3b-c630bc84aefa

</div>

Evaluation


Installation

We recommend using the NVIDIA Deep Learning Container to manage the CUDA environment.

  1. Launch docker

```bash
# NVIDIA PyTorch Container 24.07 ~ 25.12 verified.
# Previous versions are also compatible.
sudo docker run --privileged --net=host --ipc=host --ulimit memlock=-1:-1 --ulimit stack=-1:-1 --gpus all --rm -it nvcr.io/nvidia/pytorch:25.12-py3

# If flash attention is not included in your docker environment, install it manually.
# Refer to https://github.com/Dao-AILab/flash-attention for installation instructions.
# pip install flash-attn --no-build-isolation
```
  2. Install from GitHub

```bash
git clone https://github.com/microsoft/VibeVoice.git
cd VibeVoice

pip install -e .
```

Usage

Usage 1: Launch Gradio demo

```bash
apt update && apt install ffmpeg -y  # for demo

python demo/vibevoice_asr_gradio_demo.py --model_path microsoft/VibeVoice-ASR --share
```

Usage 2: Inference from files directly

```bash
python demo/vibevoice_asr_inference_from_file.py --model_path microsoft/VibeVoice-ASR --audio_files [add an audio path here]
```

Finetuning

LoRA (Low-Rank Adaptation) fine-tuning is supported. See Finetuning for a detailed guide.
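
To illustrate the idea behind LoRA (not this repo's training code; see the finetuning guide for the actual settings), the full weight update is replaced by a low-rank product ΔW = B·A, so only r·(2d) parameters are trained instead of d². A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size, LoRA rank (r << d)

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x, scale=1.0):
    # y = x W^T + scale * x (B A)^T ; only A and B would receive gradients.
    return x @ W.T + scale * (x @ (B @ A).T)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapter starts as a no-op on the frozen model:
assert np.allclose(lora_forward(x), x @ W.T)
print("trainable params:", A.size + B.size, "vs full update:", W.size)
```

Here the adapter trains 32 parameters instead of 64; at realistic hidden sizes (d in the thousands) the savings are dramatic.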

Results

Multilingual

| Dataset | Language | DER | cpWER | tcpWER | WER |
|---|---|---|---|---|---|
| MLC-Challenge | English | 4.28 | 11.48 | 13.02 | 7.99 |
| MLC-Challenge | French | 3.80 | 18.80 | 19.64 | 15.21 |
| MLC-Challenge | German | 1.04 | 17.10 | 17.26 | 16.30 |
| MLC-Challenge | Italian | 2.08 | 15.76 | 15.91 | 13.91 |
| MLC-Challenge | Japanese | 0.82 | 15.33 | 15.41 | 14.69 |
| MLC-Challenge | Korean | 4.52 | 15.35 | 16.07 | 9.65 |
| MLC-Challenge | Portuguese | 7.98 | 29.91 | 31.65 | 21.54 |
| MLC-Challenge | Russian | 0.90 | 12.94 | 12.98 | 12.40 |
| MLC-Challenge | Spanish | 2.67 | 10.51 | 11.71 | 8.04 |
| MLC-Challenge | Thai | 4.09 | 14.91 | 15.57 | 13.61 |
| MLC-Challenge | Vietnamese | 0.16 | 14.57 | 14.57 | 14.43 |

| Dataset | Language | DER | cpWER | tcpWER | WER |
|---|---|---|---|---|---|
| AISHELL-4 | Chinese | 6.77 | 24.99 | 25.35 | 21.40 |
| AMI-IHM | English | 11.92 | 20.41 | 20.82 | 18.81 |
| AMI-SDM | English | 13.43 | 28.82 | 29.80 | 24.65 |
| AliMeeting | Chinese | 10.92 | 29.33 | 29.51 | 27.40 |
| MLC-Challenge | Average | 3.42 | 14.81 | 15.66 | 12.07 |

Language Distribution


📄 License

This project is licensed under the MIT License.