# Wav2Vec2
This model was released on 2020-06-20 and added to Hugging Face Transformers on 2021-02-02.
The Wav2Vec2 model was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
The abstract from the paper is the following:
*We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.*
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
Note: Meta (FAIR) released Wav2Vec2-BERT 2.0, a newer version of Wav2Vec2 pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per this guide.
Flash Attention 2 is a faster, optimized attention implementation that can be used with this model.
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the official documentation.
Next, install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
To load a model using Flash Attention 2, pass the argument `attn_implementation="flash_attention_2"` to [`~PreTrainedModel.from_pretrained`]. We'll also load the model in half precision (e.g. `torch.float16`), since it results in almost no degradation in output quality but significantly lower memory usage and faster inference:
```py
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained(
    "facebook/wav2vec2-large-960h-lv60-self",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
...
```
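As a quick smoke test, here is a minimal sketch of a half-precision forward pass with the model loaded as above. It assumes a CUDA GPU (Flash Attention 2 requires one), and the random waveform is only a stand-in for real 16 kHz audio:

```py
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2Model.from_pretrained(
    "facebook/wav2vec2-large-960h-lv60-self",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

# one second of dummy audio at 16 kHz; replace with a real waveform
waveform = torch.randn(16_000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")

# cast the inputs to half precision to match the model's dtype
input_values = inputs["input_values"].to(model.device, dtype=torch.float16)

with torch.no_grad():
    outputs = model(input_values)

print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)
```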
Below is an expected speedup diagram comparing pure inference time between the native implementation of the facebook/wav2vec2-large-960h-lv60-self model in Transformers and its Flash Attention 2 and SDPA (scaled dot-product attention) versions, showing the average speedup obtained on the librispeech_asr clean validation split.
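If flash-attn is not installed, SDPA is a dependency-free alternative that typically still gives a noticeable speedup. A minimal sketch; `attn_implementation="sdpa"` selects PyTorch's fused scaled dot-product attention kernels:

```py
from transformers import Wav2Vec2Model

# SDPA ships with PyTorch itself, so no extra packages are needed
model = Wav2Vec2Model.from_pretrained(
    "facebook/wav2vec2-large-960h-lv60-self",
    attn_implementation="sdpa",
    device_map="auto",
)
```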
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="audio-classification"/>

- [`Wav2Vec2ForCTC`] is supported by this example script and notebook.

<PipelineTag pipeline="automatic-speech-recognition"/>

- [`Wav2Vec2ForCTC`] is supported by a notebook on how to finetune a speech recognition model in English, and how to finetune a speech recognition model in any language.
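For quick transcription without any fine-tuning code, the high-level pipeline API works out of the box with CTC checkpoints. A minimal sketch; the checkpoint below is one common public Wav2Vec2 model, and the audio path is a placeholder:

```py
from transformers import pipeline

# any CTC-trained Wav2Vec2 checkpoint works here
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# accepts a local file path, a URL, or a raw 16 kHz numpy waveform
result = asr("path/to/audio.wav")  # placeholder path
print(result["text"])
```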
[[autodoc]] Wav2Vec2Config
[[autodoc]] Wav2Vec2CTCTokenizer
    - __call__
    - save_vocabulary
    - decode
    - batch_decode
    - set_target_lang

[[autodoc]] Wav2Vec2FeatureExtractor
    - __call__

[[autodoc]] Wav2Vec2Processor
    - __call__
    - pad
    - from_pretrained
    - save_pretrained
    - batch_decode
    - decode

[[autodoc]] Wav2Vec2ProcessorWithLM
    - __call__
    - pad
    - from_pretrained
    - save_pretrained
    - batch_decode
    - decode
If you plan to decode multiple batches of audio, you should consider using [~Wav2Vec2ProcessorWithLM.batch_decode] and passing an instantiated `multiprocessing.Pool`.
Otherwise, [~Wav2Vec2ProcessorWithLM.batch_decode] performance will be slower than calling [~Wav2Vec2ProcessorWithLM.decode] for each audio individually, as it internally instantiates a new Pool for every call. See the example below:
```py
# Let's see how to use a user-managed pool for batch decoding multiple audios
from multiprocessing import get_context

import datasets
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

# load model, feature extractor, tokenizer
model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm", device_map="auto")
processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

# load example dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))


def map_to_array(example):
    example["speech"] = example["audio"]["array"]
    return example


# prepare speech data for batch inference
dataset = dataset.map(map_to_array, remove_columns=["audio"])


def map_to_pred(batch, pool):
    inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
    inputs = {k: v.to(model.device) for k, v in inputs.items()}

    with torch.no_grad():
        logits = model(**inputs).logits

    transcription = processor.batch_decode(logits.cpu().numpy(), pool).text
    batch["transcription"] = transcription
    return batch


# note: pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`.
# otherwise, the LM won't be available to the pool's sub-processes
# select number of processes and batch_size based on number of CPU cores available and on dataset size
with get_context("fork").Pool(processes=2) as pool:
    result = dataset.map(
        map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"]
    )

result["transcription"][:2]
# ['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"]
```
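When decoding a single audio, you can skip the pool entirely and call [~Wav2Vec2ProcessorWithLM.decode]. A minimal sketch, reusing the `model`, `processor`, and `dataset` objects from the example above:

```py
# run the model on one utterance and decode without a pool
inputs = processor(dataset[0]["speech"], sampling_rate=16_000, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.no_grad():
    logits = model(**inputs).logits

# decode() expects the logits of a single audio, batch_decode() a batch
transcription = processor.decode(logits[0].cpu().numpy()).text
```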
[[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput
[[autodoc]] Wav2Vec2Model
    - forward

[[autodoc]] Wav2Vec2ForCTC
    - forward
    - load_adapter

[[autodoc]] Wav2Vec2ForSequenceClassification
    - forward

[[autodoc]] Wav2Vec2ForAudioFrameClassification
    - forward

[[autodoc]] Wav2Vec2ForXVector
    - forward

[[autodoc]] Wav2Vec2ForPreTraining
    - forward