# LASR
This model was released on 2020-05-16 and added to Hugging Face Transformers on 2025-12-05.
<div class="flex flex-wrap space-x-1"> </div>

LASR is the architecture behind MedASR, a speech-to-text model from Google Health AI pretrained for medical dictation. It is based on the Conformer architecture and is designed as a starting point for developers building dictation tools that involve medical terminology, such as radiology dictation. MedASR performs well on medical audio but can struggle with terms outside its training data, such as non-standard medication names or temporal references (dates, times, and durations).
The example below transcribes an audio file with [`pipeline`].

```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="google/medasr")
out = pipe("path/to/audio.mp3")
print(out)
```
The example below transcribes a batch of audio with [`AutoModelForCTC`].

```python
import torch
from datasets import Audio, load_dataset
from transformers import AutoModelForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("google/medasr")
model = AutoModelForCTC.from_pretrained("google/medasr", device_map="auto")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
speech_samples = [el["array"] for el in ds["audio"][:5]]

inputs = processor(speech_samples, sampling_rate=processor.feature_extractor.sampling_rate, return_tensors="pt")
inputs = inputs.to(model.device, dtype=model.dtype)

# CTC models are not autoregressive: take the argmax over the frame-level
# logits and let the processor collapse repeats and remove blank tokens
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```
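`batch_decode` turns frame-level predictions into text by collapsing repeated ids and dropping CTC blank tokens. The snippet below is a minimal, framework-free sketch of that greedy collapse step; the blank id of 0 and the toy id sequences are illustrative assumptions, not MedASR's actual vocabulary.

```python
def ctc_greedy_collapse(ids, blank_id=0):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for i in ids:
        # Only keep an id if it differs from the previous frame and is not blank
        if i != prev and i != blank_id:
            out.append(i)
        prev = i
    return out

print(ctc_greedy_collapse([1, 1, 0, 2, 2, 2, 0, 0, 3]))  # [1, 2, 3]
print(ctc_greedy_collapse([1, 0, 1]))  # [1, 1] (blank separates a genuine repeat)
```

Note that a blank between two identical ids keeps both, which is how CTC represents doubled characters or tokens.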
The example below prepares a batch of audio and text, passes it through the LASR/MedASR model, and computes the training loss.
```python
from datasets import Audio, load_dataset
from transformers import AutoModelForCTC, AutoProcessor

# Load processor and model
processor = AutoProcessor.from_pretrained("google/medasr")
model = AutoModelForCTC.from_pretrained("google/medasr", device_map="auto")

# Load a small example dataset and prepare a batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
speech_samples = [el["array"] for el in ds["audio"][:5]]
text_samples = ds["text"][:5]

# Passing `text` to the processor prepares the `labels`
inputs = processor(audio=speech_samples, text=text_samples, sampling_rate=processor.feature_extractor.sampling_rate, return_tensors="pt")
inputs = inputs.to(model.device, dtype=model.dtype)

outputs = model(**inputs)
outputs.loss.backward()
```
## LasrTokenizer

[[autodoc]] LasrTokenizer

## LasrFeatureExtractor

[[autodoc]] LasrFeatureExtractor
    - __call__

## LasrProcessor

[[autodoc]] LasrProcessor
    - __call__
    - batch_decode
    - decode

## LasrEncoderConfig

[[autodoc]] LasrEncoderConfig

## LasrCTCConfig

[[autodoc]] LasrCTCConfig

## LasrEncoder

[[autodoc]] LasrEncoder

## LasrForCTC

[[autodoc]] LasrForCTC