# Multilingual LibriSpeech (MLS)
The Multilingual LibriSpeech (MLS) dataset is a large multilingual corpus suitable for speech research. It is derived from read audiobooks from LibriVox and covers 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese and Polish. The dataset is available at OpenSLR.
This directory contains pretrained monolingual models along with the steps to reproduce the results. All models were trained on 32GB Nvidia V100 GPUs, using a total of 64 GPUs for the English, German, Dutch, Spanish and French models and 16 GPUs for the Italian, Portuguese and Polish models.

| Language | Token Set | Train Lexicon | Joint Lexicon (Train + GB) |
|---|---|---|---|
| English | tokens.txt | train_lexicon.txt | joint_lexicon.txt |
| German | tokens.txt | train_lexicon.txt | joint_lexicon.txt |
| Dutch | tokens.txt | train_lexicon.txt | joint_lexicon.txt |
| French | tokens.txt | train_lexicon.txt | joint_lexicon.txt |
| Spanish | tokens.txt | train_lexicon.txt | joint_lexicon.txt |
| Italian | tokens.txt | train_lexicon.txt | joint_lexicon.txt |
| Portuguese | tokens.txt | train_lexicon.txt | joint_lexicon.txt |
| Polish | tokens.txt | train_lexicon.txt | joint_lexicon.txt |

| Language | Architecture | Acoustic Model |
|---|---|---|
| English | arch.txt | am.bin |
| German | arch.txt | am.bin |
| Dutch | arch.txt | am.bin |
| French | arch.txt | am.bin |
| Spanish | arch.txt | am.bin |
| Italian | arch.txt | am.bin |
| Portuguese | arch.txt | am.bin |
| Polish | arch.txt | am.bin |

The `5-gram_lm.arpa` file from each tarball below should be used to decode the corresponding acoustic model. For faster loading, the ARPA files can be converted to binary format following the steps here.

| Language | Language Model |
|---|---|
| English | mls_lm_english.tar.gz |
| German | mls_lm_german.tar.gz |
| Dutch | mls_lm_dutch.tar.gz |
| French | mls_lm_french.tar.gz |
| Spanish | mls_lm_spanish.tar.gz |
| Italian | mls_lm_italian.tar.gz |
| Portuguese | mls_lm_portuguese.tar.gz |
| Polish | mls_lm_polish.tar.gz |
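
The released language models are standard ARPA-format n-gram files. As a minimal, self-contained illustration of that format (the toy file content and helper below are assumptions for demonstration, not taken from the MLS release), this sketch writes a unigram ARPA file and reads the n-gram counts back from its `\data\` header:

```python
import os
import re
import tempfile

# Toy unigram ARPA file (illustrative only); the released 5-gram LMs
# follow the same layout, with orders 1-5 and far more entries.
toy_arpa = """\
\\data\\
ngram 1=3

\\1-grams:
-1.0\t<s>
-1.0\t</s>
-0.5\thello

\\end\\
"""

def ngram_counts(path):
    """Parse the \\data\\ header of an ARPA file into {order: count}."""
    counts = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            match = re.match(r"ngram (\d+)=(\d+)", line)
            if match:
                counts[int(match.group(1))] = int(match.group(2))
            elif counts:
                break  # blank line after the counts ends the \data\ section
    return counts

with tempfile.NamedTemporaryFile("w", suffix=".arpa", delete=False) as f:
    f.write(toy_arpa)
    arpa_path = f.name

counts = ngram_counts(arpa_path)
print(counts)  # {1: 3}
os.remove(arpa_path)
```

The binary conversion mentioned above serializes exactly this information (plus probabilities and backoffs) into a memory-mappable form, which is why loading becomes much faster.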

Follow the steps here to download and prepare the dataset for a given language.

Train an acoustic model:

```sh
[...]/flashlight/build/bin/asr/fl_asr_train train --flagsfile=train/[lang].cfg --minloglevel=0 --logtostderr=1
```

Test a trained model (Viterbi WER, no external language model):

```sh
[...]/flashlight/build/bin/asr/fl_asr_test --am=[...]/am.bin --lexicon=[...]/train_lexicon.txt --datadir=[...] --test=test.lst --tokens=[...]/tokens.txt --emission_dir='' --nouselexicon --show
```

Decode with the language model:

```sh
[...]/flashlight/build/bin/asr/fl_asr_decode --flagsfile=decode/[lang].cfg
```
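
The `--test=test.lst` flag above points at a flashlight list file, where each line carries four space-separated columns: sample id, audio path, audio duration, and transcript. The sample values below and the duration unit (milliseconds, by convention in flashlight recipes) are assumptions for illustration; the MLS data-prep steps generate the real files. A minimal sketch of building one:

```python
import tempfile

# Hypothetical samples; real entries are produced by the MLS data-prep steps.
# Columns: sample id, audio file path, duration (assumed milliseconds),
# and the reference transcript.
samples = [
    ("mls_en_0001", "audio/9000_123_000000.flac", 15230.0, "this is a transcript"),
    ("mls_en_0002", "audio/9000_123_000001.flac", 9870.5, "another transcript"),
]

def to_list_line(sample_id, audio_path, duration_ms, transcript):
    # One list-file row: four space-separated fields, transcript last.
    return f"{sample_id} {audio_path} {duration_ms} {transcript}"

lines = [to_list_line(*s) for s in samples]

# Write a test.lst-style file to a temporary location (hypothetical path).
with tempfile.NamedTemporaryFile("w", suffix=".lst", delete=False) as f:
    f.write("\n".join(lines) + "\n")
    list_path = f.name

print(lines[0])
```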

To cite the dataset:

```
@article{Pratap2020MLSAL,
  title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
  author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.03411}
}
```

NOTE: We made a few updates to the MLS dataset after our INTERSPEECH paper was submitted, to include more hours of audio and to improve the quality of the transcripts. To avoid the confusion of having multiple versions, we are making ONLY one release with all the improvements included. For accurate dataset statistics and baselines, please refer to the arXiv paper above.