recipes/streaming_convnets/tools/README.md
Once a model has been trained in wav2letter++ using the provided streaming TDS recipe (possibly customized to your use case), it must be serialized into a format that the wav2letter@anywhere inference platform can load. `StreamingTDSModelConverter` performs this conversion. Note that the tool only supports models trained with the streaming TDS + CTC style architecture described in the paper.
Build the tool with `make streaming_tds_model_converter`, then run the binary:
```
[path to binary]/streaming_tds_model_converter \
  -am [path to model] \
  --outdir [output directory]
```
The output directory will contain:
- `tokens.txt` - tokens file (with the blank symbol included)
- `acoustic_model.bin` - serialized acoustic model
- `feature_extractor.bin` - serialized feature extraction model, which performs log-mel feature extraction and local normalization

Together with a few additional files required for decoding (language model, lexicon, etc.), these files can be used to run inference on audio files. See the tutorial for more details.
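Before wiring the converter output into an inference pipeline, it can be useful to verify that all three artifacts were actually written. Below is a minimal sanity-check sketch (the helper name `check_converter_output` is our own, not part of wav2letter):

```python
import os
import tempfile

# The three files the converter is expected to write (per the list above).
EXPECTED_FILES = [
    "tokens.txt",             # token set, including the CTC blank symbol
    "acoustic_model.bin",     # serialized acoustic model
    "feature_extractor.bin",  # log-mel feature extraction + local normalization
]

def check_converter_output(outdir):
    """Return the expected converter outputs that are missing from outdir."""
    return [f for f in EXPECTED_FILES
            if not os.path.isfile(os.path.join(outdir, f))]

if __name__ == "__main__":
    # Demo: an empty directory is missing all three files.
    with tempfile.TemporaryDirectory() as d:
        print(check_converter_output(d))
```

Running the converter and then calling `check_converter_output` on the `--outdir` path should return an empty list; any names it returns indicate the conversion did not complete.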