Speech Recognition

Automatic Speech Recognition (ASR) takes an audio stream or audio buffer as input and returns one or more text transcripts, along with optional additional metadata. In Jarvis, ASR is a complete, GPU-accelerated speech recognition pipeline optimized for both performance and accuracy, and it supports synchronous and streaming recognition modes.

Jarvis ASR features include:

  • Support for offline and streaming use cases

  • A streaming mode that returns intermediate transcripts with low latency

  • GPU-accelerated feature extraction

  • Multiple (and growing) acoustic model architecture options accelerated by NVIDIA TensorRT

  • Beam search decoder based on n-gram language models

  • Voice activity detection algorithms (CTC-based)

  • Automatic punctuation

  • Ability to return top-N transcripts from beam decoder

  • Word-level timestamps

  • Inverse Text Normalization (ITN)

For more information, refer to the Speech To Text notebook, an end-to-end workflow for speech recognition that starts with training in TLT and ends with deployment using Jarvis.

Model Architectures

Jasper

The Jasper model is an end-to-end neural acoustic model for ASR that provides near state-of-the-art results on LibriSpeech among end-to-end ASR models without any external data. The Jasper architecture of convolutional layers was designed to facilitate fast GPU inference, by allowing whole sub-blocks to be fused into a single GPU kernel. This is important for meeting the strict real-time requirements of ASR systems in deployment.

The results of the acoustic model are combined with the results of external language models to get the top-ranked word sequences corresponding to a given audio segment during a post-processing step called decoding.
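To make the decoding step concrete, below is a minimal, illustrative sketch of greedy CTC decoding (take the most likely symbol per frame, merge repeated symbols, then drop blanks). The Jarvis beam search decoder additionally rescores candidate word sequences with an n-gram language model, which this toy example omits.

# Illustrative greedy CTC decoding: the acoustic model emits a probability
# distribution over characters plus a "blank" symbol for every time step;
# decoding merges repeated symbols and removes blanks.
import numpy as np

def greedy_ctc_decode(logits: np.ndarray, vocab: str, blank_id: int) -> str:
    """logits: [time, num_symbols] acoustic model outputs."""
    best_path = logits.argmax(axis=1)                 # most likely symbol per frame
    merged = [s for i, s in enumerate(best_path)
              if i == 0 or s != best_path[i - 1]]     # merge repeated symbols
    return "".join(vocab[s] for s in merged if s != blank_id)

# Toy example: 5 frames over the vocabulary "ab" plus blank (id 2).
logits = np.array([[0.9, 0.05, 0.05],    # 'a'
                   [0.9, 0.05, 0.05],    # 'a' (repeat, merged away)
                   [0.05, 0.05, 0.9],    # blank
                   [0.05, 0.9, 0.05],    # 'b'
                   [0.05, 0.05, 0.9]])   # blank
print(greedy_ctc_decode(logits, "ab", blank_id=2))    # -> "ab"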

Details on the model architecture can be found in the paper Jasper: An End-to-End Convolutional Neural Acoustic Model.

QuartzNet

QuartzNet is the next generation of the Jasper speech recognition model. It improves on Jasper by replacing 1D convolutions with 1D time-channel separable convolutions. Doing this effectively factorizes the convolution kernels, enabling deeper models while reducing the number of parameters by over an order of magnitude.

Details on the model architecture can be found in the paper QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions.

Citrinet

Citrinet is an end-to-end convolutional, Connectionist Temporal Classification (CTC) based automatic speech recognition (ASR) model. Citrinet is a deep residual network that uses 1D time-channel separable convolutions combined with sub-word encoding and squeeze-and-excitation. The resulting architecture significantly reduces the gap between non-autoregressive models and sequence-to-sequence or transducer models.

Details on the model architecture can be found in the paper Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition.

Normalization

Jarvis implements inverse text normalization (ITN) for ASR requests using models based on weighted finite-state transducers (WFSTs) to convert spoken-domain output from the ASR model into written-domain text, improving the readability of the ASR system's output.

Details on the model architecture can be found in the paper NeMo Inverse Text Normalization: From Development To Production.
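As a purely illustrative example of the spoken-domain to written-domain mapping that ITN performs (the actual Jarvis implementation uses WFST grammars, not a lookup table), consider the following sketch:

# Toy spoken-to-written mapping; all phrases here are made-up examples,
# not Jarvis grammar rules.
SPOKEN_TO_WRITTEN = {
    "ten thirty a m": "10:30 AM",
    "twenty five dollars": "$25",
    "three point one four": "3.14",
}

def toy_itn(transcript: str) -> str:
    for spoken, written in SPOKEN_TO_WRITTEN.items():
        transcript = transcript.replace(spoken, written)
    return transcript

print(toy_itn("the meeting at ten thirty a m costs twenty five dollars"))
# -> "the meeting at 10:30 AM costs $25"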

Services

Jarvis ASR supports both offline/batch and streaming inference modes.

Offline Recognition

In synchronous mode, the full audio signal is first read from a file or captured from a microphone. Following the capture of the entire signal, the client makes a request to the Jarvis Speech Server to transcribe it. The client then waits for the response from the server.

Note

This method can have long latency since the processing of the audio signal starts only after the full audio signal has been captured or read from the file.
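For illustration, a minimal offline recognition client might look like the following Python sketch. It assumes the gRPC bindings generated from jarvis/proto/jarvis_asr.proto (the jarvis_api modules shipped with the Jarvis client utilities); the server address, audio filename, and exact field names are assumptions to verify against your deployment.

import grpc
import jarvis_api.audio_pb2 as ja
import jarvis_api.jarvis_asr_pb2 as jasr
import jarvis_api.jarvis_asr_pb2_grpc as jasr_srv

# Connect to the Jarvis Speech Server (default port assumed here).
channel = grpc.insecure_channel("localhost:51051")
client = jasr_srv.JarvisASRStub(channel)

# Read the entire audio signal up front (offline/batch mode).
with open("audio.wav", "rb") as fh:          # hypothetical input file
    content = fh.read()

config = jasr.RecognitionConfig(
    encoding=ja.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)
response = client.Recognize(jasr.RecognizeRequest(config=config, audio=content))
for result in response.results:
    print(result.alternatives[0].transcript)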

Streaming Recognition

In streaming recognition mode, as soon as an audio segment of a specified length is captured or read, a request is made to the server to process that segment. On the server side, a response is returned as soon as an intermediate transcript is available.

Note

You can select the length of the audio segments based on speed and memory requirements.

Refer to the jarvis/proto/jarvis_asr.proto documentation for more details.
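A streaming client follows the same pattern, under the same assumptions as the offline sketch above: the first request carries the streaming configuration, subsequent requests carry audio chunks, and responses contain intermediate and final transcripts as they become available. The chunk size below is illustrative.

import grpc
import jarvis_api.audio_pb2 as ja
import jarvis_api.jarvis_asr_pb2 as jasr
import jarvis_api.jarvis_asr_pb2_grpc as jasr_srv

channel = grpc.insecure_channel("localhost:51051")
client = jasr_srv.JarvisASRStub(channel)

streaming_config = jasr.StreamingRecognitionConfig(
    config=jasr.RecognitionConfig(
        encoding=ja.AudioEncoding.LINEAR_PCM,
        sample_rate_hertz=16000,
        language_code="en-US",
        max_alternatives=1,
    ),
    interim_results=True,        # request intermediate transcripts
)

def request_generator(wav_path, chunk_bytes=8192):
    # First request: configuration only. Subsequent requests: audio chunks.
    yield jasr.StreamingRecognizeRequest(streaming_config=streaming_config)
    with open(wav_path, "rb") as fh:
        while True:
            chunk = fh.read(chunk_bytes)
            if not chunk:
                break
            yield jasr.StreamingRecognizeRequest(audio_content=chunk)

for response in client.StreamingRecognize(request_generator("audio.wav")):
    for result in response.results:
        tag = "final" if result.is_final else "interim"
        print(f"[{tag}] {result.alternatives[0].transcript}")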

Pipeline Configuration

In the simplest use case, you can deploy an ASR model without any language model as follows:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key>  \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --acoustic_model_name=<acoustic_model_name>

where:

  • <encryption_key> is the encryption key used during the export of the .ejrvs file.

  • <pipeline_name> and <acoustic_model_name> are optional user-defined names for the components in the model repository.

    Note

    <acoustic_model_name> is global and can conflict across model pipelines. Override this only in cases where you know which other models will be deployed and there will not be any incompatibilities in model weights or input shapes.

  • <ejrvs_filename> is the name of the ejrvs file to use as input.

  • <jmir_filename> is the Jarvis jmir file that is generated.

Upon successful completion of this command, a file named <jmir_filename> is created in the /servicemaker-dev/ folder. Since no language model is specified, the Jarvis greedy decoder is used to predict the transcript based on the output of the acoustic model. If your .ejrvs archives are encrypted, you need to include :<encryption_key> at the end of the JMIR and ejrvs filenames; otherwise, this is unnecessary.

Streaming/Offline Configuration

By default, the Jarvis JMIR file is configured to be used with the Jarvis StreamingRecognize RPC call, for streaming use cases. To use the Recognize RPC call, generate the Jarvis JMIR file by adding the --offline option.

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --offline

Furthermore, the default streaming Jarvis JMIR configuration provides intermediate transcripts with very low latency. For use cases where supporting a larger number of concurrent audio streams is more important than latency, run:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --chunk_size=0.8 \
    --padding_factor=2 \
    --padding_size=0.8

Citrinet Acoustic Model

The Citrinet acoustic model has different properties from Jasper and QuartzNet. We recommend the following jarvis-build parameters to export Citrinet for low-latency streaming recognition:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --chunk_size=0.16 \
    --padding_factor=24 \
    --padding_size=1.92 \
    --ms_per_timestep=80 \
    --lm_decoder_cpu.asr_model_delay=-1 \
    --featurizer.use_utterance_norm_params=False \
    --featurizer.precalc_norm_time_steps=0 \
    --featurizer.precalc_norm_params=False \
    --lm_decoder_cpu.decoder_type=greedy"

For high throughput streaming recognition, chunk_size, padding_size and padding_factor can be set as follows:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --chunk_size=0.8 \
    --padding_factor=4 \
    --padding_size=1.6 \
    --ms_per_timestep=80 \
    --lm_decoder_cpu.asr_model_delay=-1 \
    --featurizer.use_utterance_norm_params=False \
    --featurizer.precalc_norm_time_steps=0 \
    --featurizer.precalc_norm_params=False \
    --lm_decoder_cpu.decoder_type=greedy"

Finally, for offline recognition, we recommend the following settings:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --offline \
    --name=<pipeline_name> \
    --chunk_size=1.6 \
    --padding_factor=2 \
    --padding_size=1.6 \
    --ms_per_timestep=80 \
    --lm_decoder_cpu.asr_model_delay=-1 \
    --featurizer.use_utterance_norm_params=False \
    --featurizer.precalc_norm_time_steps=0 \
    --featurizer.precalc_norm_params=False \
    --lm_decoder_cpu.decoder_type=greedy"

Language Models

Jarvis ASR supports decoding with an n-gram language model. The n-gram language model can be stored in .arpa format or in the KenLM binary format. For more information on building language models, see Training Language Models.

To prepare the Jarvis JMIR configuration using an n-gram language model stored in .arpa format, run:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key>  \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_arpa=<arpa_filename>

To use Jarvis ASR with a KenLM binary file, generate the Jarvis JMIR with:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_binary=<KenLM_binary_filename>

The decoder's language model hyperparameters, alpha (the language model weight), beta (the word insertion score), and beam_search_width, can also be set from the jarvis-build command:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_binary=<KenLM_binary_filename> \
    --lm_decoder_cpu.beam_search_width=<beam_search_width> \
    --lm_decoder_cpu.language_model_alpha=<language_model_alpha> \
    --lm_decoder_cpu.language_model_beta=<language_model_beta>

GPU-accelerated Decoder

The Jarvis ASR pipeline can also use a GPU-accelerated weighted finite-state transducer (WFST) decoder that was initially developed for Kaldi. To use the GPU decoder with a language model defined by an .arpa file, run:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_arpa=<decoding_lm_arpa_filename> \
    --gpu_decoder

where <decoding_lm_arpa_filename> is the language model .arpa file used during the WFST decoding phase.

Note

Conversion from an .arpa file to a WFST graph can take a very long time, especially for large language models.

Also, large language models will increase GPU memory utilization. When using the GPU decoder, it is recommended to use different language models for the WFST decoding phase and the lattice rescoring phase. This can be achieved by using the following jarvis-build command:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_arpa=<decoding_lm_arpa_filename> \
    --rescoring_language_model_arpa=<rescoring_lm_arpa_filename> \
    --gpu_decoder

where:

  • <decoding_lm_arpa_filename> is the language model .arpa file that was used during the WFST decoding phase

  • <rescoring_lm_arpa_filename> is the language model used during the lattice rescoring phase

Typically, one would use a small language model for the WFST decoding phase (for example, a pruned 2 or 3-gram language model) and a larger language model for the lattice rescoring phase (for example, an unpruned 4-gram language model).

For advanced users, it is also possible to configure the GPU decoder by specifying the decoding WFST file and the vocabulary directly, instead of using an .arpa file. For example:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_fst=<decoding_lm_fst_filename> \
    --decoding_language_model_words=<decoding_lm_words_file> \
    --gpu_decoder

Furthermore, you can specify the .carpa files to use in the case where lattice rescoring is needed:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_fst=<decoding_lm_fst_filename> \
    --decoding_language_model_carpa=<decoding_lm_carpa_filename> \
    --decoding_language_model_words=<decoding_lm_words_filename> \
    --rescoring_language_model_carpa=<rescoring_lm_carpa_filename> \
    --gpu_decoder

where:

  • <decoding_lm_carpa_filename> is the constant ARPA (.carpa) representation of the language model to use during the WFST decoding phase

  • <rescoring_lm_carpa_filename> is the constant ARPA (.carpa) representation of the language model to use during the lattice rescoring phase

The GPU decoder hyperparameters (default_beam, lattice_beam, word_insertion_penalty and acoustic_scale) can be set with the jarvis-build command as follows:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoding_language_model_arpa=<decoding_lm_arpa_filename> \
    --lattice_beam=<lattice_beam> \
    --lm_decoder_gpu.default_beam=<default_beam> \
    --lm_decoder_gpu.acoustic_scale=<acoustic_scale> \
    --rescorer.word_insertion_penalty=<word_insertion_penalty> \
    --gpu_decoder

Beginning/End of Utterance Detection

Jarvis ASR uses an algorithm that detects the beginning and end of utterances. This algorithm is used to reset the ASR decoder state and to trigger a call to the punctuator model. By default, the beginning of an utterance is flagged when 20% of the frames in a 300ms window are non-blank characters, and the end of an utterance is flagged when 98% of the frames in an 800ms window are blank characters. You can tune these values for your particular use case using the following jarvis-build command:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --vad.vad_start_history=300 \
    --vad.vad_start_th=0.2 \
    --vad.vad_stop_history=800 \
    --vad.vad_stop_th=0.98
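As an illustration of the decision rule described above (this is not the Jarvis implementation, which runs inside the pipeline on the CTC output), the following sketch flags utterance boundaries from per-frame blank/non-blank decisions; the 80 ms per timestep value mirrors the Citrinet configuration earlier in this section.

def utterance_events(frame_is_blank, ms_per_timestep=80,
                     start_history_ms=300, start_th=0.2,
                     stop_history_ms=800, stop_th=0.98):
    """Yield ('start', t) / ('end', t) events from a list of booleans where
    True means the decoder emitted a blank symbol at frame t."""
    start_win = max(1, start_history_ms // ms_per_timestep)
    stop_win = max(1, stop_history_ms // ms_per_timestep)
    in_utterance = False
    for t in range(len(frame_is_blank)):
        if not in_utterance:
            window = frame_is_blank[max(0, t - start_win + 1): t + 1]
            non_blank_fraction = 1.0 - sum(window) / len(window)
            if non_blank_fraction >= start_th:
                in_utterance = True
                yield ("start", t)
        else:
            window = frame_is_blank[max(0, t - stop_win + 1): t + 1]
            if sum(window) / len(window) >= stop_th:
                in_utterance = False
                yield ("end", t)

# Silence, a burst of speech frames, then silence again.
frames = [True] * 5 + [False] * 12 + [True] * 12
print(list(utterance_events(frames)))    # -> [('start', 5), ('end', 26)]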

Additionally, beginning/end of utterance detection can be disabled with the following command:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --vad.vad_type=none

Note that in this case, the decoder state is only reset after the full audio signal has been sent by the client. Similarly, the punctuator model is only called once.

Inverse Text Normalization

Currently, the grammars are limited to English. In a future release, additional information on training, tuning, and loading custom grammars will be available.

Non-English Languages

The default parameter values of the jarvis-build command give accurate transcripts for most use cases. However, for some languages, such as Mandarin, those parameter values must be tuned. When transcribing Mandarin, the recommended values are:

For streaming recognition:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --chunk_size=1.6 \
    --padding_size=3.2 \
    --padding_factor=4  \
    --vad.vad_stop_history=1600 \
    --vad.vad_start_history=200 \
    --vad.vad_start_th=0.1

For offline recognition:

jarvis-build speech_recognition \
    /servicemaker-dev/<jmir_filename>:<encryption_key> \
    /servicemaker-dev/<ejrvs_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --offline \
    --padding_size=3.2 \
    --padding_factor=2 \
    --vad.vad_stop_history=1600 \
    --vad.vad_start_history=200 \
    --vad.vad_start_th=0.1

Selecting Custom Model at Runtime

When receiving requests from the client application, the Jarvis server selects the deployed ASR model to use based on the RecognitionConfig of the client request. If no models are available to fulfill the request, an error is returned. In the case where multiple models might be able to fulfill the client request, one model is selected at random. You can also explicitly select which ASR model to use by setting the model field of the RecognitionConfig protobuf object to the value of <pipeline_name> which was used with the jarvis-build command. This enables you to deploy multiple ASR pipelines concurrently and select which one to use at runtime.
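Building on the assumed client bindings from the earlier sketches, explicitly selecting a pipeline is a one-line change to the recognition config; the pipeline name below is a hypothetical placeholder for whatever <pipeline_name> you passed to jarvis-build.

import grpc
import jarvis_api.audio_pb2 as ja
import jarvis_api.jarvis_asr_pb2 as jasr
import jarvis_api.jarvis_asr_pb2_grpc as jasr_srv

client = jasr_srv.JarvisASRStub(grpc.insecure_channel("localhost:51051"))
with open("audio.wav", "rb") as fh:              # hypothetical input file
    content = fh.read()

config = jasr.RecognitionConfig(
    encoding=ja.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code="en-US",
    model="my_custom_asr_pipeline",              # the <pipeline_name> passed to jarvis-build --name
)
response = client.Recognize(jasr.RecognizeRequest(config=config, audio=content))
print(response.results[0].alternatives[0].transcript)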

Training Language Models

Introducing a language model to an ASR pipeline is an easy way to improve accuracy for natural language, and the model can be fine-tuned for niche settings. In short, an n-gram language model estimates the probability distribution over groups of n or fewer consecutive words, P(word_1, ..., word_n). By altering or biasing the data on which a language model is trained, and thus the distribution it is estimating, you can make it favor different transcriptions, altering the prediction without changing the acoustic model.
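As a toy illustration of what such a model estimates, the following sketch computes maximum-likelihood bigram (n=2) probabilities from a two-sentence corpus; real toolkits such as KenLM add smoothing, back-off, and pruning on top of these counts.

from collections import Counter

corpus = ["please play some music", "please play the next song"]
unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(words[:-1])
    bigrams.update(zip(words[:-1], words[1:]))

def p(word, prev):
    # Maximum-likelihood estimate of P(word | prev).
    return bigrams[(prev, word)] / unigrams[prev]

print(p("play", "please"))   # 1.0 -> "play" always follows "please" in this corpus
print(p("some", "play"))     # 0.5 -> "play" is followed by "some" or "the"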

KenLM Setup

KenLM is the recommended tool for building language models. This toolkit supports estimating, filtering and querying n-gram language models. To begin, first make sure you have Boost and zlib installed. Depending on your requirements, you may require additional dependencies. Double check by referencing the dependencies list.

After all dependencies are met, create a separate directory to build KenLM.

wget -O - https://kheafield.com/code/kenlm.tar.gz | tar xz
mkdir kenlm/build
cd kenlm/build
cmake ..
make -j2

Estimating

The next step is to gather and process data. In most cases, KenLM expects the data to be natural language suited to your use case. Common preprocessing steps include replacing numerics and removing umlauts, punctuation, or special characters. Most importantly, your preprocessing steps must be consistent between your language model and your acoustic model.
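As an illustration only, a preprocessing pass might look like the sketch below (lowercasing, stripping accents, and dropping punctuation and digits); the exact rules are up to you, as long as they match the conventions of your acoustic model's training transcripts. The filenames in the usage comment are hypothetical.

# Usage (hypothetical filenames): python normalize.py < corpus_raw.txt > corpus.txt
import re
import sys
import unicodedata

def normalize(line: str) -> str:
    line = line.lower()
    # Strip accents/umlauts by decomposing characters and dropping combining marks.
    line = "".join(c for c in unicodedata.normalize("NFKD", line)
                   if not unicodedata.combining(c))
    line = re.sub(r"[^a-z' ]+", " ", line)   # drop punctuation, digits, symbols
    return re.sub(r"\s+", " ", line).strip()

for raw_line in sys.stdin:
    cleaned = normalize(raw_line)
    if cleaned:
        print(cleaned)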

Assuming your current working directory is the build subdirectory of KenLM, bin/lmplz performs estimation on the corpus provided through stdin and writes the ARPA (a human-readable form of the language model) to stdout. Running bin/lmplz with no arguments documents the command-line arguments; here are a few important ones:

  • -o: Required. The order of the language model. Depends on use case, but generally 3 to 8.

  • -S: Memory to use. Number followed by % for percentage, b for bytes, K for kilobytes, and so on. Default is 80%.

  • -T: Temporary file location.

  • --text arg: Read text from a file instead of stdin.

  • --arpa arg: Write ARPA to a file instead of stdout.

  • --prune arg: Prune n-grams with count less than or equal to the given threshold, with one value specified for each order. For example, to prune singleton trigrams, --prune 0 0 1. The sequence of values must be non-decreasing and the last value applies to all remaining orders. Default is to not prune. Unigram pruning is not supported, so the first number must be 0.

  • --limit_vocab_file arg: Read allowed vocabulary separated by whitespace from file in argument and prune all n-grams containing vocabulary items not from the list. Can be combined with pruning.

Pruning and limiting vocabulary help get rid of typos, uncommon words, and general outliers from the dataset, making the resulting ARPA smaller and generally less overfit, but potentially at the cost of losing some jargon or colloquial language.

With the appropriate options, the language model can be estimated.

bin/lmplz -o 4 < text > text.arpa

Querying and Evaluation

For faster loading, convert the ARPA file to binary.

bin/build_binary text.arpa text.binary

The binary or ARPA can be queried via the command-line.

bin/query text.binary < data
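If the optional kenlm Python module is installed (it is built separately from the command-line tools), the same model can also be spot-checked from Python; higher (less negative) log10 scores and lower perplexity indicate sentences the model considers more likely.

import kenlm

model = kenlm.Model("text.binary")    # also accepts the .arpa file
print(model.score("please play some music", bos=True, eos=True))   # log10 probability
print(model.perplexity("please play some music"))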

Pretrained Models

Deployment with Citrinet is currently recommended for most users. QuartzNet 1.2 is a smaller, more efficient model and is suitable for situations where reduced accuracy is acceptable in exchange for higher throughput and lower latency.

Task | Architecture | Language | Dataset | Sampling Rate (Hz) | Compatible with TLT 3.0 | Compatible with NeMo 1.0.0b4 | Link
Transcription | Citrinet | English | ASR Set 1.7 - ~12,000 hours | 16000 | Yes | Yes | EJRVS
Transcription | Jasper | English | ASR Set 1.2 with noise profiles (room reverb, echo, wind, keyboard, baby crying) - 7K hours | 16000 | Yes | Yes | EJRVS
Transcription | QuartzNet | English | ASR Set 1.2 | 16000 | Yes | Yes | EJRVS