Pipeline Configuration#

In the simplest use case, you can deploy an ASR pipeline to be used with the StreamingRecognize API call (refer to riva/proto/riva_asr.proto) without any language model as follows:

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key>  \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --wfst_tokenizer_model=<wfst_tokenizer_model> \
    --wfst_verbalizer_model=<wfst_verbalizer_model> \
    --decoder_type=greedy

where:

  • <rmir_filename> is the name of the RMIR file that is generated by this command

  • <riva_filename> is the name of the riva file to use as input

  • <encryption_key> is the key used to encrypt the files. The encryption key for the pre-trained Riva models uploaded on NGC is tlt_encode.

  • <pipeline_name>, <acoustic_model_name>, and <featurizer_name> are optional user-defined names for the components in the model repository.

  • <wfst_tokenizer_model> is the name of the WFST tokenizer model file to use for inverse text normalization of ASR transcripts. Refer to inverse-text-normalization for more details.

  • <wfst_verbalizer_model> is the name of the WFST verbalizer model file to use for inverse text normalization of ASR transcripts. Refer to inverse-text-normalization for more details.

  • decoder_type is the type of decoder to use. Valid values are flashlight, os2s, greedy. We recommend using flashlight. Refer to Decoder Hyper-Parameters for more details.

Upon successful completion of this command, a file named <rmir_filename> is created in the /servicemaker-dev/ folder. Because no language model is specified, the Riva greedy decoder is used to predict the transcript based on the output of the acoustic model. If your .riva archives are encrypted, you need to include :<encryption_key> at the end of the RMIR filename and the Riva filename; otherwise, the key suffix is unnecessary.
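For example, with a pre-trained model downloaded from NGC (which uses the tlt_encode key), a concrete invocation might look like the following; the file and pipeline names here are hypothetical placeholders, not official artifact names:

```shell
# Hypothetical file names; substitute your own .riva archive and output path.
riva-build speech_recognition \
    /servicemaker-dev/conformer-en-US.rmir:tlt_encode \
    /servicemaker-dev/conformer-en-US.riva:tlt_encode \
    --name=conformer-en-US-greedy \
    --decoder_type=greedy
```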

For embedded platforms, using a batch size of 1 is recommended since it achieves the lowest memory footprint. To use a batch size of 1, refer to the riva-build-optional-parameters section and set the various min_batch_size, max_batch_size, opt_batch_size, and max_execution_batch_size parameters to 1 while executing the riva-build command. The Citrinet-256 model is only built for embedded platforms.
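Based on the parameters named above, a batch-size-1 build for an embedded platform might add flags along these lines. This is a sketch, not a complete command; the exact set of components that accept each batch-size parameter is listed in the riva-build-optional-parameters section:

```shell
riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --decoder_type=greedy \
    --min_batch_size=1 \
    --max_batch_size=1 \
    --opt_batch_size=1 \
    --max_execution_batch_size=1
```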

The following example shows the riva-build command used by the Quick Start scripts to generate the RMIR file for the streaming Conformer model with a language model:

riva-build speech_recognition \
  <rmir_filename>:<key> \
  <riva_file>:<key> \
  --name=conformer-en-US-asr-streaming \
  --return_separate_utterances=False \
  --featurizer.use_utterance_norm_params=False \
  --featurizer.precalc_norm_time_steps=0 \
  --featurizer.precalc_norm_params=False \
  --ms_per_timestep=40 \
  --endpointing.start_history=200 \
  --nn.fp16_needs_obey_precision_pass \
  --endpointing.residue_blanks_at_start=-2 \
  --chunk_size=0.16 \
  --left_padding_size=1.92 \
  --right_padding_size=1.92 \
  --decoder_type=flashlight \
  --flashlight_decoder.asr_model_delay=-1 \
  --decoding_language_model_binary=<bin_file> \
  --decoding_vocab=<txt_decoding_vocab_file> \
  --flashlight_decoder.lm_weight=0.8 \
  --flashlight_decoder.word_insertion_score=1.0 \
  --flashlight_decoder.beam_size=32 \
  --flashlight_decoder.beam_threshold=20. \
  --flashlight_decoder.num_tokenization=1 \
  --profane_words_file=<txt_profane_words_file> \
  --language_code=en-US \
  --wfst_tokenizer_model=<far_tokenizer_file> \
  --wfst_verbalizer_model=<far_verbalizer_file> \
  --speech_hints_model=<far_speech_hints_file>

For details about the parameters passed to riva-build to customize the ASR pipeline, run:

riva-build <pipeline> -h

Note

For information about deploying the now deprecated Jasper or QuartzNet models in Riva, refer to the Riva ASR Pipeline Configuration section.

Streaming/Offline Recognition#

The Riva ASR pipeline can be configured for both streaming and offline recognition use cases. When using the StreamingRecognize API call (refer to riva/proto/riva_asr.proto), we recommend the following riva-build parameters for low-latency streaming recognition with the Conformer acoustic model:

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --wfst_tokenizer_model=<wfst_tokenizer_model> \
    --wfst_verbalizer_model=<wfst_verbalizer_model> \
    --decoder_type=greedy \
    --chunk_size=0.16 \
    --padding_size=1.92 \
    --ms_per_timestep=40 \
    --nn.fp16_needs_obey_precision_pass \
    --greedy_decoder.asr_model_delay=-1 \
    --endpointing.residue_blanks_at_start=-2 \
    --featurizer.use_utterance_norm_params=False \
    --featurizer.precalc_norm_time_steps=0 \
    --featurizer.precalc_norm_params=False

For high throughput streaming recognition with the StreamingRecognize API call, chunk_size and padding_size can be set as follows:

    --chunk_size=0.8 \
    --padding_size=1.6
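As a rough sanity check (an inference from the parameters above, not an official formula), the number of acoustic-model timesteps produced per chunk is the chunk size in milliseconds divided by ms_per_timestep:

```shell
# chunk_size is in seconds; ms_per_timestep is 40 in the commands above.
# Low-latency streaming: 0.16 s chunks -> 160 / 40 = 4 timesteps per chunk.
echo $(( 160 / 40 ))   # 4
# High-throughput streaming: 0.8 s chunks -> 800 / 40 = 20 timesteps per chunk.
echo $(( 800 / 40 ))   # 20
```

Larger chunks amortize per-inference overhead across more timesteps, which is why they favor throughput over latency.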

Finally, to configure the ASR pipeline for offline recognition with the Recognize API call (refer to riva/proto/riva_asr.proto), we recommend the following settings with the Conformer acoustic model:

     --offline \
     --chunk_size=4.8 \
     --padding_size=1.6

The recommended riva-build command to use for other acoustic models such as Citrinet can be found in the table in section Pipeline Configuration.

Note

When deploying the offline ASR models with riva-deploy, TensorRT warnings indicating that memory requirements of format conversion cannot be satisfied might appear in the logs. These warnings should not affect functionality and can be ignored.

Language Models#

Riva ASR supports decoding with an n-gram language model. The n-gram language model can be provided in one of two formats:

  1. A .arpa format file.

  2. A KenLM binary format file.

For more information on building language models, refer to the training-language-models section.

ARPA Format Language Model#

To configure the Riva ASR pipeline to use an n-gram language model stored in .arpa format, replace:

    --decoder_type=greedy

with

    --decoder_type=flashlight \
    --decoding_language_model_arpa=<arpa_filename> \
    --decoding_vocab=<decoder_vocab_file>

KenLM Binary Language Model#

To generate the Riva RMIR file when using a KenLM binary file to specify the language model, replace:

    --decoder_type=greedy

with

    --decoder_type=flashlight \
    --decoding_language_model_binary=<KENLM_binary_filename> \
    --decoding_vocab=<decoder_vocab_file>
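If you only have an .arpa file, a KenLM binary can typically be produced with KenLM's build_binary tool before running riva-build (this assumes KenLM is installed and build_binary is on your PATH):

```shell
# Convert an ARPA language model to KenLM binary format.
build_binary <arpa_filename> <KENLM_binary_filename>
```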

Decoder Hyper-Parameters#

The decoder language model hyper-parameters can also be specified with the riva-build command.

You can specify the Flashlight decoder hyper-parameters beam_size, beam_size_token, beam_threshold, lm_weight, and word_insertion_score by specifying:

    --decoder_type=flashlight \
    --decoding_language_model_binary=<KENLM_binary_filename> \
    --decoding_vocab=<decoder_vocab_file> \
    --flashlight_decoder.beam_size=<beam_size> \
    --flashlight_decoder.beam_size_token=<beam_size_token> \
    --flashlight_decoder.beam_threshold=<beam_threshold> \
    --flashlight_decoder.lm_weight=<lm_weight> \
    --flashlight_decoder.word_insertion_score=<word_insertion_score>

Where:

  • beam_size is the maximum number of hypotheses the decoder holds at each step

  • beam_size_token is the maximum number of tokens the decoder considers at each step

  • beam_threshold is the threshold used to prune hypotheses

  • lm_weight is the weight of the language model used when scoring hypotheses

  • word_insertion_score is the word insertion score used when scoring hypotheses
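To make the roles of lm_weight and word_insertion_score concrete, a common shallow-fusion scoring rule (a generic sketch, not necessarily the exact Flashlight implementation) combines the acoustic score, the weighted language model score, and a per-word bonus:

```shell
# Hypothetical scores for one hypothesis: acoustic score -12.5,
# LM score -6.0, 3 words; lm_weight=0.8, word_insertion_score=1.0.
# total = am_score + lm_weight * lm_score + word_insertion_score * num_words
awk 'BEGIN { printf "%.1f\n", -12.5 + 0.8 * -6.0 + 1.0 * 3 }'   # prints -14.3
```

Under this rule, raising lm_weight makes the decoder trust the language model more, while a positive word_insertion_score counteracts the tendency of LM-scored beams to prefer fewer, longer words.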

For advanced users, additional decoder hyper-parameters can also be specified. Refer to Riva-build Optional Parameters for a list of those parameters and their description.

Flashlight Decoder Lexicon#

The Flashlight decoder used in Riva is a lexicon-based decoder and only emits words that are present in the decoder vocabulary file passed to the riva-build command. The decoder vocabulary file used to generate the ASR pipelines in the Quick Start scripts includes words that cover a wide range of domains and should provide accurate transcripts for most applications.
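Assuming the decoder vocabulary file is a plain-text file with one word per line (an assumption based on the --decoding_vocab flag above; check the riva-build help output for the authoritative format), a small custom file could be created like this with placeholder words:

```shell
# Create a small custom decoding vocabulary, one lowercase word per line.
printf '%s\n' riva nvidia transcription > custom_decoding_vocab.txt
grep -c . custom_decoding_vocab.txt   # prints 3 (one line per word)
```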

It is also possible to build an ASR pipeline using your own decoder vocabulary file by using the parameter