Pipeline Configuration#

In the simplest use case, you can deploy an ASR pipeline to be used with the StreamingRecognize API call (refer to riva/proto/riva_asr.proto) without any language model as follows:

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key>  \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --wfst_tokenizer_model=<wfst_tokenizer_model> \
    --wfst_verbalizer_model=<wfst_verbalizer_model> \


  • <rmir_filename> is the name of the RMIR file that will be generated

  • <riva_filename> is the name of the .riva file to use as input

  • <encryption_key> is the key used to encrypt the files. The encryption key for the pre-trained Riva models uploaded on NGC is tlt_encode.

  • <name>, <acoustic_model_name>, and <featurizer_name> are optional user-defined names for the components in the model repository.

  • <wfst_tokenizer_model> is the name of the WFST tokenizer model file to use for inverse text normalization of ASR transcripts. Refer to inverse-text-normalization for more details.

  • <wfst_verbalizer_model> is the name of the WFST verbalizer model file to use for inverse text normalization of ASR transcripts. Refer to inverse-text-normalization for more details.

  • decoder_type is the type of decoder to use. Valid values are flashlight, os2s, greedy. We recommend using flashlight. Refer to Decoder Hyper-Parameters for more details.

Upon successful completion of this command, a file named <rmir_filename> is created in the /servicemaker-dev/ folder. Since no language model is specified, the Riva greedy decoder is used to predict the transcript based on the output of the acoustic model. If your .riva archives are encrypted, you need to include :<encryption_key> at the end of the RMIR filename and the Riva filename; otherwise, it is unnecessary.
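
For instance, a build that uses pre-trained models from NGC (encrypted with the tlt_encode key) might look like the following sketch; the file names here are hypothetical:

riva-build speech_recognition \
    /servicemaker-dev/conformer_asr.rmir:tlt_encode \
    /servicemaker-dev/conformer_asr.riva:tlt_encode \
    --name=conformer-asr-example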

For embedded platforms, using a batch size of 1 is recommended since it achieves the lowest memory footprint. To use a batch size of 1, refer to the riva-build-optional-parameters section and set the various min_batch_size, max_batch_size, opt_batch_size, and max_execution_batch_size parameters to 1 while executing the riva-build command. The Citrinet-256 model is only built for embedded platforms.
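
For example, a minimal sketch of a batch-size-1 build for an embedded platform (only the batch-size parameters are shown; the exact set of components depends on your pipeline):

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --max_batch_size=1 \
    --featurizer.min_batch_size=1 \
    --featurizer.opt_batch_size=1 \
    --featurizer.max_batch_size=1 \
    --featurizer.max_execution_batch_size=1 \
    --nn.min_batch_size=1 \
    --nn.opt_batch_size=1 \
    --nn.max_batch_size=1 \
    --endpointing.min_batch_size=1 \
    --endpointing.opt_batch_size=1 \
    --endpointing.max_batch_size=1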

The following example shows the riva-build command used by the Quick Start scripts to generate the RMIR file for the streaming Conformer en-US pipeline; the commands for other models and modes follow the same pattern:

riva-build speech_recognition \
  <rmir_filename>:<key> \
  <riva_file>:<key> \
  --name=conformer-en-US-asr-streaming \
  --return_separate_utterances=False \
  --featurizer.use_utterance_norm_params=False \
  --featurizer.precalc_norm_time_steps=0 \
  --featurizer.precalc_norm_params=False \
  --ms_per_timestep=40 \
  --endpointing.start_history=200 \
  --nn.fp16_needs_obey_precision_pass \
  --endpointing.residue_blanks_at_start=-2 \
  --chunk_size=0.16 \
  --left_padding_size=1.92 \
  --right_padding_size=1.92 \
  --decoder_type=flashlight \
  --flashlight_decoder.asr_model_delay=-1 \
  --decoding_language_model_binary=<bin_file> \
  --decoding_vocab=<txt_decoding_vocab_file> \
  --flashlight_decoder.lm_weight=0.8 \
  --flashlight_decoder.word_insertion_score=1.0 \
  --flashlight_decoder.beam_size=32 \
  --flashlight_decoder.beam_threshold=20. \
  --flashlight_decoder.num_tokenization=1 \
  --profane_words_file=<txt_profane_words_file> \
  --language_code=en-US \
  --wfst_tokenizer_model=<far_tokenizer_file> \
  --wfst_verbalizer_model=<far_verbalizer_file> \

For details about the parameters passed to riva-build to customize the ASR pipeline, run:

riva-build <pipeline> -h


For information about deploying the now-deprecated Jasper or QuartzNet models in Riva, refer to the Riva ASR Pipeline Configuration section.

Streaming/Offline Recognition#

The Riva ASR pipeline can be configured for both streaming and offline recognition use cases. When using the StreamingRecognize API call (refer to riva/proto/riva_asr.proto), we recommend the following riva-build parameters for low-latency streaming recognition with the Conformer acoustic model:

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --wfst_tokenizer_model=<wfst_tokenizer_model> \
    --wfst_verbalizer_model=<wfst_verbalizer_model> \
    --decoder_type=greedy \
    --chunk_size=0.16 \
    --padding_size=1.92 \
    --ms_per_timestep=40 \
    --nn.fp16_needs_obey_precision_pass \
    --greedy_decoder.asr_model_delay=-1 \
    --endpointing.residue_blanks_at_start=-2 \
    --featurizer.use_utterance_norm_params=False \
    --featurizer.precalc_norm_time_steps=0 \

For high throughput streaming recognition with the StreamingRecognize API call, a larger chunk_size can be used, for example:

    --chunk_size=0.8 \

Finally, to configure the ASR pipeline for offline recognition with the Recognize API call (refer to riva/proto/riva_asr.proto), we recommend the following settings with the Conformer acoustic model:

     --offline \
     --chunk_size=4.8 \

The recommended riva-build command to use for other acoustic models such as Citrinet can be found in the table in section Pipeline Configuration.


When deploying the offline ASR models with riva-deploy, TensorRT warnings indicating that memory requirements of format conversion cannot be satisfied might appear in the logs. These warnings should not affect functionality and can be ignored.

Language Models#

Riva ASR supports decoding with an n-gram language model. The n-gram language model can be provided in either of the following formats:

  1. A .arpa format file.

  2. A KenLM binary format file.

For more information on building language models, refer to the training-language-models section.
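
As a quick illustration, the following is a minimal sketch using the standard KenLM tools (assuming lmplz and build_binary are installed; corpus.txt is a hypothetical plain-text training corpus):

# Train a 4-gram language model in .arpa format
lmplz -o 4 < corpus.txt > lm.arpa
# Convert the .arpa model to the compact KenLM binary format
build_binary lm.arpa lm.bin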

ARPA Format Language Model#

To configure the Riva ASR pipeline to use an n-gram language model stored in .arpa format, replace the decoder options in the riva-build command with:

    --decoder_type=flashlight \
    --decoding_language_model_arpa=<arpa_filename> \

KenLM Binary Language Model#

To generate the Riva RMIR file when using a KenLM binary file to specify the language model, replace the decoder options in the riva-build command with:

    --decoder_type=flashlight \
    --decoding_language_model_binary=<KENLM_binary_filename> \

Decoder Hyper-Parameters#

The decoder language model hyper-parameters can also be specified with the riva-build command.

You can specify the Flashlight decoder hyper-parameters beam_size, beam_size_token, beam_threshold, lm_weight, and word_insertion_score as follows:

    --decoder_type=flashlight \
    --decoding_language_model_binary=<KENLM_binary_filename> \
    --decoding_vocab=<decoder_vocab_file> \
    --flashlight_decoder.beam_size=<beam_size> \
    --flashlight_decoder.beam_size_token=<beam_size_token> \
    --flashlight_decoder.beam_threshold=<beam_threshold> \
    --flashlight_decoder.lm_weight=<lm_weight> \
    --flashlight_decoder.word_insertion_score=<word_insertion_score> \


  • beam_size is the maximum number of hypotheses the decoder holds at each step

  • beam_size_token is the maximum number of tokens the decoder considers at each step

  • beam_threshold is the threshold used to prune hypotheses

  • lm_weight is the weight of the language model used when scoring hypotheses

  • word_insertion_score is the word insertion score used when scoring hypotheses

For advanced users, additional decoder hyper-parameters can also be specified. Refer to Riva-build Optional Parameters for a list of those parameters and their description.

Flashlight Decoder Lexicon#

The Flashlight decoder used in Riva is a lexicon-based decoder and only emits words that are present in the decoder vocabulary file passed to the riva-build command. The decoder vocabulary file used to generate the ASR pipelines in the Quick Start scripts includes words that cover a wide range of domains and should provide accurate transcripts for most applications.

It is also possible to build an ASR pipeline with your own decoder vocabulary file by using the --decoding_vocab parameter of the riva-build command. For example, you could start from the riva-build commands used to generate the ASR pipelines in the Quick Start scripts from section Pipeline Configuration and provide your own decoder vocabulary file, making sure that the words of interest appear in it. Riva ServiceMaker automatically tokenizes the words in the decoder vocabulary file; the number of tokenizations per word can be controlled with the --flashlight_decoder.num_tokenization parameter.
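
The decoder vocabulary file is a plain-text file with one word per line, for example (illustrative entries only):

hello
world
nvidia
riva

It is then passed to riva-build with --decoding_vocab=<path/to/decoding_vocab_file>.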

(Advanced) Manually Adding Additional Tokenizations of Words in Lexicon#

It is also possible to manually add additional tokenizations for the words in the decoder vocabulary by performing the following steps:

The riva-build and riva-deploy commands provided in the previous section store the lexicon in the /data/models/citrinet-1024-en-US-asr-streaming-ctc-decoder-cpu-streaming/1/lexicon.txt file of the Triton model repository.

To add additional tokenizations to the lexicon, copy the lexicon file:

cp /data/models/citrinet-1024-en-US-asr-streaming-ctc-decoder-cpu-streaming/1/lexicon.txt decoding_lexicon.txt

and add the SentencePiece tokenization for the word of interest. For example, you could add:

manu ▁ma n u
manu ▁man n n ew
manu ▁man n ew

to the decoding_lexicon.txt file so that the word manu is generated in the transcript if the acoustic model predicts those tokens. You will need to ensure that the new lines follow the same indentation/space pattern as the rest of the file and that the tokens used are part of the tokenizer model. After this is done, regenerate the model repository using the new decoding lexicon by passing --decoding_lexicon=decoding_lexicon.txt to riva-build instead of --decoding_vocab=decoding_vocab.txt.
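
For example, the rebuild might look like the following sketch (placeholders as earlier in this section; the language model flags depend on your pipeline):

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --decoder_type=flashlight \
    --decoding_lexicon=decoding_lexicon.txt \
    --decoding_language_model_binary=<KENLM_binary_filename>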

Flashlight Decoder Lexicon Free#

The Flashlight decoder can also be used without a lexicon. Lexicon-free decoding is performed with a character-based language model. It can be enabled by adding --flashlight_decoder.use_lexicon_free_decoding=True to riva-build and specifying a character-based language model via --decoding_language_model_binary=<path/to/charlm>.
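
Putting those flags together, the decoder portion of the build command becomes (all other parameters stay as in your base command):

    --decoder_type=flashlight \
    --flashlight_decoder.use_lexicon_free_decoding=True \
    --decoding_language_model_binary=<path/to/charlm> \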

OpenSeq2Seq Decoder#

Riva also supports the OpenSeq2Seq (os2s) decoder for beam-search decoding with a language model. For example:

riva-build speech_recognition \
   <rmir_filename>:<key> <riva_filename>:<key> \
   --name=citrinet-1024-zh-CN-asr-streaming \
   --ms_per_timestep=80 \
   --featurizer.use_utterance_norm_params=False \
   --featurizer.precalc_norm_time_steps=0 \
   --featurizer.precalc_norm_params=False \
   --endpointing.residue_blanks_at_start=-2 \
   --chunk_size=0.16 \
   --left_padding_size=1.92 \
   --right_padding_size=1.92 \
   --decoder_type=os2s \
   --os2s_decoder.language_model_alpha=0.5 \
   --os2s_decoder.language_model_beta=1.0 \
   --os2s_decoder.beam_search_width=128 \


  • --os2s_decoder.language_model_alpha is the weight given to the language model during the beam search.

  • --os2s_decoder.language_model_beta is the word insertion score.

  • --os2s_decoder.beam_search_width is the number of partial hypotheses to keep at each step of the beam search.

All of these parameters affect performance: latency increases as their values increase. The values shown in the example above are reasonable starting points.

Beginning/End of Utterance Detection#

Riva ASR uses an algorithm that detects the beginning and end of utterances. This algorithm is used to reset the ASR decoder state and to trigger a call to the punctuator model. By default, the beginning of an utterance is flagged when 20% of the frames in a 300 ms window have nonblank characters, and the end of an utterance is flagged when 98% of the frames in an 800 ms window are blank characters. You can tune those values for your particular use case by using the following riva-build parameters:

  --endpointing.start_history=300 \
  --endpointing.start_th=0.2 \
  --endpointing.stop_history=800 \
  --endpointing.stop_th=0.98 \
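
For example, with ms_per_timestep=40, the 300 ms start window spans roughly 7 acoustic-model frames, so the beginning of an utterance is flagged once about 20% of those frames (1 to 2 frames) contain nonblank characters.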

Additionally, it is possible to disable the beginning/end of utterance detection by passing --endpointing_type=none to riva-build.

Note that in this case, the decoder state resets after the full audio signal has been sent by the client. Similarly, the punctuator model is only called once.

Neural-Based Voice Activity Detection#

It is possible to use a neural-based Voice Activity Detection (VAD) algorithm in Riva ASR. This can help filter out noise in the audio and reduce spurious words in the ASR transcripts. To use the neural-based VAD algorithm in the ASR pipeline, pass the neural VAD model as an additional input to riva-build and set the --vad_type parameter, as in the following sketch (all other build parameters stay as in your base command):

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    /servicemaker-dev/<neural_vad_riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --vad_type=neural

  • <neural_vad_riva_filename> is the .riva neural VAD model to use. For example, you can use the MarbleNet VAD Riva model available on NGC.

  • <encryption_key> is the key used to encrypt the file. The encryption key for the pre-trained Riva models uploaded on NGC is tlt_encode.

Note that using a neural VAD component in the ASR pipeline will have an impact on latency and throughput of the deployed Riva ASR server.

Generating Multiple Transcript Hypotheses#

By default, the Riva ASR pipeline is configured to generate only the best transcript hypothesis for each utterance. It is possible to generate multiple transcript hypotheses by passing the parameter --max_supported_transcripts=N to the riva-build command, where N is the maximum number of hypotheses to generate. With this change, the client application can retrieve multiple hypotheses by setting the max_alternatives field of RecognitionConfig to values greater than 1.
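
For example, building with:

    --max_supported_transcripts=5 \

allows the client to request up to 5 hypotheses per utterance by setting max_alternatives in RecognitionConfig to a value between 1 and 5.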

Impact of Chunk Size and Padding Size on Performance and Accuracy (Advanced)#

The chunk_size and padding_size parameters used to configure Riva ASR can have a significant impact on accuracy and performance. A brief description of those parameters can be found in section Riva-build Optional Parameters. Riva provides pre-configured ASR pipelines, with preset values of chunk_size and padding_size: a low-latency streaming configuration, a high throughput streaming configuration, and an offline configuration. Those configurations should suit most deployment scenarios. The chunk_size and padding_size values used for those configurations can be found in a table in section Pipeline Configuration.

The chunk_size parameter is the duration of the audio chunk in seconds processed by the Riva server for every streaming request. Hence, in streaming mode, Riva returns one response for every chunk_size seconds of audio. A lower value of chunk_size will therefore reduce the user-perceived latency as the transcript will get updated more frequently.

The padding_size parameter is the duration in seconds of the padding prepended and appended to the chunk_size. The Riva acoustic model processes an input tensor corresponding to an audio duration of 2*(padding_size) + chunk_size for every new chunk of audio it receives. Increasing padding_size or chunk_size typically helps to improve accuracy of the transcripts since the acoustic model has access to more context. However, increasing padding_size reduces the maximum number of concurrent streams supported by Riva ASR, since it will increase the size of the input tensor fed to the acoustic model for every new chunk.
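
As a concrete example, with the low-latency streaming configuration above (chunk_size=0.16 and padding_size=1.92), each new chunk produces an input tensor covering 2 × 1.92 + 0.16 = 4.0 seconds of audio; at 40 ms per timestep, that corresponds to 100 acoustic-model timesteps per chunk.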

Sharing Acoustic and Feature Extractor Models Across Multiple ASR Pipelines (Advanced)#

It is possible to configure the Riva ASR service such that multiple ASR pipelines share the same feature extractor and acoustic models, thereby reducing GPU memory usage. This option can be used, for example, to deploy multiple ASR pipelines where each pipeline uses a different language model but shares the same acoustic model and feature extractor. This can be achieved by specifying the acoustic_model_name and featurizer_name parameters in the riva-build command:

riva-build speech_recognition \
    /servicemaker-dev/<rmir_filename>:<encryption_key>  \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=<pipeline_name> \
    --acoustic_model_name=<acoustic_model_name> \
    --featurizer_name=<featurizer_name> \
    --wfst_tokenizer_model=<wfst_tokenizer_model> \
    --wfst_verbalizer_model=<wfst_verbalizer_model> \


  • <acoustic_model_name> is the user-defined name for the acoustic model component of the ASR pipeline

  • <featurizer_name> is the user-defined name for the feature extractor component of the ASR pipeline

If multiple ASR pipelines are built, each with a different name, but with the same acoustic_model_name and featurizer_name, they will share the same acoustic and feature extractor models.
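
For example, the following two builds (with hypothetical pipeline names and language model files) produce pipelines that share the acoustic model and feature extractor:

riva-build speech_recognition \
    /servicemaker-dev/pipeline_lm1.rmir:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=asr-pipeline-lm1 \
    --acoustic_model_name=shared-conformer-am \
    --featurizer_name=shared-featurizer \
    --decoder_type=flashlight \
    --decoding_language_model_binary=lm1.bin

riva-build speech_recognition \
    /servicemaker-dev/pipeline_lm2.rmir:<encryption_key> \
    /servicemaker-dev/<riva_filename>:<encryption_key> \
    --name=asr-pipeline-lm2 \
    --acoustic_model_name=shared-conformer-am \
    --featurizer_name=shared-featurizer \
    --decoder_type=flashlight \
    --decoding_language_model_binary=lm2.bin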

When running the riva-deploy command, you must pass the -f option to ensure that all the ASR pipelines that share the acoustic model and feature extractor are initialized properly.


<acoustic_model_name> and <featurizer_name> are global and can conflict across model pipelines. Override them only when you know which other models will be deployed and you want to share the featurizer and/or acoustic models across different ASR pipelines. When specifying <acoustic_model_name>, make sure that there will not be any incompatibilities in acoustic model weights or input shapes. Similarly, when specifying <featurizer_name>, make sure that all ASR pipelines with the same <featurizer_name> use the same feature extractor parameters.

Riva-build Optional Parameters#

For details about the parameters passed to riva-build to customize the ASR pipeline, issue:

riva-build speech_recognition -h

The following usage listing shows all optional parameters currently recognized by riva-build:

usage: riva-build speech_recognition [-h] [-f] [-v]
                                     [--language_code LANGUAGE_CODE]
                                     [--max_batch_size MAX_BATCH_SIZE]
                                     [--acoustic_model_name ACOUSTIC_MODEL_NAME]
                                     [--featurizer_name FEATURIZER_NAME]
                                     [--name NAME] [--streaming STREAMING]
                                     [--offline] [--vad_type VAD_TYPE]
                                     [--endpointing_type ENDPOINTING_TYPE]
                                     [--chunk_size CHUNK_SIZE]
                                     [--padding_factor PADDING_FACTOR]
                                     [--left_padding_size LEFT_PADDING_SIZE]
                                     [--right_padding_size RIGHT_PADDING_SIZE]
                                     [--padding_size PADDING_SIZE]
                                     [--max_supported_transcripts MAX_SUPPORTED_TRANSCRIPTS]
                                     [--ms_per_timestep MS_PER_TIMESTEP]
                                     [--force_decoder_reset_after_ms FORCE_DECODER_RESET_AFTER_MS]
                                     [--lattice_beam LATTICE_BEAM]
                                     [--decoding_language_model_arpa DECODING_LANGUAGE_MODEL_ARPA]
                                     [--decoding_language_model_binary DECODING_LANGUAGE_MODEL_BINARY]
                                     [--decoding_language_model_fst DECODING_LANGUAGE_MODEL_FST]
                                     [--decoding_language_model_words DECODING_LANGUAGE_MODEL_WORDS]
                                     [--rescoring_language_model_arpa RESCORING_LANGUAGE_MODEL_ARPA]
                                     [--decoding_language_model_carpa DECODING_LANGUAGE_MODEL_CARPA]
                                     [--rescoring_language_model_carpa RESCORING_LANGUAGE_MODEL_CARPA]
                                     [--decoding_lexicon DECODING_LEXICON]
                                     [--decoding_vocab DECODING_VOCAB]
                                     [--tokenizer_model TOKENIZER_MODEL]
                                     [--decoder_type DECODER_TYPE]
                                     [--stddev_floor STDDEV_FLOOR]
                                     [--wfst_tokenizer_model WFST_TOKENIZER_MODEL]
                                     [--wfst_verbalizer_model WFST_VERBALIZER_MODEL]
                                     [--speech_hints_model SPEECH_HINTS_MODEL]
                                     [--buffer_look_ahead BUFFER_LOOK_AHEAD]
                                     [--buffer_context_history BUFFER_CONTEXT_HISTORY]
                                     [--buffer_threshold BUFFER_THRESHOLD]
                                     [--buffer_max_timeout_frames BUFFER_MAX_TIMEOUT_FRAMES]
                                     [--profane_words_file PROFANE_WORDS_FILE]
                                     [--append_space_to_transcripts APPEND_SPACE_TO_TRANSCRIPTS]
                                     [--return_separate_utterances RETURN_SEPARATE_UTTERANCES]
                                     [--featurizer.max_sequence_idle_microseconds FEATURIZER.MAX_SEQUENCE_IDLE_MICROSECONDS]
                                     [--featurizer.max_batch_size FEATURIZER.MAX_BATCH_SIZE]
                                     [--featurizer.min_batch_size FEATURIZER.MIN_BATCH_SIZE]
                                     [--featurizer.opt_batch_size FEATURIZER.OPT_BATCH_SIZE]
                                     [--featurizer.preferred_batch_size FEATURIZER.PREFERRED_BATCH_SIZE]
                                     [--featurizer.batching_type FEATURIZER.BATCHING_TYPE]
                                     [--featurizer.preserve_ordering FEATURIZER.PRESERVE_ORDERING]
                                     [--featurizer.instance_group_count FEATURIZER.INSTANCE_GROUP_COUNT]
                                     [--featurizer.max_queue_delay_microseconds FEATURIZER.MAX_QUEUE_DELAY_MICROSECONDS]
                                     [--featurizer.optimization_graph_level FEATURIZER.OPTIMIZATION_GRAPH_LEVEL]
                                     [--featurizer.max_execution_batch_size FEATURIZER.MAX_EXECUTION_BATCH_SIZE]
                                     [--featurizer.gain FEATURIZER.GAIN]
                                     [--featurizer.dither FEATURIZER.DITHER]
                                     [--featurizer.use_utterance_norm_params FEATURIZER.USE_UTTERANCE_NORM_PARAMS]
                                     [--featurizer.precalc_norm_time_steps FEATURIZER.PRECALC_NORM_TIME_STEPS]
                                     [--featurizer.precalc_norm_params FEATURIZER.PRECALC_NORM_PARAMS]
                                     [--featurizer.norm_per_feature FEATURIZER.NORM_PER_FEATURE]
                                     [--featurizer.mean FEATURIZER.MEAN]
                                     [--featurizer.stddev FEATURIZER.STDDEV]
                                     [--featurizer.transpose FEATURIZER.TRANSPOSE]
                                     [--featurizer.padding_size FEATURIZER.PADDING_SIZE]
                                     [--nn.max_sequence_idle_microseconds NN.MAX_SEQUENCE_IDLE_MICROSECONDS]
                                     [--nn.max_batch_size NN.MAX_BATCH_SIZE]
                                     [--nn.min_batch_size NN.MIN_BATCH_SIZE]
                                     [--nn.opt_batch_size NN.OPT_BATCH_SIZE]
                                     [--nn.preferred_batch_size NN.PREFERRED_BATCH_SIZE]
                                     [--nn.batching_type NN.BATCHING_TYPE]
                                     [--nn.preserve_ordering NN.PRESERVE_ORDERING]
                                     [--nn.instance_group_count NN.INSTANCE_GROUP_COUNT]
                                     [--nn.max_queue_delay_microseconds NN.MAX_QUEUE_DELAY_MICROSECONDS]
                                     [--nn.optimization_graph_level NN.OPTIMIZATION_GRAPH_LEVEL]
                                     [--nn.trt_max_workspace_size NN.TRT_MAX_WORKSPACE_SIZE]
                                     [--endpointing.max_sequence_idle_microseconds ENDPOINTING.MAX_SEQUENCE_IDLE_MICROSECONDS]
                                     [--endpointing.max_batch_size ENDPOINTING.MAX_BATCH_SIZE]
                                     [--endpointing.min_batch_size ENDPOINTING.MIN_BATCH_SIZE]
                                     [--endpointing.opt_batch_size ENDPOINTING.OPT_BATCH_SIZE]
                                     [--endpointing.preferred_batch_size ENDPOINTING.PREFERRED_BATCH_SIZE]
                                     [--endpointing.batching_type ENDPOINTING.BATCHING_TYPE]
                                     [--endpointing.preserve_ordering ENDPOINTING.PRESERVE_ORDERING]
                                     [--endpointing.instance_group_count ENDPOINTING.INSTANCE_GROUP_COUNT]
                                     [--endpointing.max_queue_delay_microseconds ENDPOINTING.MAX_QUEUE_DELAY_MICROSECONDS]
                                     [--endpointing.optimization_graph_level ENDPOINTING.OPTIMIZATION_GRAPH_LEVEL]
                                     [--endpointing.ms_per_timestep ENDPOINTING.MS_PER_TIMESTEP]
                                     [--endpointing.start_history ENDPOINTING.START_HISTORY]
                                     [--endpointing.stop_history ENDPOINTING.STOP_HISTORY]
                                     [--endpointing.start_th ENDPOINTING.START_TH]
                                     [--endpointing.stop_th ENDPOINTING.STOP_TH]
                                     [--endpointing.residue_blanks_at_start ENDPOINTING.RESIDUE_BLANKS_AT_START]
                                     [--endpointing.residue_blanks_at_end ENDPOINTING.RESIDUE_BLANKS_AT_END]
                                     [--endpointing.vocab_file ENDPOINTING.VOCAB_FILE]
                                     [--neural_vad.max_sequence_idle_microseconds NEURAL_VAD.MAX_SEQUENCE_IDLE_MICROSECONDS]
                                     [--neural_vad.max_batch_size NEURAL_VAD.MAX_BATCH_SIZE]
                                     [--neural_vad.min_batch_size NEURAL_VAD.MIN_BATCH_SIZE]
                                     [--neural_vad.opt_batch_size NEURAL_VAD.OPT_BATCH_SIZE]
                                     [--neural_vad.preferred_batch_size NEURAL_VAD.PREFERRED_BATCH_SIZE]
                                     [--neural_vad.batching_type NEURAL_VAD.BATCHING_TYPE]
                                     [--neural_vad.preserve_ordering NEURAL_VAD.PRESERVE_ORDERING]
                                     [--neural_vad.instance_group_count NEURAL_VAD.INSTANCE_GROUP_COUNT]
                                     [--neural_vad.max_queue_delay_microseconds NEURAL_VAD.MAX_QUEUE_DELAY_MICROSECONDS]
                                     [--neural_vad.optimization_graph_level NEURAL_VAD.OPTIMIZATION_GRAPH_LEVEL]
                                     [--neural_vad.load_model NEURAL_VAD.LOAD_MODEL]
                                     [--neural_vad.batch_mode NEURAL_VAD.BATCH_MODE]
                                     [--neural_vad.decoupled_mode NEURAL_VAD.DECOUPLED_MODE]
                                     [--neural_vad.onset NEURAL_VAD.ONSET]
                                     [--neural_vad.offset NEURAL_VAD.OFFSET]
                                     [--neural_vad.pad_onset NEURAL_VAD.PAD_ONSET]
                                     [--neural_vad.pad_offset NEURAL_VAD.PAD_OFFSET]
                                     [--neural_vad.min_duration_on NEURAL_VAD.MIN_DURATION_ON]
                                     [--neural_vad.min_duration_off NEURAL_VAD.MIN_DURATION_OFF]
                                     [--neural_vad.filter_speech_first NEURAL_VAD.FILTER_SPEECH_FIRST]
                                     [--neural_vad.features_mask_value NEURAL_VAD.FEATURES_MASK_VALUE]
                                     [--neural_vad_nn.max_sequence_idle_microseconds NEURAL_VAD_NN.MAX_SEQUENCE_IDLE_MICROSECONDS]
                                     [--neural_vad_nn.max_batch_size NEURAL_VAD_NN.MAX_BATCH_SIZE]
                                     [--neural_vad_nn.min_batch_size NEURAL_VAD_NN.MIN_BATCH_SIZE]
                                     [--neural_vad_nn.opt_batch_size NEURAL_VAD_NN.OPT_BATCH_SIZE]
                                     [--neural_vad_nn.preferred_batch_size NEURAL_VAD_NN.PREFERRED_BATCH_SIZE]
                                     [--neural_vad_nn.batching_type NEURAL_VAD_NN.BATCHING_TYPE]
                                     [--neural_vad_nn.preserve_ordering NEURAL_VAD_NN.PRESERVE_ORDERING]
                                     [--neural_vad_nn.instance_group_count NEURAL_VAD_NN.INSTANCE_GROUP_COUNT]
                                     [--neural_vad_nn.max_queue_delay_microseconds NEURAL_VAD_NN.MAX_QUEUE_DELAY_MICROSECONDS]
                                     [--neural_vad_nn.optimization_graph_level NEURAL_VAD_NN.OPTIMIZATION_GRAPH_LEVEL]
                                     [--neural_vad_nn.trt_max_workspace_size NEURAL_VAD_NN.TRT_MAX_WORKSPACE_SIZE]
                                     [--neural_vad_nn.min_seq_len NEURAL_VAD_NN.MIN_SEQ_LEN]
                                     [--neural_vad_nn.opt_seq_len NEURAL_VAD_NN.OPT_SEQ_LEN]