 class nemo.collections.asr.models.EncDecCTCModel(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.models.asr_model.ASRModel
nemo.collections.asr.models.asr_model.ExportableEncDecModel
nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin
nemo.collections.asr.parts.mixins.interctc_mixin.InterCTCMixin
nemo.collections.asr.parts.mixins.transcription.ASRTranscriptionMixin
Base class for encoder-decoder CTC-based models.
 change_vocabulary(new_vocabulary: List[str], decoding_cfg: Optional[omegaconf.DictConfig] = None)
Changes the vocabulary used during the CTC decoding process. Use this method when fine-tuning from a pretrained model. It changes only the decoder and leaves the encoder and preprocessing modules unchanged. For example, use it if you want to reuse a pretrained encoder when fine-tuning on data in another language, or when the model needs to learn capitalization, punctuation, and/or special characters.
If new_vocabulary == self.decoder.vocabulary then nothing will be changed.
 Parameters
new_vocabulary – A list with the new vocabulary. Must contain at least 2 elements. Typically, this is the target alphabet.
decoding_cfg – An optional config for CTC decoding. If the decoding type needs to be changed (say from greedy to beam decoding), the config can be passed here.
Returns: None
 register_artifact(config_path: str, src: str, verify_src_exists: bool = True)
Register model artifacts with this function. These artifacts (files) will be included inside .nemo file when model.save_to(“mymodel.nemo”) is called.
How it works:
It always returns an existing absolute path, which can be used during the Model constructor call.
EXCEPTION: if src is None or “”, nothing is done and src is returned as-is.
It adds a (config_path, model_utils.ArtifactItem()) pair to self.artifacts.
If "src" is an existing local path, it is returned in absolute path form; elif "src" starts with "nemo_file:unique_artifact_name", the .nemo file is untarred to a temporary folder and an actual existing path is returned; else an error is raised.
WARNING: use .register_artifact calls in your model’s constructor. The returned path is not guaranteed to exist after you have exited the constructor.
 Parameters
config_path (str) – Artifact key. Usually corresponds to the model config.
src (str) – Path to artifact.
verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.
 Returns
If src is not None or empty, it always returns an absolute path which is guaranteed to exist during the model instance’s life.
 Return type
str
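The resolution rules above can be sketched as a small standalone function. This is an illustrative mimic of the documented behavior, not NeMo's actual implementation; the "nemo_file:" branch is stubbed since real resolution untars the .nemo archive.

```python
import os
import tempfile
from typing import Optional

def resolve_artifact_path(src: str) -> Optional[str]:
    """Illustrative sketch of register_artifact's path resolution rules
    (not NeMo's actual implementation)."""
    if src is None or src == "":
        return src  # EXCEPTION case: nothing is done, src returned as-is
    if os.path.exists(src):
        return os.path.abspath(src)  # existing local path -> absolute form
    if src.startswith("nemo_file:"):
        # In NeMo, the .nemo archive is untarred to a temporary folder and
        # the extracted path returned; stubbed here for illustration.
        return os.path.join(tempfile.gettempdir(), src.split(":", 1)[1])
    raise FileNotFoundError(f"Artifact source not found: {src}")
```

This mirrors why the returned path should only be used inside the constructor: for archive-backed artifacts the path points into a temporary location.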
 setup_optimization(optim_config: Optional[Union[omegaconf.DictConfig, Dict]] = None, optim_kwargs: Optional[Dict[str, Any]] = None)
Prepares an optimizer from a string name and its optional config parameters.
 Parameters
optim_config –
A dictionary containing the following keys:
”lr”: mandatory key for learning rate. Will raise ValueError if not provided.
”optimizer”: string name pointing to one of the available optimizers in the registry. If not provided, defaults to “adam”.
”opt_args”: Optional list of strings, in the format “arg_name=arg_value”. The list of “arg_value” will be parsed and a dictionary of optimizer kwargs will be built and supplied to instantiate the optimizer.
optim_kwargs – A dictionary with additional kwargs for the optimizer. Used for non-primitive types that are not compatible with OmegaConf.
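The "opt_args" parsing described above can be sketched as follows. This is a simplified illustration, not NeMo's actual parser; here literal values are interpreted with ast.literal_eval and unparseable values fall back to raw strings.

```python
import ast

def parse_opt_args(opt_args):
    """Turn ["arg_name=arg_value", ...] strings into an optimizer kwargs dict.
    Simplified sketch of the "opt_args" behavior described above."""
    kwargs = {}
    for item in opt_args:
        name, _, value = item.partition("=")
        try:
            # Handles numbers, booleans, lists, tuples, etc.
            kwargs[name] = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            kwargs[name] = value  # fall back to the raw string
    return kwargs

# Example optim_config shape as described above: "lr" is mandatory,
# "optimizer" defaults to "adam" if omitted.
optim_config = {
    "lr": 1e-3,
    "optimizer": "adam",
    "opt_args": ["weight_decay=0.001", "betas=[0.9, 0.98]"],
}
```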
 setup_test_data(test_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the test data loader via a Dict-like object.
 Parameters
test_data_config – A config that contains the information regarding construction of an ASR test dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the training data loader via a Dict-like object.
 Parameters
train_data_config – A config that contains the information regarding construction of an ASR Training dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_validation_data(val_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the validation data loader via a Dict-like object.
 Parameters
val_data_config – A config that contains the information regarding construction of an ASR validation dataset.
 Supported Datasets:
AudioToCharDALIDataset
 transcribe(audio: Union[str, List[str], torch.Tensor, numpy.ndarray, torch.utils.data.DataLoader], batch_size: int = 4, return_hypotheses: bool = False, num_workers: int = 0, channel_selector: Optional[Union[int, Iterable[int], str]] = None, augmentor: omegaconf.DictConfig = None, verbose: bool = True, override_config: Optional[nemo.collections.asr.parts.mixins.transcription.TranscribeConfig] = None) → Union[List[str], List[Hypothesis], Tuple[List[str]], Tuple[List[Hypothesis]]]
If you modify this function, please remember to update transcribe_partial_audio() in nemo/collections/asr/parts/utils/transcribe_utils.py.
Uses greedy decoding to transcribe audio files. Use this method for debugging and prototyping.
 Parameters
audio – A single path or a list of paths to audio files, or an np.ndarray audio array. Can also be a DataLoader object that provides values that can be consumed by the model. Recommended length per file is between 5 and 25 seconds, but it is possible to pass a file several hours long if enough GPU memory is available.
batch_size – (int) Batch size to use during inference. Larger values improve throughput but use more memory.
return_hypotheses – (bool) Whether to return Hypothesis objects instead of text. With hypotheses, you can do post-processing such as extracting timestamps or rescoring.
num_workers – (int) Number of workers for the DataLoader.
channel_selector (int | Iterable[int] | str) – Select a single channel or a subset of channels from multi-channel audio. If set to ‘average’, it performs averaging across channels. Disabled if set to None. Defaults to None.
augmentor – (DictConfig) Audio augmentation config applied to samples during transcription, if provided.
verbose – (bool) Whether to display a tqdm progress bar.
override_config – (Optional[TranscribeConfig]) A user-provided config that overrides the transcription config. Note: all other arguments to the function are ignored if override_config is passed. Call it as model.transcribe(audio, override_config=TranscribeConfig(…)).
 Returns
A list of transcriptions (or Hypothesis objects if return_hypotheses is True) in the same order as the input audio.
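The channel_selector semantics above can be illustrated on a small multi-channel signal. This is a pure-Python sketch of the selection rules, not NeMo's implementation (which operates on audio arrays):

```python
def select_channels(samples, channel_selector=None):
    """samples: list of frames, one value per channel per frame.
    Sketch of the channel_selector rules: int -> one channel (zero-based),
    iterable -> subset of channels, 'average' -> mean across channels,
    None -> disabled (samples unchanged)."""
    if channel_selector is None:
        return samples
    if channel_selector == "average":
        return [sum(frame) / len(frame) for frame in samples]
    if isinstance(channel_selector, int):
        return [frame[channel_selector] for frame in samples]
    return [[frame[c] for c in channel_selector] for frame in samples]

# A tiny stereo signal: 3 frames x 2 channels.
stereo = [[0.0, 1.0], [0.2, 0.8], [1.0, 0.0]]
```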
 class nemo.collections.asr.models.EncDecCTCModelBPE(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.models.ctc_models.EncDecCTCModel
nemo.collections.asr.parts.mixins.mixins.ASRBPEMixin
Encoder-decoder CTC-based model with Byte-Pair Encoding.
 change_vocabulary(new_tokenizer_dir: Union[str, omegaconf.DictConfig], new_tokenizer_type: str, decoding_cfg: Optional[omegaconf.DictConfig] = None)
Changes the vocabulary of the tokenizer used during the CTC decoding process. Use this method when fine-tuning from a pretrained model. It changes only the decoder and leaves the encoder and preprocessing modules unchanged. For example, use it if you want to reuse a pretrained encoder when fine-tuning on data in another language, or when the model needs to learn capitalization, punctuation, and/or special characters.
 Parameters
new_tokenizer_dir – Directory path to the tokenizer, or a config for a new tokenizer (if the tokenizer type is agg).
new_tokenizer_type – Either agg, bpe, or wpe. bpe is used for SentencePiece tokenizers, whereas wpe is used for BertTokenizer.
new_tokenizer_cfg – A config for the new tokenizer. If provided, it preempts the directory and type arguments.
Returns: None
 register_artifact(config_path: str, src: str, verify_src_exists: bool = True)
Register model artifacts with this function. These artifacts (files) will be included inside .nemo file when model.save_to(“mymodel.nemo”) is called.
How it works:
It always returns an existing absolute path, which can be used during the Model constructor call.
EXCEPTION: if src is None or “”, nothing is done and src is returned as-is.
It adds a (config_path, model_utils.ArtifactItem()) pair to self.artifacts.
If "src" is an existing local path, it is returned in absolute path form; elif "src" starts with "nemo_file:unique_artifact_name", the .nemo file is untarred to a temporary folder and an actual existing path is returned; else an error is raised.
WARNING: use .register_artifact calls in your model’s constructor. The returned path is not guaranteed to exist after you have exited the constructor.
 Parameters
config_path (str) – Artifact key. Usually corresponds to the model config.
src (str) – Path to artifact.
verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.
 Returns
If src is not None or empty, it always returns an absolute path which is guaranteed to exist during the model instance’s life.
 Return type
str
 setup_optimization(optim_config: Optional[Union[omegaconf.DictConfig, Dict]] = None, optim_kwargs: Optional[Dict[str, Any]] = None)
Prepares an optimizer from a string name and its optional config parameters.
 Parameters
optim_config –
A dictionary containing the following keys:
”lr”: mandatory key for learning rate. Will raise ValueError if not provided.
”optimizer”: string name pointing to one of the available optimizers in the registry. If not provided, defaults to “adam”.
”opt_args”: Optional list of strings, in the format “arg_name=arg_value”. The list of “arg_value” will be parsed and a dictionary of optimizer kwargs will be built and supplied to instantiate the optimizer.
optim_kwargs – A dictionary with additional kwargs for the optimizer. Used for non-primitive types that are not compatible with OmegaConf.
 setup_test_data(test_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the test data loader via a Dict-like object.
 Parameters
test_data_config – A config that contains the information regarding construction of an ASR test dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the training data loader via a Dict-like object.
 Parameters
train_data_config – A config that contains the information regarding construction of an ASR Training dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_validation_data(val_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the validation data loader via a Dict-like object.
 Parameters
val_data_config – A config that contains the information regarding construction of an ASR validation dataset.
 Supported Datasets:
AudioToCharDALIDataset
 transcribe(audio: Union[str, List[str], torch.Tensor, numpy.ndarray, torch.utils.data.DataLoader], batch_size: int = 4, return_hypotheses: bool = False, num_workers: int = 0, channel_selector: Optional[Union[int, Iterable[int], str]] = None, augmentor: omegaconf.DictConfig = None, verbose: bool = True, override_config: Optional[nemo.collections.asr.parts.mixins.transcription.TranscribeConfig] = None) → Union[List[str], List[Hypothesis], Tuple[List[str]], Tuple[List[Hypothesis]]]
If you modify this function, please remember to update transcribe_partial_audio() in nemo/collections/asr/parts/utils/transcribe_utils.py.
Uses greedy decoding to transcribe audio files. Use this method for debugging and prototyping.
 Parameters
audio – A single path or a list of paths to audio files, or an np.ndarray audio array. Can also be a DataLoader object that provides values that can be consumed by the model. Recommended length per file is between 5 and 25 seconds, but it is possible to pass a file several hours long if enough GPU memory is available.
batch_size – (int) Batch size to use during inference. Larger values improve throughput but use more memory.
return_hypotheses – (bool) Whether to return Hypothesis objects instead of text. With hypotheses, you can do post-processing such as extracting timestamps or rescoring.
num_workers – (int) Number of workers for the DataLoader.
channel_selector (int | Iterable[int] | str) – Select a single channel or a subset of channels from multi-channel audio. If set to ‘average’, it performs averaging across channels. Disabled if set to None. Defaults to None.
augmentor – (DictConfig) Audio augmentation config applied to samples during transcription, if provided.
verbose – (bool) Whether to display a tqdm progress bar.
override_config – (Optional[TranscribeConfig]) A user-provided config that overrides the transcription config. Note: all other arguments to the function are ignored if override_config is passed. Call it as model.transcribe(audio, override_config=TranscribeConfig(…)).
 Returns
A list of transcriptions (or Hypothesis objects if return_hypotheses is True) in the same order as the input audio.
 class nemo.collections.asr.models.EncDecRNNTModel(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.models.asr_model.ASRModel
nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin
nemo.collections.asr.models.asr_model.ExportableEncDecModel
nemo.collections.asr.parts.mixins.transcription.ASRTranscriptionMixin
Base class for encoder-decoder RNNT-based models.
 change_vocabulary(new_vocabulary: List[str], decoding_cfg: Optional[omegaconf.DictConfig] = None)
Changes the vocabulary used during the RNNT decoding process. Use this method when fine-tuning a pretrained model. It changes only the decoder and leaves the encoder and preprocessing modules unchanged. For example, use it if you want to reuse a pretrained encoder when fine-tuning on data in another language, or when the model needs to learn capitalization, punctuation, and/or special characters.
 Parameters
new_vocabulary – A list with the new vocabulary. Must contain at least 2 elements. Typically, this is the target alphabet.
decoding_cfg – An optional config for the decoder. If the decoding type needs to be changed (say from greedy to beam decoding), the config can be passed here.
Returns: None
 register_artifact(config_path: str, src: str, verify_src_exists: bool = True)
Register model artifacts with this function. These artifacts (files) will be included inside .nemo file when model.save_to(“mymodel.nemo”) is called.
How it works:
It always returns an existing absolute path, which can be used during the Model constructor call.
EXCEPTION: if src is None or “”, nothing is done and src is returned as-is.
It adds a (config_path, model_utils.ArtifactItem()) pair to self.artifacts.
If "src" is an existing local path, it is returned in absolute path form; elif "src" starts with "nemo_file:unique_artifact_name", the .nemo file is untarred to a temporary folder and an actual existing path is returned; else an error is raised.
WARNING: use .register_artifact calls in your model’s constructor. The returned path is not guaranteed to exist after you have exited the constructor.
 Parameters
config_path (str) – Artifact key. Usually corresponds to the model config.
src (str) – Path to artifact.
verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.
 Returns
If src is not None or empty, it always returns an absolute path which is guaranteed to exist during the model instance’s life.
 Return type
str
 setup_optimization(optim_config: Optional[Union[omegaconf.DictConfig, Dict]] = None, optim_kwargs: Optional[Dict[str, Any]] = None)
Prepares an optimizer from a string name and its optional config parameters.
 Parameters
optim_config –
A dictionary containing the following keys:
”lr”: mandatory key for learning rate. Will raise ValueError if not provided.
”optimizer”: string name pointing to one of the available optimizers in the registry. If not provided, defaults to “adam”.
”opt_args”: Optional list of strings, in the format “arg_name=arg_value”. The list of “arg_value” will be parsed and a dictionary of optimizer kwargs will be built and supplied to instantiate the optimizer.
optim_kwargs – A dictionary with additional kwargs for the optimizer. Used for non-primitive types that are not compatible with OmegaConf.
 setup_test_data(test_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the test data loader via a Dict-like object.
 Parameters
test_data_config – A config that contains the information regarding construction of an ASR test dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the training data loader via a Dict-like object.
 Parameters
train_data_config – A config that contains the information regarding construction of an ASR Training dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_validation_data(val_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the validation data loader via a Dict-like object.
 Parameters
val_data_config – A config that contains the information regarding construction of an ASR validation dataset.
 Supported Datasets:
AudioToCharDALIDataset
 transcribe(audio: Union[str, List[str], numpy.ndarray, torch.utils.data.DataLoader], batch_size: int = 4, return_hypotheses: bool = False, partial_hypothesis: Optional[List[Hypothesis]] = None, num_workers: int = 0, channel_selector: Optional[Union[int, Iterable[int], str]] = None, augmentor: omegaconf.DictConfig = None, verbose: bool = True, override_config: Optional[nemo.collections.asr.parts.mixins.transcription.TranscribeConfig] = None) → Union[List[str], List[Hypothesis], Tuple[List[str]], Tuple[List[Hypothesis]]]
Uses greedy decoding to transcribe audio files. Use this method for debugging and prototyping.
 Parameters
audio – A single path or a list of paths to audio files, or an np.ndarray audio array. Can also be a DataLoader object that provides values that can be consumed by the model. Recommended length per file is between 5 and 25 seconds, but it is possible to pass a file several hours long if enough GPU memory is available.
batch_size – (int) Batch size to use during inference. Larger values improve throughput but use more memory.
return_hypotheses – (bool) Whether to return Hypothesis objects instead of text. With hypotheses, you can do post-processing such as extracting timestamps or rescoring.
partial_hypothesis – (Optional[List[Hypothesis]]) A list of partial hypotheses to be used during RNNT decoding. This is useful for streaming RNNT decoding. If this is not None, the length of this list should equal the length of the audio list.
num_workers – (int) Number of workers for the DataLoader.
channel_selector (int | Iterable[int] | str) – Select a single channel or a subset of channels from multi-channel audio. If set to ‘average’, it performs averaging across channels. Disabled if set to None. Defaults to None. Uses zero-based indexing.
augmentor – (DictConfig) Audio augmentation config applied to samples during transcription, if provided.
verbose – (bool) Whether to display a tqdm progress bar.
override_config – (Optional[TranscribeConfig]) A user-provided config that overrides the transcription config. Note: all other arguments to the function are ignored if override_config is passed. Call it as model.transcribe(audio, override_config=TranscribeConfig(…)).
 Returns
Returns a tuple of 2 items:
A list of greedy transcript texts / Hypothesis objects.
An optional list of beam search transcript texts / Hypothesis / NBestHypothesis objects.
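The two-item return described above might be handled as in the sketch below. The values here are stand-ins for illustration; real calls return strings or Hypothesis objects from the model:

```python
def unpack_transcriptions(result):
    """Normalize the (greedy, beam) tuple described above into a dict.
    Sketch only: beam is None when beam search decoding is not used."""
    greedy, beam = result
    return {"greedy": greedy, "beam": beam}

# Stand-in for an RNNT model.transcribe(...) result with beam search disabled:
result = (["hello world", "good morning"], None)
```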
 class nemo.collections.asr.models.EncDecRNNTBPEModel(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.models.rnnt_models.EncDecRNNTModel
nemo.collections.asr.parts.mixins.mixins.ASRBPEMixin
Base class for encoder-decoder RNNT-based models with subword tokenization.
 change_vocabulary(new_tokenizer_dir: Union[str, omegaconf.DictConfig], new_tokenizer_type: str, decoding_cfg: Optional[omegaconf.DictConfig] = None)
Changes the vocabulary used during the RNNT decoding process. Use this method when fine-tuning from a pretrained model. It changes only the decoder and leaves the encoder and preprocessing modules unchanged. For example, use it if you want to reuse a pretrained encoder when fine-tuning on data in another language, or when the model needs to learn capitalization, punctuation, and/or special characters.
 Parameters
new_tokenizer_dir – Directory path to the tokenizer, or a config for a new tokenizer (if the tokenizer type is agg).
new_tokenizer_type – Type of tokenizer. Can be either agg, bpe or wpe.
decoding_cfg – A config for the decoder, which is optional. If the decoding type needs to be changed (from say Greedy to Beam decoding etc), the config can be passed here.
Returns: None
 register_artifact(config_path: str, src: str, verify_src_exists: bool = True)
Register model artifacts with this function. These artifacts (files) will be included inside .nemo file when model.save_to(“mymodel.nemo”) is called.
How it works:
It always returns an existing absolute path, which can be used during the Model constructor call.
EXCEPTION: if src is None or “”, nothing is done and src is returned as-is.
It adds a (config_path, model_utils.ArtifactItem()) pair to self.artifacts.
If "src" is an existing local path, it is returned in absolute path form; elif "src" starts with "nemo_file:unique_artifact_name", the .nemo file is untarred to a temporary folder and an actual existing path is returned; else an error is raised.
WARNING: use .register_artifact calls in your model’s constructor. The returned path is not guaranteed to exist after you have exited the constructor.
 Parameters
config_path (str) – Artifact key. Usually corresponds to the model config.
src (str) – Path to artifact.
verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.
 Returns
If src is not None or empty, it always returns an absolute path which is guaranteed to exist during the model instance’s life.
 Return type
str
 setup_optimization(optim_config: Optional[Union[omegaconf.DictConfig, Dict]] = None, optim_kwargs: Optional[Dict[str, Any]] = None)
Prepares an optimizer from a string name and its optional config parameters.
 Parameters
optim_config –
A dictionary containing the following keys:
”lr”: mandatory key for learning rate. Will raise ValueError if not provided.
”optimizer”: string name pointing to one of the available optimizers in the registry. If not provided, defaults to “adam”.
”opt_args”: Optional list of strings, in the format “arg_name=arg_value”. The list of “arg_value” will be parsed and a dictionary of optimizer kwargs will be built and supplied to instantiate the optimizer.
optim_kwargs – A dictionary with additional kwargs for the optimizer. Used for non-primitive types that are not compatible with OmegaConf.
 setup_test_data(test_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the test data loader via a Dict-like object.
 Parameters
test_data_config – A config that contains the information regarding construction of an ASR test dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the training data loader via a Dict-like object.
 Parameters
train_data_config – A config that contains the information regarding construction of an ASR Training dataset.
 Supported Datasets:
AudioToCharDALIDataset
 setup_validation_data(val_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the validation data loader via a Dict-like object.
 Parameters
val_data_config – A config that contains the information regarding construction of an ASR validation dataset.
 Supported Datasets:
AudioToCharDALIDataset
 transcribe(audio: Union[str, List[str], numpy.ndarray, torch.utils.data.DataLoader], batch_size: int = 4, return_hypotheses: bool = False, partial_hypothesis: Optional[List[Hypothesis]] = None, num_workers: int = 0, channel_selector: Optional[Union[int, Iterable[int], str]] = None, augmentor: omegaconf.DictConfig = None, verbose: bool = True, override_config: Optional[nemo.collections.asr.parts.mixins.transcription.TranscribeConfig] = None) → Union[List[str], List[Hypothesis], Tuple[List[str]], Tuple[List[Hypothesis]]]
Uses greedy decoding to transcribe audio files. Use this method for debugging and prototyping.
 Parameters
audio – A single path or a list of paths to audio files, or an np.ndarray audio array. Can also be a DataLoader object that provides values that can be consumed by the model. Recommended length per file is between 5 and 25 seconds, but it is possible to pass a file several hours long if enough GPU memory is available.
batch_size – (int) Batch size to use during inference. Larger values improve throughput but use more memory.
return_hypotheses – (bool) Whether to return Hypothesis objects instead of text. With hypotheses, you can do post-processing such as extracting timestamps or rescoring.
partial_hypothesis – (Optional[List[Hypothesis]]) A list of partial hypotheses to be used during RNNT decoding. This is useful for streaming RNNT decoding. If this is not None, the length of this list should equal the length of the audio list.
num_workers – (int) Number of workers for the DataLoader.
channel_selector (int | Iterable[int] | str) – Select a single channel or a subset of channels from multi-channel audio. If set to ‘average’, it performs averaging across channels. Disabled if set to None. Defaults to None. Uses zero-based indexing.
augmentor – (DictConfig) Audio augmentation config applied to samples during transcription, if provided.
verbose – (bool) Whether to display a tqdm progress bar.
override_config – (Optional[TranscribeConfig]) A user-provided config that overrides the transcription config. Note: all other arguments to the function are ignored if override_config is passed. Call it as model.transcribe(audio, override_config=TranscribeConfig(…)).
 Returns
Returns a tuple of 2 items:
A list of greedy transcript texts / Hypothesis objects.
An optional list of beam search transcript texts / Hypothesis / NBestHypothesis objects.
 class nemo.collections.asr.models.EncDecClassificationModel(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.models.classification_models._EncDecBaseModel
Encoder-decoder classification model.
 register_artifact(config_path: str, src: str, verify_src_exists: bool = True)
Register model artifacts with this function. These artifacts (files) will be included inside .nemo file when model.save_to(“mymodel.nemo”) is called.
How it works:
It always returns an existing absolute path, which can be used during the Model constructor call.
EXCEPTION: if src is None or “”, nothing is done and src is returned as-is.
It adds a (config_path, model_utils.ArtifactItem()) pair to self.artifacts.
If "src" is an existing local path, it is returned in absolute path form; elif "src" starts with "nemo_file:unique_artifact_name", the .nemo file is untarred to a temporary folder and an actual existing path is returned; else an error is raised.
WARNING: use .register_artifact calls in your model’s constructor. The returned path is not guaranteed to exist after you have exited the constructor.
 Parameters
config_path (str) – Artifact key. Usually corresponds to the model config.
src (str) – Path to artifact.
verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.
 Returns
If src is not None or empty, it always returns an absolute path which is guaranteed to exist during the model instance’s life.
 Return type
str
 setup_optimization(optim_config: Optional[Union[omegaconf.DictConfig, Dict]] = None, optim_kwargs: Optional[Dict[str, Any]] = None)
Prepares an optimizer from a string name and its optional config parameters.
 Parameters
optim_config –
A dictionary containing the following keys:
”lr”: mandatory key for learning rate. Will raise ValueError if not provided.
”optimizer”: string name pointing to one of the available optimizers in the registry. If not provided, defaults to “adam”.
”opt_args”: Optional list of strings, in the format “arg_name=arg_value”. The list of “arg_value” will be parsed and a dictionary of optimizer kwargs will be built and supplied to instantiate the optimizer.
optim_kwargs – A dictionary with additional kwargs for the optimizer. Used for non-primitive types that are not compatible with OmegaConf.
 setup_test_data(test_data_config: Optional[Union[omegaconf.DictConfig, Dict]], use_feat: bool = False)
(Optionally) Sets up the data loader to be used in testing.
 Parameters
test_data_config – test data layer parameters.
Returns: None
 setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the data loader to be used in training.
 Parameters
train_data_config – training data layer parameters.
Returns: None
 setup_validation_data(val_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the data loader to be used in validation.
 Parameters
val_data_config – validation data layer parameters.
Returns: None
 class nemo.collections.asr.models.EncDecSpeakerLabelModel(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.modelPT.ModelPT
nemo.collections.asr.models.asr_model.ExportableEncDecModel
Encoder-decoder class for speaker label models. The model class creates training and validation methods for setting up data and performing the model forward pass. Expects a config dict for:
preprocessor
Jasper/QuartzNet encoder
speaker decoder
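The three expected config sections might look like the following skeleton. The key names and fields here are illustrative assumptions based only on the description above, not a verified NeMo schema; consult an actual speaker-label model config for the real field names.

```python
# Illustrative skeleton of the config sections listed above.
# Key names and inner fields are assumptions, not a verified NeMo schema.
speaker_model_cfg = {
    "preprocessor": {"sample_rate": 16000},  # audio preprocessor settings
    "encoder": {"feat_in": 64},              # Jasper/QuartzNet encoder settings
    "decoder": {"num_classes": 2},           # speaker decoder settings
}
```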
 register_artifact(config_path: str, src: str, verify_src_exists: bool = True)
Register model artifacts with this function. These artifacts (files) will be included inside .nemo file when model.save_to(“mymodel.nemo”) is called.
How it works:
It always returns an existing absolute path, which can be used during the Model constructor call.
EXCEPTION: if src is None or “”, nothing is done and src is returned as-is.
It adds a (config_path, model_utils.ArtifactItem()) pair to self.artifacts.
If "src" is an existing local path, it is returned in absolute path form; elif "src" starts with "nemo_file:unique_artifact_name", the .nemo file is untarred to a temporary folder and an actual existing path is returned; else an error is raised.
WARNING: use .register_artifact calls in your model’s constructor. The returned path is not guaranteed to exist after you have exited the constructor.
 Parameters
config_path (str) – Artifact key. Usually corresponds to the model config.
src (str) – Path to artifact.
verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.
 Returns
If src is not None or empty, it always returns an absolute path which is guaranteed to exist during the model instance’s life.
 Return type
str
 setup_optimization(optim_config: Optional[Union[omegaconf.DictConfig, Dict]] = None, optim_kwargs: Optional[Dict[str, Any]] = None)
Prepares an optimizer from a string name and its optional config parameters.
 Parameters
optim_config –
A dictionary containing the following keys:
”lr”: mandatory key for learning rate. Will raise ValueError if not provided.
”optimizer”: string name pointing to one of the available optimizers in the registry. If not provided, defaults to “adam”.
”opt_args”: Optional list of strings, in the format “arg_name=arg_value”. The list of “arg_value” will be parsed and a dictionary of optimizer kwargs will be built and supplied to instantiate the optimizer.
optim_kwargs – A dictionary with additional kwargs for the optimizer. Used for non-primitive types that are not compatible with OmegaConf.
 setup_test_data(test_data_layer_params: Optional[Union[omegaconf.DictConfig, Dict]])
(Optionally) Sets up the data loader to be used in testing.
 Parameters
test_data_layer_params – test data layer parameters.
Returns: None
 setup_training_data(train_data_layer_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the data loader to be used in training.
 Parameters
train_data_layer_config – training data layer parameters.
Returns: None
 setup_validation_data(val_data_layer_config: Optional[Union[omegaconf.DictConfig, Dict]])
Sets up the data loader to be used in validation. :param val_data_layer_config: validation data layer parameters.
Returns: None
 class nemo.collections.asr.models.hybrid_asr_tts_models.ASRWithTTSModel(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.models.asr_model.ASRModel
Hybrid ASR-TTS model: a transparent wrapper for an ASR model with a frozen pretrained text-to-spectrogram model, which allows using text-only data for training/fine-tuning. Text-only data can be mixed with audio-text pairs.
 classmethod from_asr_config(asr_cfg: omegaconf.DictConfig, asr_model_type: Union[str, nemo.collections.asr.models.hybrid_asr_tts_models.ASRWithTTSModel.ASRModelTypes], tts_model_path: Union[str, pathlib.Path], enhancer_model_path: Optional[Union[pathlib.Path, str]] = None, trainer: Optional[pytorch_lightning.Trainer] = None)
Construct a model from an ASR config for training from scratch.
 classmethod from_pretrained_models(asr_model_path: Union[str, pathlib.Path], tts_model_path: Union[str, pathlib.Path], enhancer_model_path: Optional[Union[pathlib.Path, str]] = None, asr_model_fuse_bn: bool = False, cfg: Optional[omegaconf.DictConfig] = None, trainer: Optional[pytorch_lightning.Trainer] = None)
Load model from pretrained ASR and TTS models :param asr_model_path: path to a .nemo ASR model checkpoint :param tts_model_path: path to a .nemo TTS model checkpoint :param enhancer_model_path: path to a .nemo enhancer model checkpoint :param asr_model_fuse_bn: automatically fuse batch-norm layers in the ASR model :param cfg: optional config for the hybrid model :param trainer: PyTorch Lightning trainer
 Returns
ASRWithTTSModel instance
 save_asr_model_to(save_path: str)
Save ASR model separately
 setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])
Setup training data from config: text-only, audio-text or mixed data.
 class nemo.collections.asr.models.confidence_ensemble.ConfidenceEnsembleModel(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.modelPT.ModelPT
Implementation of the confidence ensemble model.
See https://arxiv.org/abs/2306.15824 for details.
Note: currently this class only supports the transcribe method, as it requires full-utterance confidence scores to operate.
 transcribe(paths2audio_files: List[str], batch_size: int = 4, return_hypotheses: bool = False, num_workers: int = 0, channel_selector: Optional[Union[int, Iterable[int], str]] = None, augmentor: Optional[omegaconf.DictConfig] = None, verbose: bool = True, **kwargs) → List[str]
Confidence-ensemble transcribe method.
Consists of the following steps:
Run all models (TODO: in parallel)
Compute confidence for each model
Use logistic regression to pick the “most confident” model
Return the output of that model
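The selection step can be sketched in plain Python. `pick_most_confident` is an illustrative stand-in: the actual model applies logistic regression over confidence features, whereas this sketch just takes the maximum score:

```python
def pick_most_confident(transcripts, confidences):
    """Given one transcript and one confidence score per ensemble member,
    return the transcript of the most confident model (argmax stand-in
    for the logistic-regression selection described above)."""
    best = max(range(len(confidences)), key=lambda i: confidences[i])
    return transcripts[best]
```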
 class nemo.collections.asr.modules.ConvASREncoder(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
,nemo.core.classes.exportable.Exportable
,nemo.core.classes.mixins.access_mixins.AccessMixin
Convolutional encoder for ASR models. With this class you can implement JasperNet and QuartzNet models.
 Based on these papers:
https://arxiv.org/pdf/1904.03288.pdf https://arxiv.org/pdf/1910.10261.pdf
 input_example(max_batch=1, max_dim=8192)
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Returns definitions of module input ports.
 property output_types
Returns definitions of module output ports.
 class nemo.collections.asr.modules.ConvASRDecoder(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
,nemo.core.classes.exportable.Exportable
,nemo.core.classes.mixins.adapter_mixins.AdapterModuleMixin
Simple ASR Decoder for use with CTC-based models such as JasperNet and QuartzNet
 Based on these papers:
https://arxiv.org/pdf/1904.03288.pdf https://arxiv.org/pdf/1910.10261.pdf https://arxiv.org/pdf/2005.04290.pdf
 add_adapter(name: str, cfg: omegaconf.DictConfig)
Add an Adapter module to this module.
 Parameters
name – A globally unique name for the adapter. Will be used to access, enable and disable adapters.
cfg – A DictConfig or Dataclass that contains at the bare minimum __target__ to instantiate a new Adapter module.
 input_example(max_batch=1, max_dim=256)
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Define these to enable input neural type checks
 property output_types
Define these to enable output neural type checks
 class nemo.collections.asr.modules.ConvASRDecoderClassification(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
,nemo.core.classes.exportable.Exportable
Simple ASR Decoder for use with classification models such as JasperNet and QuartzNet
 Based on these papers:
 input_example(max_batch=1, max_dim=256)
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Define these to enable input neural type checks
 property output_types
Define these to enable output neural type checks
 class nemo.collections.asr.modules.SpeakerDecoder(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
,nemo.core.classes.exportable.Exportable
Speaker Decoder creates the final neural layers that map the outputs of the Jasper Encoder to the embedding layer, followed by a speaker-based softmax loss.
 Parameters
feat_in (int) – Number of channels being input to this module
num_classes (int) – Number of unique speakers in dataset
emb_sizes (list) – shapes of intermediate embedding layers (speaker embeddings are taken from the first of these layers). Defaults to [1024, 1024].
pool_mode (str) – Pooling strategy type. Options are ‘xvector’ (mean and variance), ‘tap’ (temporal average pooling: just the mean), ‘attention’ (attention-based pooling). Defaults to ‘xvector’.
init_mode (str) – Describes how neural network parameters are initialized. Options are [‘xavier_uniform’, ‘xavier_normal’, ‘kaiming_uniform’, ‘kaiming_normal’]. Defaults to “xavier_uniform”.
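The ‘tap’ and ‘xvector’ pooling strategies can be illustrated on a toy frame sequence. `pool` is a plain-Python sketch, not the NeMo implementation; ‘attention’ pooling is omitted since it is learned:

```python
def pool(features, mode="xvector"):
    """features: list of per-frame feature vectors (list of lists).
    'tap'     -> temporal mean over frames
    'xvector' -> concatenation of temporal mean and standard deviation
    """
    n = len(features)
    dim = len(features[0])
    mean = [sum(f[d] for f in features) / n for d in range(dim)]
    if mode == "tap":
        return mean
    if mode == "xvector":
        var = [sum((f[d] - mean[d]) ** 2 for f in features) / n for d in range(dim)]
        return mean + [v ** 0.5 for v in var]  # mean ++ std
    raise ValueError(f"unknown pool_mode: {mode}")
```

Note the output dimension doubles for ‘xvector’ relative to ‘tap’.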
 input_example(max_batch=1, max_dim=256)
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Define these to enable input neural type checks
 property output_types
Define these to enable output neural type checks
 class nemo.collections.asr.modules.ConformerEncoder(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
,nemo.collections.asr.parts.mixins.streaming.StreamingEncoder
,nemo.core.classes.exportable.Exportable
,nemo.core.classes.mixins.access_mixins.AccessMixin
The encoder for Conformer ASR models. Based on this paper: ‘Conformer: Convolution-augmented Transformer for Speech Recognition’ by Anmol Gulati et al. https://arxiv.org/abs/2005.08100
 Parameters
feat_in (int) – the size of feature channels
n_layers (int) – number of layers of ConformerBlock
d_model (int) – the hidden size of the model
feat_out (int) – the size of the output features. Defaults to -1 (means feat_out is d_model).
subsampling (str) – the method of subsampling, choices=[‘vggnet’, ‘striding’, ‘dw_striding’, ‘stacking’, ‘stacking_norm’]. Defaults to striding.
subsampling_factor (int) – the subsampling factor, which should be a power of 2. Defaults to 4.
subsampling_conv_chunking_factor (int) – optionally, force chunking of inputs (helpful for large inputs). Should be a power of 2, 1 (auto chunking, default), or -1 (no chunking).
subsampling_conv_channels (int) – the size of the convolutions in the subsampling module. Defaults to -1, which sets it to d_model.
reduction (str, Optional) – the method of reduction, choices=[‘pooling’, ‘striding’]. If no value is passed, then no reduction is performed and the model runs with the original 4x subsampling.
reduction_position (int, Optional) – the index of the layer to apply reduction at. If -1, apply reduction at the end.
reduction_factor (int) – the reduction factor which should be either 1 or a power of 2 Defaults to 1.
ff_expansion_factor (int) – the expansion factor in feed forward layers Defaults to 4.
self_attention_model (str) –
type of the attention layer and positional encoding:
‘rel_pos’: relative positional embedding and Transformer-XL
‘rel_pos_local_attn’: relative positional embedding and Transformer-XL with local attention using overlapping chunks. Attention context is determined by the att_context_size parameter.
‘abs_pos’: absolute positional embedding and Transformer
Default is rel_pos.
pos_emb_max_len (int) – the maximum length of positional embeddings Defaults to 5000
n_heads (int) – number of heads in multiheaded attention layers Defaults to 4.
att_context_size (List[Union[List[int],int]]) – specifies the context sizes on each side. Each context size should be a list of two integers like [100,100]. A list of context sizes like [[100,100],[100,50]] can also be passed. -1 means unlimited context. Defaults to [-1,-1].
att_context_probs (List[float]) – a list of probabilities for each att_context_size when a list of them is passed. If not specified, a uniform distribution is used. Defaults to None.
att_context_style (str) – ‘regular’ or ‘chunked_limited’. Defaults to ‘regular’
xscaling (bool) – enables scaling the inputs to the multiheaded attention layers by sqrt(d_model) Defaults to True.
untie_biases (bool) – whether to not share (untie) the bias weights between layers of TransformerXL Defaults to True.
conv_kernel_size (int) – the size of the convolutions in the convolutional modules Defaults to 31.
conv_norm_type (str) – the type of the normalization in the convolutional modules Defaults to ‘batch_norm’.
conv_context_size (list) – can be “causal” or a list of two integers such that conv_context_size[0] + conv_context_size[1] + 1 == conv_kernel_size. None means [(conv_kernel_size - 1) // 2, (conv_kernel_size - 1) // 2], and ‘causal’ means [conv_kernel_size - 1, 0]. Defaults to None.
conv_dual_mode (bool) – specifies if convolution should be dual mode when dual_offline mode is being used. When enabled, the left half of the convolution kernel is masked in streaming cases. Defaults to False.
dropout (float) – the dropout rate used in all layers except the attention layers Defaults to 0.1.
dropout_pre_encoder (float) – the dropout rate used before the encoder Defaults to 0.1.
dropout_emb (float) – the dropout rate used for the positional embeddings Defaults to 0.1.
dropout_att (float) – the dropout rate used for the attention layer Defaults to 0.0.
stochastic_depth_drop_prob (float) – if non-zero, will randomly drop layers during training. The higher this value, the more often layers are dropped. Defaults to 0.0.
stochastic_depth_mode (str) – can be either “linear” or “uniform”. If set to “uniform”, all layers have the same probability of drop. If set to “linear”, the drop probability grows linearly from 0 for the first layer to the desired value for the final layer. Defaults to “linear”.
stochastic_depth_start_layer (int) – starting layer for stochastic depth. All layers before this will never be dropped. Note that drop probability will be adjusted accordingly if mode is “linear” when start layer is > 1. Defaults to 1.
global_tokens (int) – number of tokens to be used for global attention. Only relevant if self_attention_model is ‘rel_pos_local_attn’. Defaults to 0.
global_tokens_spacing (int) – how far apart the global tokens are Defaults to 1.
global_attn_separate (bool) – whether the q, k, v layers used for global tokens should be separate. Defaults to False.
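The conv_context_size rules above can be expressed directly. `resolve_conv_context` is an illustrative helper restating the documented defaults, not NeMo code:

```python
def resolve_conv_context(conv_context_size, conv_kernel_size):
    """Resolve conv_context_size per the documented rules:
    None     -> symmetric context [(k-1)//2, (k-1)//2]
    'causal' -> all-left context [k-1, 0]
    [l, r]   -> validated against l + r + 1 == k
    """
    if conv_context_size is None:
        half = (conv_kernel_size - 1) // 2
        return [half, half]
    if conv_context_size == "causal":
        return [conv_kernel_size - 1, 0]
    left, right = conv_context_size
    if left + right + 1 != conv_kernel_size:
        raise ValueError("conv_context_size must satisfy l + r + 1 == kernel size")
    return [left, right]
```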
 change_attention_model(self_attention_model: Optional[str] = None, att_context_size: Optional[List[int]] = None, update_config: bool = True, device: Optional[torch.device] = None)
Update the self_attention_model which changes the positional encoding and attention layers.
 Parameters
self_attention_model (str) –
type of the attention layer and positional encoding:
‘rel_pos’: relative positional embedding and Transformer-XL
‘rel_pos_local_attn’: relative positional embedding and Transformer-XL with local attention using overlapping windows. Attention context is determined by the att_context_size parameter.
‘abs_pos’: absolute positional embedding and Transformer
If None is provided, the self_attention_model isn’t changed. Defaults to None.
att_context_size (List[int]) – List of 2 ints corresponding to left and right attention context sizes, or None to keep as it is. Defaults to None.
update_config (bool) – Whether to update the config or not with the new attention model. Defaults to True.
device (torch.device) – If provided, new layers will be moved to the device. Defaults to None.
 change_subsampling_conv_chunking_factor(subsampling_conv_chunking_factor: int)
Update the conv_chunking_factor (int). Default is 1 (auto). Set it to -1 (disabled) or to a specific value (a power of 2) if you OOM in the conv subsampling layers.
 Parameters
subsampling_conv_chunking_factor (int) –
 property disabled_deployment_input_names
Implement this method to return a set of input names disabled for export
 property disabled_deployment_output_names
Implement this method to return a set of output names disabled for export
 input_example(max_batch=1, max_dim=256)
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Returns definitions of module input ports.
 property input_types_for_export
Returns definitions of module input ports.
 property output_types
Returns definitions of module output ports.
 property output_types_for_export
Returns definitions of module output ports.
 set_max_audio_length(max_audio_length)
Sets maximum input length. Precalculates internal seq_range mask.
 setup_streaming_params(chunk_size: Optional[int] = None, shift_size: Optional[int] = None, left_chunks: Optional[int] = None, att_context_size: Optional[list] = None, max_context: int = 10000)
This function sets the needed values and parameters to perform streaming. The configuration would be stored in self.streaming_cfg. The streaming configuration is needed to simulate streaming inference.
 Parameters
chunk_size (int) – overrides the chunk size
shift_size (int) – overrides the shift size for chunks
left_chunks (int) – overrides the number of left chunks visible to each chunk
max_context (int) – the value used for the cache size of last_channel layers if the left context is set to infinity (-1). Defaults to 10000.
 class nemo.collections.asr.modules.SqueezeformerEncoder(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
,nemo.core.classes.exportable.Exportable
,nemo.core.classes.mixins.access_mixins.AccessMixin
The encoder for ASR model of Squeezeformer. Based on this paper: ‘Squeezeformer: An Efficient Transformer for Automatic Speech Recognition’ by Sehoon Kim et al. https://arxiv.org/abs/2206.00888
 Parameters
feat_in (int) – the size of feature channels
n_layers (int) – number of layers of ConformerBlock
d_model (int) – the hidden size of the model
feat_out (int) – the size of the output features. Defaults to -1 (means feat_out is d_model).
subsampling (str) – the method of subsampling, choices=[‘vggnet’, ‘striding’, ‘dw_striding’] Defaults to dw_striding.
subsampling_factor (int) – the subsampling factor which should be power of 2 Defaults to 4.
subsampling_conv_channels (int) – the size of the convolutions in the subsampling module. Defaults to -1, which sets it to d_model.
ff_expansion_factor (int) – the expansion factor in feed forward layers Defaults to 4.
self_attention_model (str) – type of the attention layer and positional encoding. ‘rel_pos’: relative positional embedding and Transformer-XL; ‘abs_pos’: absolute positional embedding and Transformer. Default is rel_pos.
pos_emb_max_len (int) – the maximum length of positional embeddings. Defaults to 5000.
n_heads (int) – number of heads in multiheaded attention layers Defaults to 4.
xscaling (bool) – enables scaling the inputs to the multiheaded attention layers by sqrt(d_model) Defaults to True.
untie_biases (bool) – whether to not share (untie) the bias weights between layers of TransformerXL Defaults to True.
conv_kernel_size (int) – the size of the convolutions in the convolutional modules Defaults to 31.
conv_norm_type (str) – the type of the normalization in the convolutional modules Defaults to ‘batch_norm’.
dropout (float) – the dropout rate used in all layers except the attention layers Defaults to 0.1.
dropout_emb (float) – the dropout rate used for the positional embeddings Defaults to 0.1.
dropout_att (float) – the dropout rate used for the attention layer Defaults to 0.0.
adaptive_scale (bool) – Whether to scale the inputs to each component by affine scale and bias layer. Or use a fixed scale=1 and bias=0.
time_reduce_idx (int) – Optional integer index of a layer where a time reduction operation will occur. All operations beyond this point will only occur at the reduced resolution.
time_recovery_idx (int) – Optional integer index of a layer where the time recovery operation will occur. All operations beyond this point will occur at the original resolution (resolution after primary downsampling). If no value is provided, assumed to be the last layer.
 input_example(max_batch=1, max_dim=256)
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Returns definitions of module input ports.
 make_pad_mask(max_audio_length, seq_lens)
Make masking for padding.
 property output_types
Returns definitions of module output ports.
 set_max_audio_length(max_audio_length)
Sets maximum input length. Precalculates internal seq_range mask.
 class nemo.collections.asr.modules.RNNEncoder(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
,nemo.core.classes.exportable.Exportable
The RNN-based encoder for ASR models. Follows the architecture suggested in the following paper: ‘Streaming End-to-End Speech Recognition for Mobile Devices’ by Yanzhang He et al. https://arxiv.org/pdf/1811.06621.pdf
 Parameters
feat_in (int) – the size of feature channels
n_layers (int) – number of layers of RNN
d_model (int) – the hidden size of the model
proj_size (int) – the size of the output projection after each RNN layer
rnn_type (str) – the type of the RNN layers, choices=[‘lstm’, ‘gru’, ‘rnn’]
bidirectional (bool) – specifies whether RNN layers should be bidirectional or not. Defaults to True.
feat_out (int) – the size of the output features. Defaults to -1 (means feat_out is d_model).
subsampling (str) – the method of subsampling, choices=[‘stacking’, ‘vggnet’, ‘striding’]. Defaults to stacking.
subsampling_factor (int) – the subsampling factor. Defaults to 4.
subsampling_conv_channels (int) – the size of the convolutions in the subsampling module for vggnet and striding. Defaults to -1, which sets it to d_model.
dropout (float) – the dropout rate used between all layers Defaults to 0.2.
 input_example()
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Returns definitions of module input ports.
 property output_types
Returns definitions of module output ports.
 class nemo.collections.asr.modules.RNNTDecoder(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTDecoder
,nemo.core.classes.exportable.Exportable
,nemo.core.classes.mixins.adapter_mixins.AdapterModuleMixin
A Recurrent Neural Network Transducer Decoder / Prediction Network (RNN-T Prediction Network). An RNN-T Decoder/Prediction network, comprised of a stateful LSTM model.
 Parameters
prednet –
A dict-like object which contains the following key-value pairs:
pred_hidden: int specifying the hidden dimension of the prediction net.
pred_rnn_layers: int specifying the number of rnn layers.
Optionally, it may also contain the following:
forget_gate_bias: float, set by default to 1.0, which constructs a forget gate initialized to 1.0. Reference: [An Empirical Exploration of Recurrent Network Architectures](http://proceedings.mlr.press/v37/jozefowicz15.pdf)
t_max: int value, set to None by default. If an int is specified, performs Chrono Initialization of the LSTM network, based on the maximum number of timesteps t_max expected during the course of training. Reference: [Can recurrent neural networks warp time?](https://openreview.net/forum?id=SJcKhkAb)
weights_init_scale: float scale of the weights after initialization. Setting to lower than one sometimes helps reduce variance between runs.
hidden_hidden_bias_scale: float scale for the hidden-to-hidden bias scale. Set to 0.0 for the default behaviour.
dropout: float, set to 0.0 by default. Optional dropout applied at the end of the final LSTM RNN layer.
vocab_size – int, specifying the vocabulary size of the embedding layer of the Prediction network, excluding the RNNT blank token.
normalization_mode – Can be either None, ‘batch’ or ‘layer’. By default, is set to None. Defines the type of normalization applied to the RNN layer.
random_state_sampling – bool, set to False by default. When set, provides normal-distribution sampled state tensors instead of zero tensors during training. Reference: [Recognizing long-form speech using streaming end-to-end models](https://arxiv.org/abs/1910.11455)
blank_as_pad –
bool, set to True by default. When set, will add a token to the Embedding layer of this prediction network, and will treat this token as a pad token. In essence, the RNNT pad token will be treated as a pad token, and the embedding layer will return a zero tensor for this token.
It is set by default as it enables various batch optimizations required for batched beam search. Therefore, it is not recommended to disable this flag.
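A prednet config with the documented keys might look like the following. The values are illustrative, not recommended settings, and `validate_prednet` is a hypothetical helper, not part of the NeMo API:

```python
# Sketch of a prednet config with the keys documented above.
prednet = {
    "pred_hidden": 640,        # hidden dimension of the prediction net (mandatory)
    "pred_rnn_layers": 1,      # number of rnn layers (mandatory)
    # optional keys, shown with their documented defaults:
    "forget_gate_bias": 1.0,
    "t_max": None,
    "weights_init_scale": 1.0,
    "hidden_hidden_bias_scale": 0.0,
    "dropout": 0.0,
}

def validate_prednet(cfg):
    """Check that the mandatory prednet keys are present."""
    missing = {"pred_hidden", "pred_rnn_layers"} - set(cfg)
    if missing:
        raise KeyError(f"prednet config missing keys: {sorted(missing)}")
    return True
```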
 add_adapter(name: str, cfg: omegaconf.DictConfig)
Add an Adapter module to this module.
 Parameters
name – A globally unique name for the adapter. Will be used to access, enable and disable adapters.
cfg – A DictConfig or Dataclass that contains at the bare minimum __target__ to instantiate a new Adapter module.
 batch_concat_states(batch_states: List[List[torch.Tensor]]) → List[torch.Tensor]
Concatenate a batch of decoder states to a packed state.
 Parameters
batch_states (list) – batch of decoder states B x ([L x (H)], [L x (H)])
 Returns
decoder states (L x B x H, L x B x H)
 Return type
(tuple)
 batch_copy_states(old_states: List[torch.Tensor], new_states: List[torch.Tensor], ids: List[int], value: Optional[float] = None) → List[torch.Tensor]
Copy states from new state to old state at certain indices.
 Parameters
old_states (list) – packed decoder states (L x B x H, L x B x H)
new_states – packed decoder states (L x B x H, L x B x H)
ids (list) – List of indices to copy states at.
value (optional float) – If a value should be copied instead of a state slice, a float should be provided
 Returns
 batch of decoder states with partial copy at ids (or a specific value).
(L x B x H, L x B x H)
 batch_initialize_states(batch_states: List[torch.Tensor], decoder_states: List[List[torch.Tensor]])
Create batch of decoder states.
 Parameters
batch_states (list) – batch of decoder states ([L x (B, H)], [L x (B, H)])
decoder_states (list of list) – list of decoder states [B x ([L x (1, H)], [L x (1, H)])]
 Returns
 batch of decoder states
([L x (B, H)], [L x (B, H)])
 Return type
batch_states (tuple)
 classmethod batch_replace_states_all(src_states: Tuple[torch.Tensor, torch.Tensor], dst_states: Tuple[torch.Tensor, torch.Tensor])
Replace states in dst_states with states from src_states
 classmethod batch_replace_states_mask(src_states: Tuple[torch.Tensor, torch.Tensor], dst_states: Tuple[torch.Tensor, torch.Tensor], mask: torch.Tensor)
Replace states in dst_states with states from src_states using the mask
 batch_score_hypothesis(hypotheses: List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis], cache: Dict[Tuple[int], Any], batch_states: List[torch.Tensor]) → Tuple[torch.Tensor, List[torch.Tensor], torch.Tensor]
Used for batched beam search algorithms. Similar to score_hypothesis method.
 Parameters
hypothesis – List of Hypotheses. Refer to rnnt_utils.Hypothesis.
cache – Dict which contains a cache to avoid duplicate computations.
batch_states – List of torch.Tensor which represent the states of the RNN for this batch. Each state is of shape [L, B, H]
 Returns
A tuple (b_y, b_states, lm_tokens) such that: b_y is a torch.Tensor of shape [B, 1, H] representing the scores of the last tokens in the Hypotheses; b_state is a list of lists of RNN states, each of shape [L, B, H], represented as B x List[states]; lm_token is a list of the final integer tokens of the hypotheses in the batch.
 batch_select_state(batch_states: List[torch.Tensor], idx: int) → List[List[torch.Tensor]]
Get decoder state from batch of states, for given id.
 Parameters
batch_states (list) – batch of decoder states ([L x (B, H)], [L x (B, H)])
idx (int) – index to extract state from batch of states
 Returns
 decoder states for given id
([L x (1, H)], [L x (1, H)])
 Return type
(tuple)
 batch_split_states(batch_states: Tuple[torch.Tensor, torch.Tensor]) → list[Tuple[torch.Tensor, torch.Tensor]]
Split states into a list of states. Useful for splitting the final state for converting results of the decoding algorithm to Hypothesis class.
 initialize_state(y: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor]
Initialize the state of the LSTM layers, with same dtype and device as input y. LSTM accepts a tuple of 2 tensors as a state.
 Parameters
y – A torch.Tensor whose device the generated states will be placed on.
 Returns
Tuple of 2 tensors, each of shape [L, B, H], where L = number of RNN layers, B = batch size, H = hidden size of the RNN.
 input_example(max_batch=1, max_dim=1)
Generates input examples for tracing etc. :returns: A tuple of input examples.
 property input_types
Returns definitions of module input ports.
 mask_select_states(states: Tuple[torch.Tensor, torch.Tensor], mask: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor]
Return states by mask selection :param states: states for the batch :param mask: boolean mask for selecting states; batch dimension should be the same as for states
 Returns
states filtered by mask
 property output_types
Returns definitions of module output ports.
 predict(y: Optional[torch.Tensor] = None, state: Optional[List[torch.Tensor]] = None, add_sos: bool = True, batch_size: Optional[int] = None) → Tuple[torch.Tensor, List[torch.Tensor]]
Stateful prediction of scores and state for a (possibly null) token set. This method takes various cases into consideration:
No token, no state – used for priming the RNN
No token, state provided – used for blank token scoring
Given token, states – used for scores + new states
Here: B = batch size, U = label length, H = hidden dimension size of the RNN, L = number of RNN layers
 Parameters
y – Optional torch tensor of shape [B, U] of dtype long which will be passed to the Embedding. If None, creates a zero tensor of shape [B, 1, H] which mimics the output of a pad-token on the Embedding.
state – An optional list of states for the RNN. E.g., for an LSTM, the state list length is 2. Each state must be a tensor of shape [L, B, H]. If None, and during training mode, and random_state_sampling is set, will sample a normal-distribution tensor of the above shape. Otherwise, None will be passed to the RNN.
add_sos – bool flag, whether a zero vector describing a “start of signal” token should be prepended to the above “y” tensor. When set, the output size is (B, U + 1, H).
batch_size – An optional int, specifying the batch size of the y tensor. Can be inferred from y or state if either is provided. But if both are None, then batch_size cannot be None.
 Returns
A tuple (g, hid) such that:
If add_sos is False:
g: (B, U, H)
hid: (h, c) where h is the final sequence hidden state and c is the final cell state: h (tensor), shape (L, B, H); c (tensor), shape (L, B, H)
If add_sos is True:
g: (B, U + 1, H)
hid: (h, c) where h is the final sequence hidden state and c is the final cell state: h (tensor), shape (L, B, H); c (tensor), shape (L, B, H)
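The shape rule for g can be summarized with a small helper. `predict_output_shape` is an illustrative function that only restates the documented shapes:

```python
def predict_output_shape(batch_size, label_len, hidden, add_sos=True):
    """Shape of g returned by predict(): (B, U + 1, H) when add_sos is set,
    else (B, U, H). The hid part is always a pair of (L, B, H) tensors."""
    u = label_len + 1 if add_sos else label_len
    return (batch_size, u, hidden)
```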
 score_hypothesis(hypothesis: nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis, cache: Dict[Tuple[int], Any]) → Tuple[torch.Tensor, List[torch.Tensor], torch.Tensor]
Similar to the predict() method, instead this method scores a Hypothesis during beam search. Hypothesis is a dataclass representing one hypothesis in a Beam Search.
 Parameters
hypothesis – Refer to rnnt_utils.Hypothesis.
cache – Dict which contains a cache to avoid duplicate computations.
 Returns
A tuple (y, states, lm_token) such that: y is a torch.Tensor of shape [1, 1, H] representing the score of the last token in the Hypothesis; state is a list of RNN states, each of shape [L, 1, H]; lm_token is the final integer token of the hypothesis.
 class nemo.collections.asr.modules.StatelessTransducerDecoder(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTDecoder
,nemo.core.classes.exportable.Exportable
A Stateless Neural Network Transducer Decoder / Prediction Network. An RNN-T Decoder/Prediction stateless network that simply takes the concatenation of the embeddings of the history tokens as its output.
 Parameters
prednet –
A dict-like object which contains the following key-value pairs: pred_hidden: int specifying the hidden dimension of the prediction net.
dropout: float, set to 0.0 by default. Optional dropout applied at the output of the prediction net.
vocab_size – int, specifying the vocabulary size of the embedding layer of the Prediction network, excluding the RNNT blank token.
context_size – int, specifying the size of the history context used for this decoder.
normalization_mode – Can be either None, ‘layer’. By default, is set to None. Defines the type of normalization applied to the RNN layer.
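The stateless prediction idea (output is the concatenation of the embeddings of the last context_size history tokens) can be sketched with toy embeddings. `stateless_context` and the zero-padding of short histories are illustrative, not NeMo's implementation:

```python
def stateless_context(history, context_size, embed):
    """Embed and concatenate the last `context_size` tokens of `history`.
    `embed` maps a token id to a list of floats; histories shorter than
    the context are left-padded with a zero embedding."""
    dim = len(embed(0))                     # embedding dimension
    window = history[-context_size:]        # last context_size tokens
    pad = [[0.0] * dim] * (context_size - len(window))
    vecs = pad + [embed(t) for t in window]
    out = []
    for v in vecs:
        out.extend(v)                       # concatenation of embeddings
    return out
```

With context_size C and embedding dimension dim, the output length is always C * dim, matching the fixed total embedding size D used by the decoder.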
 batch_concat_states(batch_states: List[List[torch.Tensor]]) → List[torch.Tensor]
Concatenate a batch of decoder state to a packed state.
 Parameters
batch_states (list) – batch of decoder states B x ([(C)])
 Returns
decoder states [(B x C)]
 Return type
(tuple)
 batch_copy_states(old_states: List[torch.Tensor], new_states: List[torch.Tensor], ids: List[int], value: Optional[float] = None) → List[torch.Tensor]
Copy states from new state to old state at certain indices.
 Parameters
old_states – packed decoder states single element list of (B x C)
new_states – packed decoder states single element list of (B x C)
ids (list) – List of indices to copy states at.
value (optional float) – If a value should be copied instead of a state slice, a float should be provided
 Returns
batch of decoder states with partial copy at ids (or a specific value). (B x C)
 batch_initialize_states(batch_states: List[torch.Tensor], decoder_states: List[List[torch.Tensor]])
Create batch of decoder states.
 Parameters
batch_states (list) – batch of decoder states ([(B, H)])
decoder_states (list of list) – list of decoder states [B x ([(1, C)]]
 Returns
 batch of decoder states
([(B, C)])
 Return type
batch_states (tuple)
 classmethod batch_replace_states_all(src_states: list[torch.Tensor], dst_states: list[torch.Tensor])
Replace states in dst_states with states from src_states
 classmethod batch_replace_states_mask(src_states: list[torch.Tensor], dst_states: list[torch.Tensor], mask: torch.Tensor)
Replace states in dst_states with states from src_states using the mask
 batch_score_hypothesis(hypotheses: List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis], cache: Dict[Tuple[int], Any], batch_states: List[torch.Tensor]) → Tuple[torch.Tensor, List[torch.Tensor], torch.Tensor]
Used for batched beam search algorithms. Similar to score_hypothesis method.
 Parameters
hypothesis – List of Hypotheses. Refer to rnnt_utils.Hypothesis.
cache – Dict which contains a cache to avoid duplicate computations.
batch_states – List of torch.Tensor which represent the states of the RNN for this batch. Each state is of shape [L, B, H]
 Returns
A tuple (b_y, b_states, lm_tokens) such that: b_y is a torch.Tensor of shape [B, 1, H] representing the scores of the last tokens in the Hypotheses; b_state is a list of lists of RNN states, each of shape [L, B, H], represented as B x List[states]; lm_token is a list of the final integer tokens of the hypotheses in the batch.
 batch_select_state(batch_states: List[torch.Tensor], idx: int) → List[List[torch.Tensor]]
Get decoder state from batch of states, for given id.
 Parameters
batch_states (list) – batch of decoder states [(B, C)]
idx (int) – index to extract state from batch of states
 Returns
 decoder states for the given id
[(C)]
 Return type
(list)
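Selecting one element's state out of the batched layout can be sketched in pure Python (lists stand in for tensors; an illustration, not the NeMo implementation):

```python
def select_state(batch_states, idx):
    """Extract the decoder state of batch element idx from batched
    per-layer states [(B, C)], yielding per-layer states [(C)]."""
    return [layer_state[idx] for layer_state in batch_states]


batch = [[[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]]   # one layer, B=3, C=2
state = select_state(batch, 1)
# state == [[0.3, 0.4]]
```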
 batch_split_states(batch_states: list[torch.Tensor]) → list[list[torch.Tensor]]
Split states into a list of states. Useful for splitting the final state for converting results of the decoding algorithm to Hypothesis class.
 initialize_state(y: torch.Tensor) → List[torch.Tensor]
Initialize the state of the RNN layers, with same dtype and device as input y.
 Parameters
y – A torch.Tensor; the generated states will share its dtype and device.
 Returns
List of torch.Tensor, each of shape [L, B, H], where L = number of RNN layers, B = batch size, and H = hidden size of the RNN.
 input_example(max_batch=1, max_dim=1)
Generates input examples for tracing etc.
 Returns
A tuple of input examples.
 property input_types
Returns definitions of module input ports.
 mask_select_states(states: Optional[List[torch.Tensor]], mask: torch.Tensor) → Optional[List[torch.Tensor]]
Return states by mask selection.
 Parameters
states – states for the batch
mask – boolean mask for selecting states; the batch dimension should be the same as for states
 Returns
states filtered by mask
 property output_types
Returns definitions of module output ports.
 predict(y: Optional[torch.Tensor] = None, state: Optional[torch.Tensor] = None, add_sos: bool = True, batch_size: Optional[int] = None) → Tuple[torch.Tensor, List[torch.Tensor]]
Stateful prediction of scores and state for a token set.
Here: B = batch size, U = label length, C = context size for the stateless decoder, D = total embedding size.
 Parameters
y – Optional torch tensor of shape [B, U] of dtype long which will be passed to the Embedding. If None, creates a zero tensor of shape [B, 1, D] which mimics the output of a pad-token on the Embedding.
state – An optional one-element list of one tensor. The tensor is used to store previous context labels. The tensor uses type long and is of shape [B, C].
add_sos – bool flag, whether a zero vector describing a “start of signal” token should be prepended to the above “y” tensor. When set, output size is (B, U + 1, D).
batch_size – An optional int specifying the batch size of the y tensor. It can be inferred if y or state is provided, but if both are None, batch_size cannot be None.
 Returns
A tuple (g, state) such that:
If add_sos is False:
g: (B, U, D)
state: [(B, C)] storing the history context including the new words in y.
If add_sos is True:
g: (B, U + 1, D)
state: [(B, C)] storing the history context including the new words in y.
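For the stateless decoder, the [(B, C)] state is simply a sliding history of the last C labels per batch element. A hedged pure-Python sketch of that bookkeeping (an illustration of the documented state shape, not the NeMo implementation):

```python
def update_context(context, new_labels, context_size):
    """Append newly emitted labels to each batch element's history and
    keep only the last context_size labels, mirroring the [(B, C)]
    state description above. Lists stand in for torch tensors."""
    return [
        (ctx + labels)[-context_size:]
        for ctx, labels in zip(context, new_labels)
    ]


ctx = [[1, 2], [3, 4]]                          # B=2, C=2
new_ctx = update_context(ctx, [[5], [6]], context_size=2)
# new_ctx == [[2, 5], [4, 6]]
```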
 score_hypothesis(hypothesis: nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis, cache: Dict[Tuple[int], Any]) → Tuple[torch.Tensor, List[torch.Tensor], torch.Tensor]
Similar to the predict() method, instead this method scores a Hypothesis during beam search. Hypothesis is a dataclass representing one hypothesis in a Beam Search.
 Parameters
hypothesis – Refer to rnnt_utils.Hypothesis.
cache – Dict which contains a cache to avoid duplicate computations.
 Returns
A tuple (y, states, lm_token) such that:
y is a torch.Tensor of shape [1, 1, H] representing the score of the last token in the Hypothesis.
states is a list of RNN states, each of shape [L, 1, H].
lm_token is the final integer token of the hypothesis.
 class nemo.collections.asr.modules.RNNTJoint(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTJoint
,nemo.core.classes.exportable.Exportable
,nemo.core.classes.mixins.adapter_mixins.AdapterModuleMixin
A Recurrent Neural Network Transducer Joint Network (RNNT Joint Network), comprising a feed-forward model.
 Parameters
jointnet –
A dict-like object which contains the following key-value pairs: encoder_hidden: int specifying the hidden dimension of the encoder net. pred_hidden: int specifying the hidden dimension of the prediction net. joint_hidden: int specifying the hidden dimension of the joint net. activation: Activation function used in the joint step. Can be one of [‘relu’, ‘tanh’, ‘sigmoid’].
Optionally, it may also contain the following: dropout: float, set to 0.0 by default. Optional dropout applied at the end of the joint net.
num_classes – int, specifying the vocabulary size that the joint network must predict, excluding the RNNT blank token.
vocabulary – Optional list of strings/tokens that comprise the vocabulary of the joint network. Unused and kept only for easy access for character based encoding RNNT models.
log_softmax – Optional bool, set to None by default. If set as None, will compute the log_softmax() based on the value provided.
preserve_memory –
Optional bool, set to False by default. If the model crashes due to the memory-intensive joint step, one might try this flag to empty the tensor cache in PyTorch.
Warning: This will make the forward-backward pass much slower than normal. It also might not fix the OOM if the GPU simply does not have enough memory to compute the joint.
fuse_loss_wer –
Optional bool, set to False by default.
Fuses the joint forward, loss forward and wer forward steps. In doing so, it trades off speed for memory conservation by creating sub-batches of the provided batch of inputs, and performs Joint forward, loss forward and wer forward (optional), all on sub-batches, then collates results to be exactly equal to results from the entire batch.
When this flag is set, prior to calling forward, the fields loss and wer (either one) must be set using the RNNTJoint.set_loss() or RNNTJoint.set_wer() methods.
Further, when this flag is set, the following argument fused_batch_size must be provided as a non-negative integer. This value refers to the size of the sub-batch.
When the flag is set, the input and output signature of this module's forward() changes. Input: in addition to encoder_outputs (a mandatory argument), the following arguments can be provided.
decoder_outputs (optional). Required if loss computation is required.
encoder_lengths (required)
transcripts (optional). Required for wer calculation.
transcript_lengths (optional). Required for wer calculation.
compute_wer (bool, default false). Whether to compute WER or not for the fused batch.
Output – instead of the usual joint log prob tensor, the following results can be returned:
loss (optional). Returned if decoder_outputs, transcripts and transcript_lengths are not None.
wer_numerator + wer_denominator (optional). Returned if transcripts and transcript_lengths are provided and compute_wer is set.
fused_batch_size – Optional int, required if the fuse_loss_wer flag is set. Determines the size of the sub-batches. Should be any value below the actual batch size per GPU.
 add_adapter(name: str, cfg: omegaconf.DictConfig)
Add an Adapter module to this module.
 Parameters
name – A globally unique name for the adapter. Will be used to access, enable and disable adapters.
cfg – A DictConfig or Dataclass that contains at the bare minimum __target__ to instantiate a new Adapter module.
 property disabled_deployment_input_names
Implement this method to return a set of input names disabled for export
 input_example(max_batch=1, max_dim=8192)
Generates input examples for tracing etc.
 Returns
A tuple of input examples.
 property input_types
Returns definitions of module input ports.
 joint_after_projection(f: torch.Tensor, g: torch.Tensor) → torch.Tensor
Compute the joint step of the network after projection.
Here, B = batch size, T = acoustic model timesteps, U = target sequence length, H1/H2 = hidden dimensions of the Encoder/Decoder respectively, H = hidden dimension of the joint hidden step, and V = vocabulary size of the Decoder (excluding the RNNT blank token).
Note: The implementation of this model is slightly modified from the original paper. The original paper proposes the following steps:
(enc, dec) → Expand + Concat + Sum [B, T, U, H1+H2] → Forward through joint hidden [B, T, U, H] – *1
*1 → Forward through joint final [B, T, U, V + 1].
We instead split the joint hidden into joint_hidden_enc and joint_hidden_dec and act as follows:
enc → Forward through joint_hidden_enc → Expand [B, T, 1, H] – *1
dec → Forward through joint_hidden_dec → Expand [B, 1, U, H] – *2
(*1, *2) → Sum [B, T, U, H] → Forward through joint final [B, T, U, V + 1].
 Parameters
f – Output of the Encoder model. A torch.Tensor of shape [B, T, H1]
g – Output of the Decoder model. A torch.Tensor of shape [B, U, H2]
 Returns
Logits / log softmaxed tensor of shape (B, T, U, V + 1).
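The expand-and-sum step above can be sketched without any framework; nested lists stand in for tensors of shape [B, T, H] and [B, U, H] (an illustration of the broadcast addition, not the NeMo code, which broadcasts torch.Tensor directly):

```python
def joint_broadcast_sum(enc, dec):
    """Combine projected encoder output [B, T, H] and decoder output
    [B, U, H] into a joint tensor [B, T, U, H] by broadcast addition,
    i.e. out[b][t][u][h] = enc[b][t][h] + dec[b][u][h]."""
    return [
        [[[e + d for e, d in zip(enc_t, dec_u)] for dec_u in dec_b]
         for enc_t in enc_b]
        for enc_b, dec_b in zip(enc, dec)
    ]


enc = [[[1.0], [2.0]]]        # B=1, T=2, H=1
dec = [[[10.0], [20.0]]]      # B=1, U=2, H=1
out = joint_broadcast_sum(enc, dec)
# out == [[[[11.0], [21.0]], [[12.0], [22.0]]]]  (shape [1, 2, 2, 1])
```

The joint final layer would then map the last dimension H to V + 1.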
 property output_types
Returns definitions of module output ports.
 project_encoder(encoder_output: torch.Tensor) → torch.Tensor
Project the encoder output to the joint hidden dimension.
 Parameters
encoder_output – A torch.Tensor of shape [B, T, D]
 Returns
A torch.Tensor of shape [B, T, H]
 project_prednet(prednet_output: torch.Tensor) → torch.Tensor
Project the Prediction Network (Decoder) output to the joint hidden dimension.
 Parameters
prednet_output – A torch.Tensor of shape [B, U, D]
 Returns
A torch.Tensor of shape [B, U, H]
 class nemo.collections.asr.modules.SampledRNNTJoint(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.modules.rnnt.RNNTJoint
A Sampled Recurrent Neural Network Transducer Joint Network (RNNT Joint Network), comprising a feed-forward model, where the vocabulary is sampled instead of computing the full-vocabulary joint.
 Parameters
jointnet –
A dict-like object which contains the following key-value pairs: encoder_hidden: int specifying the hidden dimension of the encoder net. pred_hidden: int specifying the hidden dimension of the prediction net. joint_hidden: int specifying the hidden dimension of the joint net. activation: Activation function used in the joint step. Can be one of [‘relu’, ‘tanh’, ‘sigmoid’].
Optionally, it may also contain the following: dropout: float, set to 0.0 by default. Optional dropout applied at the end of the joint net.
num_classes – int, specifying the vocabulary size that the joint network must predict, excluding the RNNT blank token.
n_samples – int, specifies the number of tokens to sample from the vocabulary space, excluding the RNNT blank token. If a given value is larger than the entire vocabulary size, then the full vocabulary will be used.
vocabulary – Optional list of strings/tokens that comprise the vocabulary of the joint network. Unused and kept only for easy access for character based encoding RNNT models.
log_softmax – Optional bool, set to None by default. If set as None, will compute the log_softmax() based on the value provided.
preserve_memory –
Optional bool, set to False by default. If the model crashes due to the memory-intensive joint step, one might try this flag to empty the tensor cache in PyTorch.
Warning: This will make the forward-backward pass much slower than normal. It also might not fix the OOM if the GPU simply does not have enough memory to compute the joint.
fuse_loss_wer –
Optional bool, set to False by default.
Fuses the joint forward, loss forward and wer forward steps. In doing so, it trades off speed for memory conservation by creating sub-batches of the provided batch of inputs, and performs Joint forward, loss forward and wer forward (optional), all on sub-batches, then collates results to be exactly equal to results from the entire batch.
When this flag is set, prior to calling forward, the fields loss and wer (either one) must be set using the RNNTJoint.set_loss() or RNNTJoint.set_wer() methods.
Further, when this flag is set, the following argument fused_batch_size must be provided as a non-negative integer. This value refers to the size of the sub-batch.
When the flag is set, the input and output signature of this module's forward() changes. Input: in addition to encoder_outputs (a mandatory argument), the following arguments can be provided.
decoder_outputs (optional). Required if loss computation is required.
encoder_lengths (required)
transcripts (optional). Required for wer calculation.
transcript_lengths (optional). Required for wer calculation.
compute_wer (bool, default false). Whether to compute WER or not for the fused batch.
Output – instead of the usual joint log prob tensor, the following results can be returned:
loss (optional). Returned if decoder_outputs, transcripts and transcript_lengths are not None.
wer_numerator + wer_denominator (optional). Returned if transcripts and transcript_lengths are provided and compute_wer is set.
fused_batch_size – Optional int, required if the fuse_loss_wer flag is set. Determines the size of the sub-batches. Should be any value below the actual batch size per GPU.
 sampled_joint(f: torch.Tensor, g: torch.Tensor, transcript: torch.Tensor, transcript_lengths: torch.Tensor) → torch.Tensor
Compute the sampled joint step of the network.
Reference: MemoryEfficient Training of RNNTransducer with Sampled Softmax.
Here, B = batch size, T = acoustic model timesteps, U = target sequence length, H1/H2 = hidden dimensions of the Encoder/Decoder respectively, H = hidden dimension of the joint hidden step, V = vocabulary size of the Decoder (excluding the RNNT blank token), and S = sample size of the vocabulary.
Note: The implementation of this joint model is slightly modified from the original paper. The original paper proposes the following steps:
(enc, dec) → Expand + Concat + Sum [B, T, U, H1+H2] → Forward through joint hidden [B, T, U, H] – *1
*1 → Forward through joint final [B, T, U, V + 1].
We instead split the joint hidden into joint_hidden_enc and joint_hidden_dec and act as follows:
enc → Forward through joint_hidden_enc → Expand [B, T, 1, H] – *1
dec → Forward through joint_hidden_dec → Expand [B, 1, U, H] – *2
(*1, *2) → Sum [B, T, U, H]
→ Sample vocab V_Pos (for target tokens) and V_Neg (V_Neg is sampled not uniformly, but as a random permutation of all vocab tokens; all tokens in Intersection(V_Pos, V_Neg) are then eliminated to avoid duplication of loss)
→ Concat new vocab V_Sampled = Union(V_Pos, V_Neg)
→ Forward partially through the joint final to create [B, T, U, V_Sampled]
 Parameters
f – Output of the Encoder model. A torch.Tensor of shape [B, T, H1]
g – Output of the Decoder model. A torch.Tensor of shape [B, U, H2]
transcript – Batch of transcripts. A torch.Tensor of shape [B, U]
transcript_lengths – Batch of lengths of the transcripts. A torch.Tensor of shape [B]
 Returns
Logits / log softmaxed tensor of shape (B, T, U, V + 1).
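The Union(V_Pos, V_Neg) construction described above can be sketched in plain Python (an illustration only; the function name and shape are hypothetical, not NeMo API):

```python
import random


def sample_vocab(target_tokens, vocab_size, n_samples, seed=0):
    """Keep all target ('positive') tokens, then draw negatives from a
    random permutation of the vocabulary, dropping any negative that
    duplicates a positive, and truncate to n_samples total tokens."""
    positives = sorted(set(target_tokens))
    pos_set = set(positives)
    perm = random.Random(seed).sample(range(vocab_size), vocab_size)
    negatives = [t for t in perm if t not in pos_set]
    return positives + negatives[: max(0, n_samples - len(positives))]


sampled = sample_vocab([3, 7, 3], vocab_size=20, n_samples=5)
# The first entries are the positives [3, 7]; the rest are
# distinct negatives, for 5 unique tokens in total.
```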
 class nemo.collections.asr.parts.submodules.jasper.JasperBlock(*args: Any, **kwargs: Any)
Bases:
torch.nn.Module
,nemo.core.classes.mixins.adapter_mixins.AdapterModuleMixin
,nemo.core.classes.mixins.access_mixins.AccessMixin
Constructs a single “Jasper” block. With modified parameters, also constructs other blocks for models such as QuartzNet and Citrinet.
For Jasper : separable flag should be False
For QuartzNet : separable flag should be True
For Citrinet : separable flag and se flag should be True
Note that above are general distinctions, each model has intricate differences that expand over multiple such blocks.
For further information about the differences between models which use JasperBlock, please review the configs for ASR models found in the ASR examples directory.
 Parameters
inplanes – Number of input channels.
planes – Number of output channels.
repeat – Number of repeated subblocks (R) for this block.
kernel_size – Convolution kernel size across all repeated subblocks.
kernel_size_factor – Floating point scale value that is multiplied with kernel size, then rounded down to nearest odd integer to compose the kernel size. Defaults to 1.0.
stride – Stride of the convolutional layers.
dilation – Integer which defines the dilation factor of the kernel. Note that when dilation > 1, stride must be equal to 1.
padding – String representing type of padding. Currently only supports “same” padding, which symmetrically pads the input tensor with zeros.
dropout – Floating point value, determines the percentage of output that is zeroed out.
activation – String representing activation functions. Valid activation functions are : {“hardtanh”: nn.Hardtanh, “relu”: nn.ReLU, “selu”: nn.SELU, “swish”: Swish}. Defaults to “relu”.
residual – Bool that determines whether a residual branch should be added or not. All residual branches are constructed using a pointwise convolution kernel, that may or may not perform strided convolution depending on the parameter residual_mode.
groups – Number of groups for Grouped Convolutions. Defaults to 1.
separable – Bool flag that describes whether TimeChannel depthwise separable convolution should be constructed, or ordinary convolution should be constructed.
heads – Number of “heads” for the masked convolution. Defaults to -1, which disables it.
normalization – String that represents type of normalization performed. Can be one of “batch”, “group”, “instance” or “layer” to compute BatchNorm1D, GroupNorm1D, InstanceNorm or LayerNorm (which are special cases of GroupNorm1D).
norm_groups – Number of groups used for GroupNorm (if normalization == “group”).
residual_mode – String argument which describes whether the residual branch should be simply added (“add”) or should first stride, then add (“stride_add”). Required when performing stride on parallel branch as well as utilizing residual add.
residual_panes – Number of residual panes, used for JasperDR models. Please refer to the paper.
conv_mask – Bool flag which determines whether to utilize masked convolutions or not. In general, it should be set to True.
se – Bool flag that determines whether SqueezeandExcitation layer should be used.
se_reduction_ratio – Integer value, which determines to what extent the hidden dimension of the SE intermediate step should be reduced. Larger values reduce the number of parameters, but also limit the effectiveness of SE layers.
se_context_window – Integer value determining the number of timesteps that should be utilized in order to compute the averaged context window. Defaults to -1, which means it uses global context, such that all timesteps are averaged. If any positive integer is used, it will utilize a limited context window of that size.
se_interpolation_mode – String used for the interpolation mode of the timestep dimension for SE blocks. Used only if the context window is > 1. The modes available for resizing are: nearest, linear (3D-only), bilinear, area.
stride_last – Bool flag that determines whether all repeated blocks should stride at once, (stride of S^R when this flag is False) or just the last repeated block should stride (stride of S when this flag is True).
future_context –
Int value that determines how many “right” / “future” context frames will be utilized when calculating the output of the conv kernel. All calculations are done for odd kernel sizes only.
By default, this is -1, which is recomputed as the symmetric padding case.
When future_context >= 0, will compute the asymmetric padding as follows: (left context, right context) = [K - 1 - future_context, future_context]
Determining an exact formula to limit future context is dependent on global layout of the model. As such, we provide both “local” and “global” guidelines below.
Local context limit (should always be enforced): future context should be <= half the kernel size for any given layer; future context > kernel size defaults to a symmetric kernel; future context of a layer = number of future frames * width of each frame (dependent on stride).
Global context limit (should be carefully considered): future context should be laid out in an ever-reducing pattern. Initial layers should restrict future context less than later layers, since shallow depth (and reduced stride) means each frame uses smaller amounts of future context. Beyond a certain point, future context should remain static for a given stride level; this is the upper bound of the amount of future context that can be provided to the model on a global scale. Future context is calculated (roughly) as (2 ^ stride) * (K // 2) number of future frames. This resultant value should be bound to some global maximum number of future seconds of audio (in ms).
Note: In the special case where K < future_context, it is assumed that the kernel is too small to limit its future context, so symmetric padding is used instead.
Note: There is no explicit limitation on the amount of future context used, as long as K > future_context constraint is maintained. This might lead to cases where future_context is more than half the actual kernel size K! In such cases, the conv layer is utilizing more of the future context than its current and past context to compute the output. While this is possible to do, it is not recommended and the layer will raise a warning to notify the user of such cases. It is advised to simply use symmetric padding for such cases.
Example: Say we have a model that performs 8x stride and receives spectrogram frames with stride of 0.01s. Say we wish to upper bound future context to 80 ms.
Layer ID, Kernel Size, Stride, Future Context, Global Context:
0, K=5, S=1, FC=8, GC = 2 * (2^0) = 2 * 0.01 s (special case: K < FC, so symmetric pad is used)
1, K=7, S=1, FC=3, GC = 3 * (2^0) = 3 * 0.01 s (note that symmetric pad here uses 3 FC frames!)
2, K=11, S=2, FC=4, GC = 4 * (2^1) = 8 * 0.01 s (note that symmetric pad here uses 5 FC frames!)
3, K=15, S=1, FC=4, GC = 4 * (2^1) = 8 * 0.01 s (note that symmetric pad here uses 7 FC frames!)
4, K=21, S=2, FC=2, GC = 2 * (2^2) = 8 * 0.01 s (note that symmetric pad here uses 10 FC frames!)
5, K=25, S=2, FC=1, GC = 1 * (2^3) = 8 * 0.01 s (note that symmetric pad here uses 12 FC frames!)
6, K=29, S=1, FC=1, GC = 1 * (2^3) = 8 * 0.01 s
…
quantize – Bool flag whether to quantize the Convolutional blocks.
layer_idx (int, optional) – can be specified to allow layer output capture for InterCTC loss. Defaults to -1.
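The documented padding rule, (left, right) = [K - 1 - future_context, future_context] with a symmetric fallback, can be written out as a small helper. This is a sketch of the formula as stated above, not the NeMo implementation:

```python
def conv_padding(kernel_size, future_context):
    """(left, right) padding for an odd-sized kernel K. Asymmetric
    padding is (K - 1 - future_context, future_context); when
    future_context is negative or too large for the kernel, fall back
    to symmetric padding of (K - 1) // 2 on both sides."""
    symmetric = (kernel_size - 1) // 2
    if future_context < 0 or future_context > kernel_size - 1:
        return symmetric, symmetric
    return kernel_size - 1 - future_context, future_context


assert conv_padding(11, 4) == (6, 4)     # limited future context
assert conv_padding(7, 3) == (3, 3)      # coincides with symmetric pad
assert conv_padding(5, 8) == (2, 2)      # K < FC: symmetric fallback
assert conv_padding(29, -1) == (14, 14)  # default FC = -1: symmetric
```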
 forward(input_: Tuple[List[torch.Tensor], Optional[torch.Tensor]]) → Tuple[List[torch.Tensor], Optional[torch.Tensor]]
Forward pass of the module.
 Parameters
input – The input is a tuple of two values: the preprocessed audio signal and the lengths of the audio signal. The audio signal is padded to the shape [B, D, T] and the lengths are a torch vector of length B.
 Returns
The output of the block after processing the input through repeat number of sub-blocks, as well as the lengths of the encoded audio after padding/striding.
 class nemo.collections.asr.parts.mixins.mixins.ASRBPEMixin
Bases:
abc.ABC
ASR BPE Mixin class that sets up a Tokenizer via a config
This mixin class adds the method _setup_tokenizer(…), which can be used by ASR models which depend on subword tokenization.
 The _setup_tokenizer method adds the following parameters to the class:
tokenizer_cfg: The resolved config supplied to the tokenizer (with dir and type arguments).
tokenizer_dir: The directory path to the tokenizer vocabulary + additional metadata.
tokenizer_type: The type of the tokenizer. Currently supports bpe and wpe, as well as agg.
vocab_path: Resolved path to the vocabulary text file.
In addition to these variables, the method will also instantiate and preserve a tokenizer (subclass of TokenizerSpec) if successful, and assign it to self.tokenizer.
The mixin also supports aggregate tokenizers, which consist of ordinary, monolingual tokenizers. If a conversion between a monolingual and an aggregate tokenizer (or vice versa) is detected, all registered artifacts will be cleaned up.
 save_tokenizers(directory: str)
Save the model tokenizer(s) to the specified directory.
 Parameters
directory – The directory to save the tokenizer(s) to.
 class nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin
Bases:
nemo.collections.asr.parts.mixins.asr_adapter_mixins.ASRAdapterModelMixin
ASRModuleMixin is a mixin class added to ASR models in order to add methods that are specific to a particular instantiation of a module inside of an ASRModel.
Each method should first check that the module is present within the subclass, and support additional functionality if the corresponding module is present.
 change_attention_model(self_attention_model: Optional[str] = None, att_context_size: Optional[List[int]] = None, update_config: bool = True)
Update the self_attention_model if the function is available in the encoder.
 Parameters
self_attention_model (str) –
type of the attention layer and positional encoding:
’rel_pos’: relative positional embedding and Transformer-XL
’rel_pos_local_attn’: relative positional embedding and Transformer-XL with local attention using overlapping windows. Attention context is determined by the att_context_size parameter.
’abs_pos’: absolute positional embedding and Transformer
If None is provided, the self_attention_model isn’t changed. Defaults to None.
att_context_size (List[int]) – List of 2 ints corresponding to left and right attention context sizes, or None to keep it as is. Defaults to None.
update_config (bool) – Whether to update the config or not with the new attention model. Defaults to True.
 change_conv_asr_se_context_window(context_window: int, update_config: bool = True)
Update the context window of the SqueezeExcitation module if the provided model contains an encoder which is an instance of ConvASREncoder.
 Parameters
context_window –
An integer representing the number of input timeframes that will be used to compute the context. Each timeframe corresponds to a single window stride of the STFT features.
Say the window_stride = 0.01s, then a context window of 128 represents 128 * 0.01 s of context to compute the Squeeze step.
update_config – Whether to update the config or not with the new context window.
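The frames-to-seconds relation described above is simple arithmetic; a small helper makes it explicit (the 0.01 s window stride is the example value from the text, not a fixed NeMo default):

```python
def se_context_seconds(context_window, window_stride_s=0.01):
    """Each timeframe corresponds to one STFT window stride, so a
    context window of N frames spans N * window_stride seconds."""
    return context_window * window_stride_s


# With a 0.01 s window stride, 128 frames of SE context cover 1.28 s.
span = se_context_seconds(128)
```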
 change_subsampling_conv_chunking_factor(subsampling_conv_chunking_factor: int, update_config: bool = True)
Update the conv_chunking_factor (int) if the function is available in the encoder. Default is 1 (auto). Set it to -1 (disabled) or to a specific value (a power of 2) if you OOM in the conv subsampling layers.
 Parameters
conv_chunking_factor (int) –
 conformer_stream_step(processed_signal: torch.Tensor, processed_signal_length: Optional[torch.Tensor] = None, cache_last_channel: Optional[torch.Tensor] = None, cache_last_time: Optional[torch.Tensor] = None, cache_last_channel_len: Optional[torch.Tensor] = None, keep_all_outputs: bool = True, previous_hypotheses: Optional[List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]] = None, previous_pred_out: Optional[torch.Tensor] = None, drop_extra_pre_encoded: Optional[int] = None, return_transcription: bool = True, return_log_probs: bool = False)
It simulates a forward step with caching for streaming purposes. It supports ASR models whose encoder supports streaming, like Conformer.
 Parameters
processed_signal – the input audio signals
processed_signal_length – the length of the audios
cache_last_channel – the cache tensor for last-channel layers like MHA
cache_last_channel_len – lengths for cache_last_channel
cache_last_time – the cache tensor for last-time layers like convolutions
keep_all_outputs – if set to True, would not drop the extra outputs specified by encoder.streaming_cfg.valid_out_len
previous_hypotheses – the hypotheses from the previous step for RNNT models
previous_pred_out – the predicted outputs from the previous step for CTC models
drop_extra_pre_encoded – number of steps to drop from the beginning of the outputs after the downsampling module. This can be used if extra paddings are added on the left side of the input.
return_transcription – whether to decode and return the transcriptions. It cannot be disabled for Transducer models.
return_log_probs – whether to return the log probs, only valid for CTC models
 Returns
greedy_predictions: the greedy predictions from the decoder
all_hyp_or_transcribed_texts: the decoder hypotheses for Transducer models and the transcriptions for CTC models
cache_last_channel_next: the updated tensor cache for last-channel layers, to be used for the next streaming step
cache_last_time_next: the updated tensor cache for last-time layers, to be used for the next streaming step
cache_last_channel_next_len: the updated lengths for cache_last_channel
best_hyp: the best hypotheses for Transducer models
log_probs: the logits tensor of the current streaming chunk, only returned when return_log_probs=True
encoded_len: the length of the output log_probs + history chunk log_probs, only returned when return_log_probs=True
 transcribe_simulate_cache_aware_streaming(paths2audio_files: List[str], batch_size: int = 4, logprobs: bool = False, return_hypotheses: bool = False, online_normalization: bool = False)
 Parameters
paths2audio_files – (a list) of paths to audio files.
batch_size – (int) batch size to use during inference. Bigger will result in better throughput performance but would use more memory.
logprobs – (bool) pass True to get log probabilities instead of transcripts.
return_hypotheses – (bool) Either return hypotheses or text. With hypotheses, one can do some postprocessing such as getting timestamps or rescoring.
online_normalization – (bool) Perform normalization on the run per chunk.
 Returns
A list of transcriptions (or raw log probabilities if logprobs is True) in the same order as paths2audio_files
 class nemo.collections.asr.parts.mixins.transcription.TranscriptionMixin
Bases:
abc.ABC
An abstract class for transcribable models.
Creates a template function transcribe() that provides an interface to perform transcription of audio tensors or filepaths.
The following abstract methods must be implemented by the subclass:
_transcribe_input_manifest_processing(): Process the provided input arguments (filepaths only) and return a config dict for the dataloader. The dataloader should generally operate on NeMo manifests.
_setup_transcribe_dataloader(): Setup the dataloader for transcription. Receives the output from _transcribe_input_manifest_processing().
_transcribe_forward(): Implements the model’s custom forward pass to return outputs that are processed by _transcribe_output_processing().
_transcribe_output_processing(): Implements the post processing of the model’s outputs to return the results to the user. The result can be a list of objects, list of list of objects, tuple of objects, tuple of list of objects, or a dict of list of objects.
 transcribe(audio: Union[str, List[str], numpy.ndarray, torch.utils.data.DataLoader], batch_size: int = 4, return_hypotheses: bool = False, num_workers: int = 0, channel_selector: Optional[Union[int, Iterable[int], str]] = None, augmentor: Optional[omegaconf.DictConfig] = None, verbose: bool = True, override_config: Optional[nemo.collections.asr.parts.mixins.transcription.TranscribeConfig] = None, **config_kwargs) → Union[List[Any], List[List[Any]], Tuple[Any], Tuple[List[Any]], Dict[str, List[Any]]]
Template function that defines the execution strategy for transcribing audio.
 Parameters
audio – (a single or list) of paths to audio files or a np.ndarray audio array. Can also be a dataloader object that provides values that can be consumed by the model. Recommended length per file is between 5 and 25 seconds. But it is possible to pass a few hours long file if enough GPU memory is available.
batch_size – (int) batch size to use during inference. Bigger will result in better throughput performance but would use more memory.
return_hypotheses – (bool) Either return hypotheses or text. With hypotheses, one can do some postprocessing such as getting timestamps or rescoring.
num_workers – (int) number of workers for DataLoader
channel_selector (int  Iterable[int]  str) – select a single channel or a subset of channels from multichannel audio. If set to ‘average’, it performs averaging across channels. Disabled if set to None. Defaults to None. Uses zerobased indexing.
augmentor – (DictConfig): Augment audio samples during transcription if augmentor is applied.
verbose – (bool) whether to display tqdm progress bar
override_config – (Optional[TranscribeConfig]) user-provided config that overrides the default transcription settings. Note: all other arguments in the function are ignored if override_config is passed. Call it as model.transcribe(audio, override_config=TranscribeConfig(…)).
**config_kwargs – (Optional[Dict]) additional arguments to override the default TranscribeConfig. Note: If override_config is passed, these arguments will be ignored.
 Returns
Output is defined by the subclass implementation of TranscriptionMixin._transcribe_output_processing(). It can be:
List[str/Hypothesis]
List[List[str/Hypothesis]]
Tuple[str/Hypothesis]
Tuple[List[str/Hypothesis]]
Dict[str, List[str/Hypothesis]]
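The channel_selector argument described above can be illustrated outside of NeMo. This is a minimal sketch (plain Python, not NeMo's implementation), assuming multichannel audio is given as one list of per-channel values per sample:

```python
def select_channels(audio, channel_selector=None):
    """Mirror the documented channel_selector semantics.

    audio: list of frames, each frame a list with one value per channel.
    None -> audio unchanged; 'average' -> mean across channels;
    int or iterable of ints -> channel subset (zero-based indexing).
    Illustrative sketch only.
    """
    if channel_selector is None:
        return audio
    if channel_selector == 'average':
        return [sum(frame) / len(frame) for frame in audio]
    if isinstance(channel_selector, int):
        return [frame[channel_selector] for frame in audio]
    return [[frame[c] for c in channel_selector] for frame in audio]
```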
 transcribe_generator(audio, override_config: Optional[nemo.collections.asr.parts.mixins.transcription.TranscribeConfig])
A generator version of transcribe function.

 class nemo.collections.asr.parts.mixins.transcription.TranscribeConfig(batch_size: int = 4, return_hypotheses: bool = False, num_workers: Optional[int] = None, channel_selector: Union[int, Iterable[int], str] = None, augmentor: Optional[omegaconf.DictConfig] = None, verbose: bool = True, partial_hypothesis: Optional[List[Any]] = None, _internal: Optional[nemo.collections.asr.parts.mixins.transcription.InternalTranscribeConfig] = None)
Bases: object
 class nemo.collections.asr.parts.mixins.interctc_mixin.InterCTCMixin
Bases:
object
Adds utilities for computing interCTC loss from https://arxiv.org/abs/2102.03216.
To use, make sure the encoder accesses the interctc['capture_layers'] property in the AccessMixin and registers interctc/layer_output_X and interctc/layer_length_X for all layers that we want to get loss from. Additionally, specify the following config parameters to set up the loss:
interctc:
  # can use different values
  loss_weights: [0.3]
  apply_at_layers: [8]
Then call
self.setup_interctc(ctc_decoder_name, ctc_loss_name, ctc_wer_name) in the init method,
self.add_interctc_losses after computing the regular loss,
self.finalize_interctc_metrics(metrics, outputs, prefix="val_") in the multi_validation_epoch_end method, and
self.finalize_interctc_metrics(metrics, outputs, prefix="test_") in the multi_test_epoch_end method.
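The weighted combination implied by loss_weights can be sketched in plain Python. This is an illustrative formula following the referenced paper (https://arxiv.org/abs/2102.03216), where the regular loss is down-weighted by the sum of the intermediate weights; it is not NeMo's exact bookkeeping, which lives in add_interctc_losses:

```python
def combine_interctc_losses(main_loss, inter_losses, loss_weights):
    """Weighted mix of the regular CTC loss with intermediate losses.

    total = (1 - sum(w)) * main_loss + sum(w_i * inter_loss_i)
    Hedged sketch of the inter-CTC objective, not NeMo source code.
    """
    assert len(inter_losses) == len(loss_weights)
    total = (1.0 - sum(loss_weights)) * main_loss
    for w, loss in zip(loss_weights, inter_losses):
        total += w * loss
    return total
```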
 add_interctc_losses(loss_value: torch.Tensor, transcript: torch.Tensor, transcript_len: torch.Tensor, compute_wer: bool, compute_loss: bool = True, log_wer_num_denom: bool = False, log_prefix: str = '') → Tuple[Optional[torch.Tensor], Dict]
Adds inter-CTC losses if required.
Will also register loss/wer metrics in the returned dictionary.
 Parameters
loss_value (torch.Tensor) – regular loss tensor (will add interCTC loss to it).
transcript (torch.Tensor) – current utterance transcript.
transcript_len (torch.Tensor) – current utterance transcript length.
compute_wer (bool) – whether to compute WER for the current utterance. Should typically be True for validation/test and only True for training if current batch WER should be logged.
compute_loss (bool) – whether to compute loss for the current utterance. Should always be True in training and almost always True in validation, unless all other losses are disabled as well. Defaults to True.
log_wer_num_denom (bool) – if True, will additionally log WER num/denom in the returned metrics dictionary. Should always be True for validation/test to allow correct metrics aggregation. Should always be False for training. Defaults to False.
log_prefix (str) – prefix added to all log values. Should be "" for training and "val_" for validation. Defaults to "".
 Returns
tuple of new loss tensor and dictionary with logged metrics.
 Return type
tuple[Optional[torch.Tensor], Dict]
 finalize_interctc_metrics(metrics: Dict, outputs: List[Dict], prefix: str)
Finalizes InterCTC WER and loss metrics for logging purposes.
Should be called inside multi_validation_epoch_end (with prefix="val_") or multi_test_epoch_end (with prefix="test_").
Note that the metrics dictionary is updated in-place.
 get_captured_interctc_tensors() → List[Tuple[torch.Tensor, torch.Tensor]]
Returns a list of captured tensors from encoder: tuples of (output, length).
Will additionally apply ctc_decoder to the outputs.
 get_interctc_param(param_name)
Either directly gets the parameter from self._interctc_params or calls getattr with the corresponding name.
 is_interctc_enabled() → bool
Returns whether interCTC loss is enabled.
 set_interctc_enabled(enabled: bool)
Can be used to enable/disable InterCTC manually.
 set_interctc_param(param_name, param_value)
Sets the parameter in the self._interctc_params dictionary.
Raises an error if trying to set decoder, loss or wer, as those should always come from the main class.
 setup_interctc(decoder_name, loss_name, wer_name)
Sets up all interctcspecific parameters and checks config consistency.
The caller has to specify the names of the attributes used for CTC-specific WER, decoder and loss computation. They will be looked up in the class state with getattr.
The reason we store the names and look the objects up later is that those objects might change without the setup of this class being re-run, so we always want to look up the most up-to-date object instead of "caching" it here.
Character Encoding Datasets
 class nemo.collections.asr.data.audio_to_text.AudioToCharDataset(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.data.audio_to_text._AudioTextDataset
Dataset that loads tensors via a json file containing paths to audio files, transcripts, and durations (in seconds). Each new line is a different sample. Example below:
{"audio_filepath": "/path/to/audio.wav", "text_filepath": "/path/to/audio.txt", "duration": 23.147}
…
{"audio_filepath": "/path/to/audio.wav", "text": "the transcription", "offset": 301.75, "duration": 0.82, "utt": "utterance_id", "ctm_utt": "en_4156", "side": "A"}
 Parameters
manifest_filepath – Path to manifest json as described above. Can be commaseparated paths.
labels – String containing all the possible characters to map to
sample_rate (int) – Sample rate to resample loaded audio to
int_values (bool) – If true, load samples as 32-bit integers. Defaults to False.
augmentor (nemo.collections.asr.parts.perturb.AudioAugmentor) – An AudioAugmentor object used to augment loaded audio
max_duration – If audio exceeds this length, do not include in dataset
min_duration – If audio is less than this length, do not include in dataset
max_utts – Limit number of utterances
blank_index – blank character index, default = -1
unk_index – unk character index, default = -1
normalize – whether to normalize transcript text. Defaults to True.
bos_id – Id of beginning of sequence symbol to append if not None
eos_id – Id of end of sequence symbol to append if not None
return_sample_id (bool) – whether to return the sample_id as a part of each sample
channel_selector (int  Iterable[int]  str) – select a single channel or a subset of channels from multichannel audio. If set to ‘average’, it performs averaging across channels. Disabled if set to None. Defaults to None. Uses zerobased indexing.
 property output_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]
Returns definitions of module output ports.
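Each manifest line is a standalone JSON object, one sample per line. A minimal sketch of writing such a line (field names taken from the example above; the helper name is hypothetical and paths are placeholders):

```python
import json

def manifest_line(audio_filepath, text, duration, **extra):
    """Build one manifest line as expected by AudioToCharDataset.

    Required fields per the docs: audio_filepath, text (or
    text_filepath), duration. Optional fields such as offset or
    utt can be passed as keyword arguments. Illustrative sketch.
    """
    entry = {"audio_filepath": audio_filepath,
             "text": text,
             "duration": duration}
    entry.update(extra)
    return json.dumps(entry)
```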
 class nemo.collections.asr.data.audio_to_text.TarredAudioToCharDataset(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.data.audio_to_text._TarredAudioToTextDataset
A similar Dataset to the AudioToCharDataset, but which loads tarred audio files.
Accepts a single commaseparated JSON manifest file (in the same style as for the AudioToCharDataset), as well as the path(s) to the tarball(s) containing the wav files. Each line of the manifest should contain the information for one audio file, including at least the transcript and name of the audio file within the tarball.
Valid formats for the audio_tar_filepaths argument include: (1) a single string that can be braceexpanded, e.g. ‘path/to/audio.tar’ or ‘path/to/audio_{1..100}.tar.gz’, or (2) a list of file paths that will not be braceexpanded, e.g. [‘audio_1.tar’, ‘audio_2.tar’, …].
See the WebDataset documentation for more information about accepted data and input formats.
If using multiple workers the number of shards should be divisible by world_size to ensure an even split among workers. If it is not divisible, logging will give a warning but training will proceed. In addition, if using multiprocessing, each shard MUST HAVE THE SAME NUMBER OF ENTRIES after filtering is applied. We currently do not check for this, but your program may hang if the shards are uneven!
Notice that a few arguments are different from the AudioToCharDataset; for example, shuffle (bool) has been replaced by shuffle_n (int).
Additionally, please note that the len() of this DataLayer is assumed to be the length of the manifest after filtering. An incorrect manifest length may lead to some DataLoader issues down the line.
 Parameters
audio_tar_filepaths – Either a list of audio tarball filepaths, or a string (can be braceexpandable).
manifest_filepath (str) – Path to the manifest.
labels (list) – List of characters that can be output by the ASR model. For Jasper, this is the 28 character set {az ‘}. The CTC blank symbol is automatically added later for models using ctc.
sample_rate (int) – Sample rate to resample loaded audio to
int_values (bool) – If true, load samples as 32-bit integers. Defaults to False.
augmentor (nemo.collections.asr.parts.perturb.AudioAugmentor) – An AudioAugmentor object used to augment loaded audio
shuffle_n (int) – How many samples to look ahead and load to be shuffled. See WebDataset documentation for more details. Defaults to 0.
min_duration (float) – Dataset parameter. All training files which have a duration less than min_duration are dropped. Note: Duration is read from the manifest JSON. Defaults to 0.1.
max_duration (float) – Dataset parameter. All training files which have a duration more than max_duration are dropped. Note: Duration is read from the manifest JSON. Defaults to None.
blank_index (int) – Blank character index, defaults to -1.
unk_index (int) – Unknown character index, defaults to -1.
normalize (bool) – Dataset parameter. Whether to use automatic text cleaning. It is highly recommended to manually clean text for best results. Defaults to True.
trim (bool) – Whether to trim silence from the beginning and end of the audio signal using librosa.effects.trim(). Defaults to False.
bos_id (id) – Dataset parameter. Beginning of string symbol id used for seq2seq models. Defaults to None.
eos_id (id) – Dataset parameter. End of string symbol id used for seq2seq models. Defaults to None.
pad_id (id) – Token used to pad when collating samples in batches. If this is None, pads using 0s. Defaults to None.
shard_strategy (str) –
Tarred dataset shard distribution strategy chosen as a str value during ddp.
scatter: The default shard strategy applied by WebDataset, where each node gets a unique set of shards, which are permanently preallocated and never changed at runtime.
replicate: Optional shard strategy, where each node gets all of the set of shards available in the tarred dataset, which are permanently preallocated and never changed at runtime. The benefit of replication is that it allows each node to sample data points from the entire dataset independently of other nodes, and reduces dependence on value of shuffle_n.
global_rank (int) – Worker rank, used for partitioning shards. Defaults to 0.
world_size (int) – Total number of processes, used for partitioning shards. Defaults to 0.
return_sample_id (bool) – whether to return the sample_id as a part of each sample
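The divisibility requirement for the scatter shard strategy noted above is easy to check ahead of time. A small sketch of that arithmetic (the function name is hypothetical; brace-expansion of tar paths is not handled):

```python
def check_shard_split(num_shards, world_size):
    """Return shards-per-worker for the scatter strategy.

    Mirrors the documented requirement that the shard count be
    divisible by world_size for an even split; like NeMo, this
    only warns on uneven splits rather than raising.
    """
    if world_size <= 0:
        raise ValueError("world_size must be positive")
    if num_shards % world_size != 0:
        print(f"Warning: {num_shards} shards not divisible by "
              f"world_size={world_size}; split will be uneven")
    return num_shards // world_size
```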
TexttoText Datasets for Hybrid ASRTTS models
 class nemo.collections.asr.data.text_to_text.TextToTextDataset(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.data.text_to_text.TextToTextDatasetBase
,nemo.core.classes.dataset.Dataset
TexttoText Mapstyle Dataset for hybrid ASRTTS models
 collate_fn(batch: List[Union[nemo.collections.asr.data.text_to_text.TextToTextItem, tuple]]) → Union[nemo.collections.asr.data.text_to_text.TextToTextBatch, nemo.collections.asr.data.text_to_text.TextOrAudioToTextBatch, tuple]
Collate function for the dataloader. Can accept a mixed batch of text-to-text items and audio-text items (typical for ASR).
 class nemo.collections.asr.data.text_to_text.TextToTextIterableDataset(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.data.text_to_text.TextToTextDatasetBase
,nemo.core.classes.dataset.IterableDataset
Text-to-Text Iterable Dataset for hybrid ASR-TTS models. Only the part necessary for the current process is loaded and stored.
 collate_fn(batch: List[Union[nemo.collections.asr.data.text_to_text.TextToTextItem, tuple]]) → Union[nemo.collections.asr.data.text_to_text.TextToTextBatch, nemo.collections.asr.data.text_to_text.TextOrAudioToTextBatch, tuple]
Collate function for the dataloader. Can accept a mixed batch of text-to-text items and audio-text items (typical for ASR).
Subword Encoding Datasets
 class nemo.collections.asr.data.audio_to_text.AudioToBPEDataset(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.data.audio_to_text._AudioTextDataset
Dataset that loads tensors via a json file containing paths to audio files, transcripts, and durations (in seconds). Each new line is a different sample. Example below:
{"audio_filepath": "/path/to/audio.wav", "text_filepath": "/path/to/audio.txt", "duration": 23.147}
…
{"audio_filepath": "/path/to/audio.wav", "text": "the transcription", "offset": 301.75, "duration": 0.82, "utt": "utterance_id", "ctm_utt": "en_4156", "side": "A"}
In practice, the dataset and manifest used for character encoding and byte pair encoding are exactly the same. The only difference lies in how the dataset tokenizes the text in the manifest.
 Parameters
manifest_filepath – Path to manifest json as described above. Can be commaseparated paths.
tokenizer – A subclass of the Tokenizer wrapper found in the common collection, nemo.collections.common.tokenizers.TokenizerSpec. ASR Models support a subset of all available tokenizers.
sample_rate (int) – Sample rate to resample loaded audio to
int_values (bool) – If true, load samples as 32-bit integers. Defaults to False.
augmentor (nemo.collections.asr.parts.perturb.AudioAugmentor) – An AudioAugmentor object used to augment loaded audio
max_duration – If audio exceeds this length, do not include in dataset
min_duration – If audio is less than this length, do not include in dataset
max_utts – Limit number of utterances
trim – Whether to trim silence segments
use_start_end_token – Boolean which dictates whether to add [BOS] and [EOS] tokens to beginning and ending of speech respectively.
return_sample_id (bool) – whether to return the sample_id as a part of each sample
channel_selector (int  Iterable[int]  str) – select a single channel or a subset of channels from multichannel audio. If set to ‘average’, it performs averaging across channels. Disabled if set to None. Defaults to None. Uses zerobased indexing.
 property output_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]
Returns definitions of module output ports.
 class nemo.collections.asr.data.audio_to_text.TarredAudioToBPEDataset(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.data.audio_to_text._TarredAudioToTextDataset
A similar Dataset to the AudioToBPEDataset, but which loads tarred audio files.
Accepts a single commaseparated JSON manifest file (in the same style as for the AudioToBPEDataset), as well as the path(s) to the tarball(s) containing the wav files. Each line of the manifest should contain the information for one audio file, including at least the transcript and name of the audio file within the tarball.
Valid formats for the audio_tar_filepaths argument include: (1) a single string that can be braceexpanded, e.g. ‘path/to/audio.tar’ or ‘path/to/audio_{1..100}.tar.gz’, or (2) a list of file paths that will not be braceexpanded, e.g. [‘audio_1.tar’, ‘audio_2.tar’, …].
See the WebDataset documentation for more information about accepted data and input formats.
If using multiple workers the number of shards should be divisible by world_size to ensure an even split among workers. If it is not divisible, logging will give a warning but training will proceed. In addition, if using multiprocessing, each shard MUST HAVE THE SAME NUMBER OF ENTRIES after filtering is applied. We currently do not check for this, but your program may hang if the shards are uneven!
Notice that a few arguments are different from the AudioToBPEDataset; for example, shuffle (bool) has been replaced by shuffle_n (int).
Additionally, please note that the len() of this DataLayer is assumed to be the length of the manifest after filtering. An incorrect manifest length may lead to some DataLoader issues down the line.
 Parameters
audio_tar_filepaths – Either a list of audio tarball filepaths, or a string (can be braceexpandable).
manifest_filepath (str) – Path to the manifest.
tokenizer (TokenizerSpec) – Either a Word Piece Encoding tokenizer (BERT), or a Sentence Piece Encoding tokenizer (BPE). The CTC blank symbol is automatically added later for models using ctc.
sample_rate (int) – Sample rate to resample loaded audio to
int_values (bool) – If true, load samples as 32-bit integers. Defaults to False.
augmentor (nemo.collections.asr.parts.perturb.AudioAugmentor) – An AudioAugmentor object used to augment loaded audio
shuffle_n (int) – How many samples to look ahead and load to be shuffled. See WebDataset documentation for more details. Defaults to 0.
min_duration (float) – Dataset parameter. All training files which have a duration less than min_duration are dropped. Note: Duration is read from the manifest JSON. Defaults to 0.1.
max_duration (float) – Dataset parameter. All training files which have a duration more than max_duration are dropped. Note: Duration is read from the manifest JSON. Defaults to None.
trim (bool) – Whether to trim silence from the beginning and end of the audio signal using librosa.effects.trim(). Defaults to False.
use_start_end_token – Boolean which dictates whether to add [BOS] and [EOS] tokens to beginning and ending of speech respectively.
pad_id (id) – Token used to pad when collating samples in batches. If this is None, pads using 0s. Defaults to None.
shard_strategy (str) –
Tarred dataset shard distribution strategy chosen as a str value during ddp.
scatter: The default shard strategy applied by WebDataset, where each node gets a unique set of shards, which are permanently preallocated and never changed at runtime.
replicate: Optional shard strategy, where each node gets all of the set of shards available in the tarred dataset, which are permanently preallocated and never changed at runtime. The benefit of replication is that it allows each node to sample data points from the entire dataset independently of other nodes, and reduces dependence on value of shuffle_n.
global_rank (int) – Worker rank, used for partitioning shards. Defaults to 0.
world_size (int) – Total number of processes, used for partitioning shards. Defaults to 0.
return_sample_id (bool) – whether to return the sample_id as a part of each sample
 class nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.modules.audio_preprocessing.AudioPreprocessor
,nemo.core.classes.exportable.Exportable
Featurizer module that converts wavs to mel spectrograms.
 Parameters
sample_rate (int) – Sample rate of the input audio data. Defaults to 16000
window_size (float) – Size of window for fft in seconds Defaults to 0.02
window_stride (float) – Stride of window for fft in seconds Defaults to 0.01
n_window_size (int) – Size of window for fft in samples Defaults to None. Use one of window_size or n_window_size.
n_window_stride (int) – Stride of window for fft in samples Defaults to None. Use one of window_stride or n_window_stride.
window (str) – Windowing function for fft. can be one of [‘hann’, ‘hamming’, ‘blackman’, ‘bartlett’] Defaults to “hann”
normalize (str) – Can be one of ['per_feature', 'all_features']; all other options disable feature normalization. 'all_features' normalizes the entire spectrogram to be mean 0 with std 1. 'per_feature' normalizes per channel / freq instead. Defaults to "per_feature"
n_fft (int) – Length of FT window. If None, it uses the smallest power of 2 that is larger than n_window_size. Defaults to None
preemph (float) – Amount of pre emphasis to add to audio. Can be disabled by passing None. Defaults to 0.97
features (int) – Number of mel spectrogram freq bins to output. Defaults to 64
lowfreq (int) – Lower bound on mel basis in Hz. Defaults to 0
highfreq (int) – Upper bound on mel basis in Hz. Defaults to None
log (bool) – Log features. Defaults to True
log_zero_guard_type (str) – Need to avoid taking the log of zero. There are two options: “add” or “clamp”. Defaults to “add”.
log_zero_guard_value (float, or str) – The "add" and "clamp" options require a value to add or clamp to. log_zero_guard_value can either be a float or "tiny" or "eps". torch.finfo is used if "tiny" or "eps" is passed. Defaults to 2**-24.
dither (float) – Amount of white-noise dithering. Defaults to 1e-5
pad_to (int) – Ensures that the output size of the time dimension is a multiple of pad_to. Defaults to 16
frame_splicing (int) – Defaults to 1
exact_pad (bool) – If True, sets stft center to False and adds padding, such that num_frames = audio_length // hop_length. Defaults to False.
pad_value (float) – The value that shorter mels are padded with. Defaults to 0
mag_power (float) – The power that the linear spectrogram is raised to prior to multiplication with mel basis. Defaults to 2 for a power spec
rng – Random number generator
nb_augmentation_prob (float) – Probability with which narrowband augmentation would be applied to samples in the batch. Defaults to 0.0
nb_max_freq (int) – Frequency above which all frequencies will be masked for narrowband augmentation. Defaults to 4000
use_torchaudio – Whether to use the torchaudio implementation.
mel_norm – Normalization used for mel filterbank weights. Defaults to ‘slaney’ (area normalization)
stft_exact_pad – Deprecated argument, kept for compatibility with older checkpoints.
stft_conv – Deprecated argument, kept for compatibility with older checkpoints.
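The difference between the two log_zero_guard_type options is where the guard value enters the log. A hedged sketch of that arithmetic in plain Python (not the torch implementation, and using the 2**-24 default from above):

```python
import math

def guarded_log(x, guard_type="add", guard_value=2 ** -24):
    """Guard against log(0), per the documented options.

    "add" shifts the input before the log; "clamp" floors the
    input at guard_value. Illustrative sketch only.
    """
    if guard_type == "add":
        return math.log(x + guard_value)
    if guard_type == "clamp":
        return math.log(max(x, guard_value))
    raise ValueError(f"unknown log_zero_guard_type: {guard_type}")
```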
 input_example(max_batch: int = 8, max_dim: int = 32000, min_length: int = 200)
Override this method if random inputs won't work.
 Returns
A tuple sample of valid input data.
 property input_types
Returns definitions of module input ports.
 property output_types
Returns definitions of module output ports.
 processed_signal:
0: AxisType(BatchTag)
1: AxisType(MelSpectrogramSignalTag)
2: AxisType(ProcessedTimeTag)
 processed_length:
0: AxisType(BatchTag)
 classmethod restore_from(restore_path: str)
Restores model instance (weights and configuration) from a .nemo file
 Parameters
restore_path – path to .nemo file from which model should be instantiated
override_config_path – path to a yaml config that will override the internal config file or an OmegaConf / DictConfig object representing the model config.
map_location – Optional torch.device() to map the instantiated model to a device. By default (None), it will select a GPU if available, falling back to CPU otherwise.
strict – Passed to load_state_dict. By default True
return_config – If set to true, will return just the underlying config of the restored model as an OmegaConf DictConfig object without instantiating the model.
trainer – An optional Trainer object, passed to the model constructor.
save_restore_connector – An optional SaveRestoreConnector object that defines the implementation of the restore_from() method.
 save_to(save_path: str)
Standardized method to save a tarfile containing the checkpoint, config, and any additional artifacts. Implemented via nemo.core.connectors.save_restore_connector.SaveRestoreConnector.save_to().
 Parameters
save_path – str, path to where the file should be saved.
 class nemo.collections.asr.modules.AudioToMFCCPreprocessor(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.modules.audio_preprocessing.AudioPreprocessor
Preprocessor that converts wavs to MFCCs. Uses torchaudio.transforms.MFCC.
 Parameters
sample_rate – The sample rate of the audio. Defaults to 16000.
window_size – Size of window for fft in seconds. Used to calculate the win_length arg for mel spectrogram. Defaults to 0.02
window_stride – Stride of window for fft in seconds. Used to calculate the hop_length arg for mel spect. Defaults to 0.01
n_window_size – Size of window for fft in samples Defaults to None. Use one of window_size or n_window_size.
n_window_stride – Stride of window for fft in samples Defaults to None. Use one of window_stride or n_window_stride.
window – Windowing function for fft. can be one of [‘hann’, ‘hamming’, ‘blackman’, ‘bartlett’, ‘none’, ‘null’]. Defaults to ‘hann’
n_fft – Length of FT window. If None, it uses the smallest power of 2 that is larger than n_window_size. Defaults to None
lowfreq (int) – Lower bound on mel basis in Hz. Defaults to 0
highfreq (int) – Upper bound on mel basis in Hz. Defaults to None
n_mels – Number of mel filterbanks. Defaults to 64
n_mfcc – Number of coefficients to retain Defaults to 64
dct_type – Type of discrete cosine transform to use
norm – Type of norm to use
log – Whether to use logmel spectrograms instead of dbscaled. Defaults to True.
 property input_types
Returns definitions of module input ports.
 property output_types
Returns definitions of module output ports.
 classmethod restore_from(restore_path: str)
Restores model instance (weights and configuration) from a .nemo file
 Parameters
restore_path – path to .nemo file from which model should be instantiated
override_config_path – path to a yaml config that will override the internal config file or an OmegaConf / DictConfig object representing the model config.
map_location – Optional torch.device() to map the instantiated model to a device. By default (None), it will select a GPU if available, falling back to CPU otherwise.
strict – Passed to load_state_dict. By default True
return_config – If set to true, will return just the underlying config of the restored model as an OmegaConf DictConfig object without instantiating the model.
trainer – An optional Trainer object, passed to the model constructor.
save_restore_connector – An optional SaveRestoreConnector object that defines the implementation of the restore_from() method.
 save_to(save_path: str)
Standardized method to save a tarfile containing the checkpoint, config, and any additional artifacts. Implemented via nemo.core.connectors.save_restore_connector.SaveRestoreConnector.save_to().
 Parameters
save_path – str, path to where the file should be saved.
 class nemo.collections.asr.modules.SpectrogramAugmentation(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
Performs time and freq cuts in one of two ways. SpecAugment zeroes out vertical and horizontal sections as described in SpecAugment (https://arxiv.org/abs/1904.08779). Arguments for use with SpecAugment are freq_masks, time_masks, freq_width, and time_width. SpecCutout zeroes out rectangles as described in Cutout (https://arxiv.org/abs/1708.04552). Arguments for use with Cutout are rect_masks, rect_freq, and rect_time.
 Parameters
freq_masks (int) – how many frequency segments should be cut. Defaults to 0.
time_masks (int) – how many time segments should be cut Defaults to 0.
freq_width (int) – maximum number of frequencies to be cut in one segment. Defaults to 10.
time_width (int) – maximum number of time steps to be cut in one segment Defaults to 10.
rect_masks (int) – how many rectangular masks should be cut Defaults to 0.
rect_freq (int) – maximum size of cut rectangles along the frequency dimension Defaults to 5.
rect_time (int) – maximum size of cut rectangles along the time dimension Defaults to 25.
use_numba_spec_augment – use numba code for Spectrogram augmentation
use_vectorized_spec_augment – use vectorized code for Spectrogram augmentation
 property input_types
Returns definitions of module input types
 property output_types
Returns definitions of module output types
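The SpecAugment-style masking described above can be sketched on a plain nested-list "spectrogram". This is an illustrative toy, not NeMo's (numba/vectorized) implementation; mask widths are sampled uniformly up to freq_width/time_width as in the parameter descriptions:

```python
import random

def spec_augment(spec, freq_masks=2, time_masks=2,
                 freq_width=10, time_width=10, rng=None):
    """Zero out horizontal (freq) and vertical (time) bands.

    spec: list of frequency rows, spec[f][t]. Toy sketch of
    SpecAugment (https://arxiv.org/abs/1904.08779).
    """
    rng = rng or random.Random()
    n_freq, n_time = len(spec), len(spec[0])
    for _ in range(freq_masks):
        w = rng.randint(0, freq_width)
        f0 = rng.randint(0, max(0, n_freq - w))
        for f in range(f0, min(f0 + w, n_freq)):
            spec[f] = [0.0] * n_time  # mask a whole frequency band
    for _ in range(time_masks):
        w = rng.randint(0, time_width)
        t0 = rng.randint(0, max(0, n_time - w))
        for row in spec:
            for t in range(t0, min(t0 + w, n_time)):
                row[t] = 0.0  # mask a time segment across all freqs
    return spec
```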
 class nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation(*args: Any, **kwargs: Any)
Bases:
nemo.core.classes.module.NeuralModule
Pad or Crop the incoming Spectrogram to a certain shape.
 Parameters
audio_length (int) – the final number of timesteps that is required. The signal will be either padded or cropped temporally to this size.
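The pad-or-crop behavior along the time dimension can be illustrated on a single row of frames. A hedged sketch (NeMo operates on batched spectrogram tensors; the pad_value name here is an assumption for illustration):

```python
def crop_or_pad(frames, audio_length, pad_value=0.0):
    """Make the time dimension exactly audio_length.

    Crops from the right if too long, right-pads with pad_value
    if too short. Illustrative sketch of the module's effect.
    """
    if len(frames) >= audio_length:
        return frames[:audio_length]
    return frames + [pad_value] * (audio_length - len(frames))
```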
 property input_types
Returns definitions of module output ports.
 property output_types
Returns definitions of module output ports.
 classmethod restore_from(restore_path: str)
Restores model instance (weights and configuration) from a .nemo file
 Parameters
restore_path – path to .nemo file from which model should be instantiated
override_config_path – path to a yaml config that will override the internal config file or an OmegaConf / DictConfig object representing the model config.
map_location – Optional torch.device() to map the instantiated model to a device. By default (None), it will select a GPU if available, falling back to CPU otherwise.
strict – Passed to load_state_dict. By default True
return_config – If set to true, will return just the underlying config of the restored model as an OmegaConf DictConfig object without instantiating the model.
trainer – An optional Trainer object, passed to the model constructor.
save_restore_connector – An optional SaveRestoreConnector object that defines the implementation of the restore_from() method.
 save_to(save_path: str)
Standardized method to save a tarfile containing the checkpoint, config, and any additional artifacts. Implemented via nemo.core.connectors.save_restore_connector.SaveRestoreConnector.save_to().
 Parameters
save_path – str, path to where the file should be saved.
 class nemo.collections.asr.parts.preprocessing.perturb.SpeedPerturbation(sr, resample_type, min_speed_rate=0.9, max_speed_rate=1.1, num_rates=5, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Performs Speed Augmentation by resampling the data to a different sampling rate, which does not preserve pitch.
Note: This is a very slow operation for online augmentation. If space allows, it is preferable to precompute and save the files to augment the dataset.
 Parameters
sr – Original sampling rate.
resample_type – Type of resampling operation that will be performed. For better speed using resampy’s fast resampling method, use resample_type=’kaiser_fast’. For highquality resampling, set resample_type=’kaiser_best’. To use scipy.signal.resample, set resample_type=’fft’ or resample_type=’scipy’
min_speed_rate – Minimum sampling rate modifier.
max_speed_rate – Maximum sampling rate modifier.
num_rates – Number of discrete rates to allow. Can be a positive or negative integer. If a positive integer greater than 0 is provided, the range of speed rates will be discretized into num_rates values. If a negative integer or 0 is provided, the full range of speed rates will be sampled uniformly. Note: If a positive integer is provided and the resultant discretized range of rates contains the value '1.0', then those samples with rate=1.0 will not be augmented at all and simply skipped. This is to avoid unnecessary augmentation and increased computation time. Effective augmentation chance in such a case is prob * (num_rates - 1) / num_rates * 100 %, where prob is the global probability of a sample being augmented.
rng – Random seed. Default is None
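The effective augmentation chance in the num_rates note is simple arithmetic; a sketch assuming the discretized rate grid contains 1.0 (the function name is illustrative):

```python
def effective_aug_chance(prob, num_rates):
    """Effective per-sample augmentation probability.

    When the discretized grid of num_rates speed rates contains
    1.0, draws of rate=1.0 are skipped, so the effective chance is
    prob * (num_rates - 1) / num_rates. Sketch of the docstring note.
    """
    return prob * (num_rates - 1) / num_rates
```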
 class nemo.collections.asr.parts.preprocessing.perturb.TimeStretchPerturbation(min_speed_rate=0.9, max_speed_rate=1.1, num_rates=5, n_fft=512, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Timestretch an audio series by a fixed rate while preserving pitch, based on [1, 2].
Note: This is a simplified implementation, intended primarily for reference and pedagogical purposes. It makes no attempt to handle transients, and is likely to produce audible artifacts.
References: [1] Ellis, D. P. W. "A phase vocoder in Matlab." Columbia University, 2002. (http://www.ee.columbia.edu/~dpwe/resources/matlab/pvoc/) [2] librosa.effects.time_stretch (https://librosa.github.io/librosa/generated/librosa.effects.time_stretch.html)
 Parameters
min_speed_rate – Minimum sampling rate modifier.
max_speed_rate – Maximum sampling rate modifier.
num_rates – Number of discrete rates to allow. Can be a positive or negative integer. If a positive integer is provided, the range of speed rates will be discretized into num_rates values. If a negative integer or 0 is provided, the full range of speed rates will be sampled uniformly. Note: If a positive integer is provided and the resultant discretized range of rates contains the value '1.0', then those samples with rate=1.0 will not be augmented at all and will simply be skipped. This is to avoid unnecessary augmentation and reduce computation time. The effective augmentation chance in such a case is prob * ((num_rates - 1) / num_rates) * 100 %, where prob is the global probability of a sample being augmented.
n_fft – Number of fft filters to be computed.
rng – Random seed. Default is None
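The num_rates semantics shared by the speed and time-stretch perturbations can be sketched as follows (a hypothetical helper, not NeMo's implementation): a positive num_rates discretizes the range into a fixed grid, while num_rates <= 0 samples the range uniformly.

```python
import random

def sample_speed_rate(min_rate=0.9, max_rate=1.1, num_rates=5, seed=None):
    """Draw one rate per the num_rates rules described above."""
    rng = random.Random(seed)
    if num_rates > 0:
        if num_rates == 1:
            return min_rate
        step = (max_rate - min_rate) / (num_rates - 1)
        grid = [min_rate + i * step for i in range(num_rates)]
        return rng.choice(grid)  # discretized grid, may include 1.0
    return rng.uniform(min_rate, max_rate)  # continuous uniform sampling
```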
 class nemo.collections.asr.parts.preprocessing.perturb.GainPerturbation(min_gain_dbfs=-10, max_gain_dbfs=10, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Applies random gain to the audio.
 Parameters
min_gain_dbfs (float) – Min gain level in dB
max_gain_dbfs (float) – Max gain level in dB
rng (int) – Random seed. Default is None
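The gain perturbation above amounts to drawing a gain in dB and scaling the waveform by the linear factor 10 ** (gain_db / 20). A minimal sketch (hypothetical helper, not NeMo's implementation):

```python
import random

def apply_random_gain(samples, min_gain_dbfs=-10.0, max_gain_dbfs=10.0, seed=None):
    """Scale a waveform by a random gain drawn in dB."""
    gain_db = random.Random(seed).uniform(min_gain_dbfs, max_gain_dbfs)
    scale = 10.0 ** (gain_db / 20.0)  # dB -> linear amplitude factor
    return [s * scale for s in samples]
```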
 class nemo.collections.asr.parts.preprocessing.perturb.ImpulsePerturbation(manifest_path=None, audio_tar_filepaths=None, shuffle_n=128, normalize_impulse=False, shift_impulse=False, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Convolves audio with a Room Impulse Response.
 Parameters
manifest_path (list) – Manifest file for RIRs
audio_tar_filepaths (list) – Tar files, if RIR audio files are tarred
shuffle_n (int) – Shuffle parameter for shuffling buffered files from the tar files
normalize_impulse (bool) – Normalize impulse response to zero mean and amplitude 1
shift_impulse (bool) – Shift impulse response to adjust for delay at the beginning
rng (int) – Random seed. Default is None
 class nemo.collections.asr.parts.preprocessing.perturb.ShiftPerturbation(min_shift_ms=-5.0, max_shift_ms=5.0, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Perturbs audio by shifting the audio in time by a random amount between min_shift_ms and max_shift_ms. The final length of the audio is kept unaltered by padding the audio with zeros.
 Parameters
min_shift_ms (float) – Minimum time in milliseconds by which audio will be shifted
max_shift_ms (float) – Maximum time in milliseconds by which audio will be shifted
rng (int) – Random seed. Default is None
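The length-preserving shift described above can be sketched in a few lines (hypothetical helper; the shift is given in samples here, i.e. shift_ms * sample_rate / 1000):

```python
def shift_audio(samples, shift):
    """Shift audio in time; zeros pad so the output length is unchanged.
    Positive shift delays the audio, negative shift advances it."""
    n = len(samples)
    if shift >= 0:
        return [0.0] * min(shift, n) + samples[: max(n - shift, 0)]
    s = min(-shift, n)
    return samples[s:] + [0.0] * s
```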
 class nemo.collections.asr.parts.preprocessing.perturb.NoisePerturbation(manifest_path=None, min_snr_db=10, max_snr_db=50, max_gain_db=300.0, rng=None, audio_tar_filepaths=None, shuffle_n=100, orig_sr=16000)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Perturbation that adds noise to input audio.
 Parameters
manifest_path (str) – Manifest file with paths to noise files
min_snr_db (float) – Minimum SNR of audio after noise is added
max_snr_db (float) – Maximum SNR of audio after noise is added
max_gain_db (float) – Maximum gain that can be applied on the noise sample
audio_tar_filepaths (list) – Tar files, if noise audio files are tarred
shuffle_n (int) – Shuffle parameter for shuffling buffered files from the tar files
orig_sr (int) – Original sampling rate of the noise files
rng (int) – Random seed. Default is None
 perturb(data, ref_mic=0)
 Parameters
data (AudioSegment) – audio data
ref_mic (int) – reference mic index for scaling multichannel audios
 perturb_with_foreground_noise(data, noise, data_rms=None, max_noise_dur=2, max_additions=1, ref_mic=0)
 Parameters
data (AudioSegment) – audio data
noise (AudioSegment) – noise data
data_rms (Union[float, List[float]]) – rms_db for data input
max_noise_dur (float) – Max noise duration
max_additions (int) – number of times for adding noise
ref_mic (int) – reference mic index for scaling multichannel audios
 perturb_with_input_noise(data, noise, data_rms=None, ref_mic=0)
 Parameters
data (AudioSegment) – audio data
noise (AudioSegment) – noise data
data_rms (Union[float, List[float]]) – rms_db for data input
ref_mic (int) – reference mic index for scaling multichannel audios
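The min_snr_db / max_snr_db / max_gain_db parameters above suggest how the noise gain could be derived from a target SNR: scale the noise so the data-to-noise RMS gap equals the desired SNR, capped at max_gain_db. A sketch (hypothetical helper, not NeMo's implementation):

```python
def snr_noise_gain_db(data_rms_db, noise_rms_db, snr_db, max_gain_db=300.0):
    """Gain (dB) to apply to the noise so that
    data_rms_db - (noise_rms_db + gain_db) == snr_db, capped at max_gain_db."""
    return min(data_rms_db - noise_rms_db - snr_db, max_gain_db)
```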
 class nemo.collections.asr.parts.preprocessing.perturb.WhiteNoisePerturbation(min_level=-90, max_level=-46, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Perturbation that adds white noise to an audio file in the training dataset.
 Parameters
min_level (int) – Minimum level in dB at which white noise should be added
max_level (int) – Maximum level in dB at which white noise should be added
rng (int) – Random seed. Default is None
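White noise at a given dB level can be sketched as Gaussian noise whose standard deviation is the linear equivalent of that level (hypothetical helper, not NeMo's implementation):

```python
import random

def white_noise(num_samples, level_db, seed=None):
    """Generate Gaussian white noise with std = 10 ** (level_db / 20)."""
    rng = random.Random(seed)
    std = 10.0 ** (level_db / 20.0)  # dB level -> linear amplitude
    return [rng.gauss(0.0, std) for _ in range(num_samples)]
```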
 class nemo.collections.asr.parts.preprocessing.perturb.RirAndNoisePerturbation(rir_manifest_path=None, rir_prob=0.5, noise_manifest_paths=None, noise_prob=1.0, min_snr_db=0, max_snr_db=50, rir_tar_filepaths=None, rir_shuffle_n=100, noise_tar_filepaths=None, apply_noise_rir=False, orig_sample_rate=None, max_additions=5, max_duration=2.0, bg_noise_manifest_paths=None, bg_noise_prob=1.0, bg_min_snr_db=10, bg_max_snr_db=50, bg_noise_tar_filepaths=None, bg_orig_sample_rate=None, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
RIR augmentation with additive foreground and background noise. In this implementation audio data is augmented by first convolving the audio with a Room Impulse Response and then adding foreground noise and background noise at various SNRs. RIR, foreground and background noises should either be supplied with a manifest file or as tarred audio files (faster).
Different sets of noise audio files can be supplied based on the original sampling rate of the noise. This is useful while training a mixed sample rate model. For example, when training a mixed model with 8 kHz and 16 kHz audio with a target sampling rate of 16 kHz, one would want to augment 8 kHz data with 8 kHz noise rather than 16 kHz noise.
 Parameters
rir_manifest_path – Manifest file for RIRs
rir_tar_filepaths – Tar files, if RIR audio files are tarred
rir_prob – Probability of applying a RIR
noise_manifest_paths – Foreground noise manifest path
min_snr_db – Min SNR for foreground noise
max_snr_db – Max SNR for foreground noise
noise_tar_filepaths – Tar files, if noise files are tarred
apply_noise_rir – Whether to convolve foreground noise with a random RIR
orig_sample_rate – Original sampling rate of foreground noise audio
max_additions – Max number of times foreground noise is added to an utterance
max_duration – Max duration of foreground noise
bg_noise_manifest_paths – Background noise manifest path
bg_min_snr_db – Min SNR for background noise
bg_max_snr_db – Max SNR for background noise
bg_noise_tar_filepaths – Tar files, if noise files are tarred
bg_orig_sample_rate – Original sampling rate of background noise audio
rng – Random seed. Default is None
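The augmentation order described above (RIR convolution first, then foreground and background noise, each gated by its own probability) can be sketched as follows. The three callables are hypothetical stand-ins for the actual RIR and noise steps:

```python
import random

def rir_and_noise_pipeline(audio, apply_rir, add_fg_noise, add_bg_noise,
                           rir_prob=0.5, noise_prob=1.0, bg_noise_prob=1.0,
                           seed=None):
    """Probability-gated augmentation chain, mirroring the order above."""
    rng = random.Random(seed)
    if rng.random() < rir_prob:        # convolve with a room impulse response
        audio = apply_rir(audio)
    if rng.random() < noise_prob:      # add foreground noise at some SNR
        audio = add_fg_noise(audio)
    if rng.random() < bg_noise_prob:   # add background noise at some SNR
        audio = add_bg_noise(audio)
    return audio
```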
 class nemo.collections.asr.parts.preprocessing.perturb.TranscodePerturbation(codecs=None, rng=None)
Bases:
nemo.collections.asr.parts.preprocessing.perturb.Perturbation
Audio codec augmentation. This implementation uses sox to transcode audio with low-rate audio codecs, so users need to make sure that the installed sox version supports the codecs used here (G.711 and AMR-NB).
 Parameters
codecs (List[str]) – A list of codecs to be transcoded to. Default is None.
rng (int) – Random seed. Default is None.
CTC Decoding
 class nemo.collections.asr.parts.submodules.ctc_decoding.CTCDecoding(decoding_cfg, vocabulary)
Bases:
nemo.collections.asr.parts.submodules.ctc_decoding.AbstractCTCDecoding
Used for performing CTC autoregressive / nonautoregressive decoding of the logprobs for character based models.
 Parameters
decoding_cfg –
A dictlike object which contains the following keyvalue pairs.
 strategy: str value which represents the type of decoding that can occur. Possible values are:
greedy (for greedy decoding).
beam (for DeepSpeed KenLM based decoding).
 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or word based timestamp mapping the output log-probabilities to discrete intervals of timestamps. The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated. Can take the following values: "char" for character/subword time stamps, "word" for word level time stamps and "all" (default) for both character level and word level time stamps.
 word_seperator: Str token representing the separator between words.
 preserve_alignments: Bool flag which preserves the history of logprobs generated during decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for logprobs in it. Here, logprobs is a torch.Tensor.
 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence scores. In order to obtain hypotheses with confidence scores, please utilize the ctc_decoder_predictions_tensor function with the preserve_frame_confidence flag set to True.
 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated during decoding. When set to true, the Hypothesis will contain the non-null value for frame_confidence in it. Here, frame_confidence is a List of floats.
 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for token_confidence in it. Here, token_confidence is a List of floats. The length of the list corresponds to the number of recognized tokens.
 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for word_confidence in it. Here, word_confidence is a List of floats. The length of the list corresponds to the number of recognized words.
 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded from the token_confidence.
 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence. Valid options are mean, min, max, prod.
 tdt_include_duration: Bool flag indicating that the duration confidence scores are to be calculated and attached to the regular frame confidence, making a TDT frame confidence element a pair: (prediction_confidence, duration_confidence).
 method_cfg: A dict-like object which contains the method name and settings to compute per-frame confidence scores.
 name: The method name (str). Supported values:
'max_prob' for using the maximum token probability as a confidence.
'entropy' for using a normalized entropy of a log-likelihood vector.
 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to entropy. Supported values:
'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided, the formula is the following: H_α = -sum_i(p^α_i * log(p^α_i)). Note that for this entropy, the alpha should comply with the following inequality: (log(V) + 2 - sqrt(log^2(V) + 4)) / (2 * log(V)) <= α <= (1 + log(V - 1)) / log(V - 1), where V is the model vocabulary size.
'tsallis' for the Tsallis entropy with the Boltzmann constant one. The Tsallis entropy formula is the following: H_α = 1/(α - 1) * (1 - sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/Tsallis_entropy
'renyi' for the Rényi entropy. The Rényi entropy formula is the following: H_α = 1/(1 - α) * log_2(sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0. When the alpha equals one, scaling is not applied to 'max_prob', and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i * log(p_i)).
 entropy_norm: A mapping of the entropy value to the interval [0, 1]. Supported values:
'lin' for using the linear mapping.
'exp' for using exponential mapping with linear shift.
 batch_dim_index: Index of the batch dimension of the targets and predictions parameters of the ctc_decoder_predictions_tensor methods. Can be either 0 or 1.
The config may further contain the following sub-dictionaries:
 "greedy":
preserve_alignments: Same as above, overrides the above value.
compute_timestamps: Same as above, overrides the above value.
preserve_frame_confidence: Same as above, overrides the above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
 "beam":
 beam_size: int, defining the beam size for beam search. Must be >= 1. If beam_size == 1, a cached greedy search will be performed, which might give slightly different results compared to the greedy search above.
 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the hypotheses after beam search has concluded. This flag is set by default.
 beam_alpha: float, the strength of the language model on the final score of a token. final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
 beam_beta: float, the strength of the sequence length penalty on the final score of a token. final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen). If the path is invalid (the file is not found at the path), a deferred error will be raised at the moment of beam search calculation, so that users may update / change the decoding strategy to point to the correct file.
blank_id – The id of the CTC blank token.
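A minimal decoding_cfg sketch covering the keys above. A plain dict stands in for the dict-like config object, and all values are illustrative choices, not documented defaults:

```python
# Hypothetical config sketch; keys mirror the decoding_cfg description above.
decoding_cfg = {
    "strategy": "greedy",             # or "beam"
    "compute_timestamps": False,
    "ctc_timestamp_type": "all",      # "char", "word", or "all"
    "word_seperator": " ",            # (sic) key name as documented above
    "preserve_alignments": False,
    "confidence_cfg": {
        "preserve_frame_confidence": False,
        "exclude_blank": True,
        "aggregation": "min",         # mean / min / max / prod
        "method_cfg": {
            "name": "entropy",
            "entropy_type": "tsallis",
            "alpha": 0.33,
            "entropy_norm": "exp",
        },
    },
    "beam": {
        "beam_size": 4,
        "return_best_hypothesis": True,
        "beam_alpha": 1.0,
        "beam_beta": 0.0,
        "kenlm_path": None,
    },
}
```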
 decode_ids_to_tokens(tokens: List[int]) → List[str]
Implemented by subclass in order to decode a token id list into a token list. A token list is the string representation of each token id.
 Parameters
tokens – List of int representing the token ids.
 Returns
A list of decoded tokens.
 decode_tokens_to_str(tokens: List[int]) → str
Implemented by subclass in order to decode a token list into a string.
 Parameters
tokens – List of int representing the token ids.
 Returns
A decoded string.
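Before token ids are handed to decode_ids_to_tokens / decode_tokens_to_str, CTC decoding collapses consecutive repeats and drops blank tokens. A minimal greedy-collapse sketch (hypothetical helper, not NeMo's implementation):

```python
def ctc_collapse(token_ids, blank_id):
    """Collapse consecutive repeated ids, then drop blanks,
    as in greedy CTC decoding."""
    out, prev = [], None
    for t in token_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out
```

For example, ctc_collapse([1, 1, 0, 1, 2, 2], blank_id=0) yields [1, 1, 2]: the second run of 1s survives because a blank separates it from the first.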
 class nemo.collections.asr.parts.submodules.ctc_decoding.CTCBPEDecoding(decoding_cfg, tokenizer: nemo.collections.common.tokenizers.tokenizer_spec.TokenizerSpec)
Bases:
nemo.collections.asr.parts.submodules.ctc_decoding.AbstractCTCDecoding
Used for performing CTC autoregressive / nonautoregressive decoding of the logprobs for subword based models.
 Parameters
decoding_cfg –
A dictlike object which contains the following keyvalue pairs.
 strategy: str value which represents the type of decoding that can occur. Possible values are:
greedy (for greedy decoding).
beam (for DeepSpeed KenLM based decoding).
 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or word based timestamp mapping the output log-probabilities to discrete intervals of timestamps. The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated. Can take the following values: "char" for character/subword time stamps, "word" for word level time stamps and "all" (default) for both character level and word level time stamps.
 word_seperator: Str token representing the separator between words.
 preserve_alignments: Bool flag which preserves the history of logprobs generated during decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for logprobs in it. Here, logprobs is a torch.Tensor.
 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence scores. In order to obtain hypotheses with confidence scores, please utilize the ctc_decoder_predictions_tensor function with the preserve_frame_confidence flag set to True.
 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated during decoding. When set to true, the Hypothesis will contain the non-null value for frame_confidence in it. Here, frame_confidence is a List of floats.
 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for token_confidence in it. Here, token_confidence is a List of floats. The length of the list corresponds to the number of recognized tokens.
 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for word_confidence in it. Here, word_confidence is a List of floats. The length of the list corresponds to the number of recognized words.
 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded from the token_confidence.
 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence. Valid options are mean, min, max, prod.
 tdt_include_duration: Bool flag indicating that the duration confidence scores are to be calculated and attached to the regular frame confidence, making a TDT frame confidence element a pair: (prediction_confidence, duration_confidence).
 method_cfg: A dict-like object which contains the method name and settings to compute per-frame confidence scores.
 name: The method name (str). Supported values:
'max_prob' for using the maximum token probability as a confidence.
'entropy' for using a normalized entropy of a log-likelihood vector.
 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to entropy. Supported values:
'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided, the formula is the following: H_α = -sum_i(p^α_i * log(p^α_i)). Note that for this entropy, the alpha should comply with the following inequality: (log(V) + 2 - sqrt(log^2(V) + 4)) / (2 * log(V)) <= α <= (1 + log(V - 1)) / log(V - 1), where V is the model vocabulary size.
'tsallis' for the Tsallis entropy with the Boltzmann constant one. The Tsallis entropy formula is the following: H_α = 1/(α - 1) * (1 - sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/Tsallis_entropy
'renyi' for the Rényi entropy. The Rényi entropy formula is the following: H_α = 1/(1 - α) * log_2(sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0. When the alpha equals one, scaling is not applied to 'max_prob', and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i * log(p_i)).
 entropy_norm: A mapping of the entropy value to the interval [0, 1]. Supported values:
'lin' for using the linear mapping.
'exp' for using exponential mapping with linear shift.
 batch_dim_index: Index of the batch dimension of the targets and predictions parameters of the ctc_decoder_predictions_tensor methods. Can be either 0 or 1.
The config may further contain the following sub-dictionaries:
 "greedy":
preserve_alignments: Same as above, overrides the above value.
compute_timestamps: Same as above, overrides the above value.
preserve_frame_confidence: Same as above, overrides the above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
 "beam":
 beam_size: int, defining the beam size for beam search. Must be >= 1. If beam_size == 1, a cached greedy search will be performed, which might give slightly different results compared to the greedy search above.
 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the hypotheses after beam search has concluded. This flag is set by default.
 beam_alpha: float, the strength of the language model on the final score of a token. final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
 beam_beta: float, the strength of the sequence length penalty on the final score of a token. final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen). If the path is invalid (the file is not found at the path), a deferred error will be raised at the moment of beam search calculation, so that users may update / change the decoding strategy to point to the correct file.
tokenizer – NeMo tokenizer object, which inherits from TokenizerSpec.
 decode_ids_to_tokens(tokens: List[int]) → List[str]
Implemented by subclass in order to decode a token id list into a token list. A token list is the string representation of each token id.
 Parameters
tokens – List of int representing the token ids.
 Returns
A list of decoded tokens.
 decode_tokens_to_str(tokens: List[int]) → str
Implemented by subclass in order to decode a token list into a string.
 Parameters
tokens – List of int representing the token ids.
 Returns
A decoded string.
 class nemo.collections.asr.parts.submodules.ctc_greedy_decoding.GreedyCTCInfer(blank_id: int, preserve_alignments: bool = False, compute_timestamps: bool = False, preserve_frame_confidence: bool = False, confidence_method_cfg: Optional[omegaconf.DictConfig] = None)
Bases:
nemo.core.classes.common.Typing
,nemo.collections.asr.parts.utils.asr_confidence_utils.ConfidenceMethodMixin
A greedy CTC decoder.
Provides a common abstraction for sample level and batch level greedy decoding.
 Parameters
blank_index – int index of the blank token. Can be 0 or len(vocabulary).
preserve_alignments – Bool flag which preserves the history of logprobs generated during decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for logprobs in it. Here, logprobs is a torch.Tensor.
compute_timestamps – A bool flag, which determines whether to compute the character/subword, or word based timestamp mapping the output log-probabilities to discrete intervals of timestamps. The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
preserve_frame_confidence – Bool flag which preserves the history of per-frame confidence scores generated during decoding. When set to true, the Hypothesis will contain the non-null value for frame_confidence in it. Here, frame_confidence is a List of floats.
confidence_method_cfg –
A dictlike object which contains the method name and settings to compute perframe confidence scores.
 name: The method name (str).
 Supported values:
’max_prob’ for using the maximum token probability as a confidence.
’entropy’ for using a normalized entropy of a loglikelihood vector.
 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to entropy.
 Supported values:
 ’gibbs’ for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i(p^α_i * log(p^α_i)). Note that for this entropy, the alpha should comply with the following inequality: (log(V) + 2 - sqrt(log^2(V) + 4)) / (2 * log(V)) <= α <= (1 + log(V - 1)) / log(V - 1), where V is the model vocabulary size.
 ’tsallis’ for the Tsallis entropy with the Boltzmann constant one.
The Tsallis entropy formula is the following: H_α = 1/(α - 1) * (1 - sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/Tsallis_entropy
 ’renyi’ for the Rényi entropy.
The Rényi entropy formula is the following: H_α = 1/(1 - α) * log_2(sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0. When the alpha equals one, scaling is not applied to ‘max_prob’, and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i * log(p_i)).
 entropy_norm: A mapping of the entropy value to the interval [0,1].
 Supported values:
’lin’ for using the linear mapping.
’exp’ for using exponential mapping with linear shift.
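The 'entropy'/'gibbs' confidence with 'lin' normalization can be sketched for the alpha == 1 (Shannon) case: compute H = -sum_i p_i*log(p_i) and map it linearly to [0, 1] as 1 - H / log(V). This is a sketch of the formulas above, not NeMo's exact implementation:

```python
import math

def gibbs_confidence_lin(probs):
    """Shannon-entropy confidence mapped to [0, 1] via the linear norm.
    `probs` is a probability distribution over the vocabulary."""
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    h_max = math.log(len(probs))  # entropy of the uniform distribution
    return 1.0 - h / h_max
```

A one-hot distribution gives confidence 1.0; a uniform distribution gives 0.0.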
 forward(decoder_output: torch.Tensor, decoder_lengths: torch.Tensor)
Returns a list of hypotheses given an input batch of the encoder hidden embedding. Output tokens are generated autoregressively.
 Parameters
decoder_output – A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
decoder_lengths – list of int representing the length of each output sequence.
 Returns
packed list containing batch number of sentences (Hypotheses).
 property input_types
Returns definitions of module input ports.
 property output_types
Returns definitions of module output ports.
 class nemo.collections.asr.parts.submodules.ctc_beam_decoding.BeamCTCInfer(blank_id: int, beam_size: int, search_type: str = 'default', return_best_hypothesis: bool = True, preserve_alignments: bool = False, compute_timestamps: bool = False, beam_alpha: float = 1.0, beam_beta: float = 0.0, kenlm_path: Optional[str] = None, flashlight_cfg: Optional[nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig] = None, pyctcdecode_cfg: Optional[nemo.collections.asr.parts.submodules.ctc_beam_decoding.PyCTCDecodeConfig] = None)
Bases:
nemo.collections.asr.parts.submodules.ctc_beam_decoding.AbstractBeamCTCInfer
A beam search CTC decoder.
Provides a common abstraction for sample level and batch level beam decoding.
 Parameters
blank_index – int index of the blank token. Can be 0 or len(vocabulary).
preserve_alignments – Bool flag which preserves the history of logprobs generated during decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for logprobs in it. Here, logprobs is a torch.Tensor.
compute_timestamps – A bool flag, which determines whether to compute the character/subword, or word based timestamp mapping the output log-probabilities to discrete intervals of timestamps. The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
 default_beam_search(x: torch.Tensor, out_len: torch.Tensor) → List[Union[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis, nemo.collections.asr.parts.utils.rnnt_utils.NBestHypotheses]]
Open Seq2Seq Beam Search Algorithm (DeepSpeed)
 Parameters
x – Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length, and V is the vocabulary size. The tensor contains logprobabilities.
out_len – Tensor of shape [B], contains lengths of each sequence in the batch.
 Returns
A list of NBestHypotheses objects, one for each sequence in the batch.
 flashlight_beam_search(x: torch.Tensor, out_len: torch.Tensor) → List[Union[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis, nemo.collections.asr.parts.utils.rnnt_utils.NBestHypotheses]]
Flashlight Beam Search Algorithm. Should support Char and Subword models.
 Parameters
x – Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length, and V is the vocabulary size. The tensor contains logprobabilities.
out_len – Tensor of shape [B], contains lengths of each sequence in the batch.
 Returns
A list of NBestHypotheses objects, one for each sequence in the batch.
 forward(decoder_output: torch.Tensor, decoder_lengths: torch.Tensor) → Tuple[List[Union[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis, nemo.collections.asr.parts.utils.rnnt_utils.NBestHypotheses]]]
Returns a list of hypotheses given an input batch of the encoder hidden embedding. Output tokens are generated autoregressively.
 Parameters
decoder_output – A tensor of size (batch, timesteps, features).
decoder_lengths – list of int representing the length of each output sequence.
 Returns
packed list containing batch number of sentences (Hypotheses).
 set_decoding_type(decoding_type: str)
Sets the decoding type of the framework. Can support either char or subword models.
 Parameters
decoding_type – Str corresponding to decoding type. Only supports “char” and “subword”.
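The beam_alpha / beam_beta scoring used throughout the beam decoding configs above can be sketched directly from the documented formula (a hypothetical helper):

```python
def beam_final_score(acoustic_score, lm_score, seq_length,
                     beam_alpha=1.0, beam_beta=0.0):
    """final_score = acoustic_score + beam_alpha * lm_score
    + beam_beta * seq_length, per the beam config description above."""
    return acoustic_score + beam_alpha * lm_score + beam_beta * seq_length
```

Raising beam_alpha weights the language model more heavily; beam_beta rewards (or, if negative, penalizes) longer sequences.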
RNNT Decoding
 class nemo.collections.asr.parts.submodules.rnnt_decoding.RNNTDecoding(decoding_cfg, decoder, joint, vocabulary)
Bases:
nemo.collections.asr.parts.submodules.rnnt_decoding.AbstractRNNTDecoding
Used for performing RNNT autoregressive decoding of the Decoder+Joint network given the encoder state.
 Parameters
decoding_cfg –
A dictlike object which contains the following keyvalue pairs.
 strategy:
greedy, greedy_batch (for greedy decoding).
beam, tsd, alsd (for beam search decoding).
 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
 preserve_alignments: Bool flag which preserves the history of logprobs generated during
 confidence_cfg: A dictlike object which contains the following keyvalue pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize rnnt_decoder_predictions_tensor function with the preserve_frame_confidence flag set to True.
 preserve_frame_confidence: Bool flag which preserves the history of perframe confidence scores
 preserve_token_confidence: Bool flag which preserves the history of pertoken confidence scores
 preserve_word_confidence: Bool flag which preserves the history of perword confidence scores
 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
 aggregation: Which aggregation type to use for collapsing pertoken confidence into perword confidence.
 tdt_include_duration: Bool flag indicating that the duration confidence scores are to be calculated and
 method_cfg: A dictlike object which contains the method name and settings to compute perframe
confidence scores.
 name:
’max_prob’ for using the maximum token probability as a confidence.
’entropy’ for using a normalized entropy of a loglikelihood vector.
 entropy_type:
Which type of entropy to use (str). Used if confidence_method_cfg.name is set to entropy. Supported values:

 ’gibbs’ for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = sum_i((p^α_i)*log(p^α_i)). Note that for this entropy, the alpha should comply the following inequality: (log(V)+2sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V1))/log(V1) where V is the model vocabulary size.

 ’tsallis’ for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α1)*(1sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/Tsallis_entropy

 ’renyi’ for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1α)*log_2(sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy

 alpha:
 entropy_norm:
’lin’ for using the linear mapping.
’exp’ for using exponential mapping with linear shift.
The method name (str). Supported values:
Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0. When the alpha equals one, scaling is not applied to ‘max_prob’, and any entropy type behaves like the Shannon entropy: H = sum_i(p_i*log(p_i))
A mapping of the entropy value to the interval [0,1]. Supported values:
generated during decoding (sample / batched). When set to true, the Hypothesis will contain the nonnull value for frame_confidence in it. Here, alignments is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores. U is the number of target tokens for the current timestep Ti.
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the nonnull value for token_confidence in it. Here, token_confidence is a List of floats.
The length of the list corresponds to the number of recognized tokens.
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for word_confidence in it. Here, word_confidence is a List of floats.
The length of the list corresponds to the number of recognized words.
from the token_confidence.
Valid options are mean, min, max, prod.
attached to the regular frame confidence, making TDT frame confidence element a pair: (prediction_confidence, duration_confidence).
str value which represents the type of decoding that can occur. Possible values are:
tokens as well as the decoded string. Default is False in order to avoid double decoding unless required.
decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for alignments in it. Here, alignments is a List of List of Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
In order to obtain this hypothesis, please utilize rnnt_decoder_predictions_tensor function with the return_hypotheses flag set to True.
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary. U is the number of target tokens for the current timestep Ti.
The config may further contain the following sub-dictionaries:
 ”greedy”:

  max_symbols: int, describing the maximum number of target tokens to decode per
timestep during greedy decoding. Setting to larger values allows longer sentences to be decoded, at the cost of increased execution time.
preserve_frame_confidence: Same as above, overrides above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
 ”beam”:

  beam_size: int, defining the beam size for beam search. Must be >= 1. If beam_size == 1, will perform cached greedy search. This might produce slightly different results compared to the greedy search above.
  score_norm: optional bool, whether to normalize the returned beam score in the hypotheses. Set to True by default.
  return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the hypotheses after beam search has concluded. This flag is set by default.
  tsd_max_sym_exp: optional int, determines the number of symmetric expansions of the target symbols per timestep of the acoustic model. Larger values will allow longer sentences to be decoded, at increased cost to execution time.
  alsd_max_target_len: optional int or float, determines the potential maximum target sequence length. If an integer is provided, it can decode sequences of that particular maximum length. If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len), where seq_len is the length of the acoustic model output (T).
 NOTE:
If a float is provided, it can be greater than 1! By default, a float of 2.0 is used so that a target sequence can be at most twice as long as the acoustic model output length T.
  maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient, and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
  maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and it is advised to keep this as 1 in order to reduce expensive beam search cost later. int >= 0.
  maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size. Effectively, the number of hypotheses = beam_size + maes_expansion_beta. Must be an int >= 0, and affects the speed of inference since large values will perform a large beam search in the next step.
  maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions. The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v]) where v is all vocabulary indices in the Vocab set and max_log_prob is the “most” likely token to be predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for expansion apart from the “most likely” candidate. Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value, thereby reducing speed but potentially improving accuracy). This is a hyperparameter to be experimentally tuned on a validation set.
softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
decoder – The Decoder/Prediction network module.
joint – The Joint network module.
vocabulary – The vocabulary (excluding the RNNT blank token) which will be used for decoding.
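To make the entropy-based confidence options above concrete, here is a small self-contained sketch (not NeMo's implementation) of the α == 1 case: with entropy_norm = 'lin', every entropy type reduces to the linearly normalized Shannon entropy, and the Rényi formula quoted above can be evaluated directly.

```python
import math

def shannon_confidence(probs):
    # Linear-normalized Shannon entropy confidence: 1 - H/log(V),
    # the behaviour of any entropy_type when alpha == 1 and entropy_norm == 'lin'.
    v = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - h / math.log(v)

def renyi_entropy(probs, alpha):
    # H_a = 1/(1-a) * log_2(sum_i(p_i^a)), per the formula above (alpha != 1).
    return 1.0 / (1.0 - alpha) * math.log2(sum(p ** alpha for p in probs))
```

A one-hot distribution yields confidence 1.0 and a uniform one yields 0.0; for a uniform distribution the Rényi entropy equals log_2(V) for any α.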
 decode_ids_to_langs(tokens: List[int]) → List[str]
Decode a token id list into language ID (LID) list.
 Parameters
tokens – List of int representing the token ids.
 Returns
A list of decoded LIDS.
 decode_ids_to_tokens(tokens: List[int]) → List[str]
Implemented by subclass in order to decode a token id list into a token list. A token list is the string representation of each token id.
 Parameters
tokens – List of int representing the token ids.
 Returns
A list of decoded tokens.
 decode_tokens_to_lang(tokens: List[int]) → str
Compute the most likely language ID (LID) string given the tokens.
 Parameters
tokens – List of int representing the token ids.
 Returns
A decoded LID string.
 decode_tokens_to_str(tokens: List[int]) → str
Implemented by subclass in order to decode a token list into a string.
 Parameters
tokens – List of int representing the token ids.
 Returns
A decoded string.
 class nemo.collections.asr.parts.submodules.rnnt_decoding.RNNTBPEDecoding(decoding_cfg, decoder, joint, tokenizer: nemo.collections.common.tokenizers.tokenizer_spec.TokenizerSpec)
Bases:
nemo.collections.asr.parts.submodules.rnnt_decoding.AbstractRNNTDecoding
Used for performing RNNT autoregressive decoding of the Decoder+Joint network given the encoder state.
 Parameters
decoding_cfg –
A dict-like object which contains the following key-value pairs.
 strategy:
str value which represents the type of decoding that can occur. Possible values are:
greedy, greedy_batch (for greedy decoding).
beam, tsd, alsd (for beam search decoding).
 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
tokens as well as the decoded string. Default is False in order to avoid double decoding unless required.
 preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for alignments in it. Here, alignments is a List of List of Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
In order to obtain this hypothesis, please utilize the rnnt_decoder_predictions_tensor function with the return_hypotheses flag set to True.
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary. U is the number of target tokens for the current timestep Ti.
 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrete intervals of timestamps. The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
 compute_langs: a bool flag, which allows to compute language id (LID) information per token,
word, and the entire sample (most likely language id). The LIDS will be available in the returned Hypothesis object as a dictionary.
 rnnt_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
Can take the following values: “char” for character/subword time stamps, “word” for word level time stamps, and “all” (default) for both character level and word level time stamps.
word_seperator: Str token representing the separator between words.
 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for frame_confidence in it. Here, frame_confidence is a List of List of floats.
 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize the rnnt_decoder_predictions_tensor function with the preserve_frame_confidence flag set to True.
 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for frame_confidence in it. Here, frame_confidence is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores. U is the number of target tokens for the current timestep Ti.
 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for token_confidence in it. Here, token_confidence is a List of floats.
The length of the list corresponds to the number of recognized tokens.
 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for word_confidence in it. Here, word_confidence is a List of floats.
The length of the list corresponds to the number of recognized words.
 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the token_confidence.
 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are mean, min, max, prod.
 tdt_include_duration: Bool flag indicating that the duration confidence scores are to be calculated and
attached to the regular frame confidence, making TDT frame confidence element a pair: (prediction_confidence, duration_confidence).
 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
 name:
The method name (str). Supported values:
’max_prob’ for using the maximum token probability as a confidence.
’entropy’ for using a normalized entropy of a log-likelihood vector.
 entropy_type: Which type of entropy to use (str).
Used if confidence_method_cfg.name is set to entropy. Supported values:

 ’gibbs’ for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)). Note that for this entropy, the alpha should comply with the following inequality: (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1), where V is the model vocabulary size.

 ’tsallis’ for the Tsallis entropy with the Boltzmann constant one.
The Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/Tsallis_entropy

 ’renyi’ for the Rényi entropy.
The Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy

 alpha: Power scale for log-softmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to ‘max_prob’, and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
 entropy_norm: A mapping of the entropy value to the interval [0,1]. Supported values:
’lin’ for using the linear mapping.
’exp’ for using exponential mapping with linear shift.
The config may further contain the following sub-dictionaries:
 ”greedy”:

  max_symbols: int, describing the maximum number of target tokens to decode per
timestep during greedy decoding. Setting to larger values allows longer sentences to be decoded, at the cost of increased execution time.
preserve_frame_confidence: Same as above, overrides above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
 ”beam”:

  beam_size: int, defining the beam size for beam search. Must be >= 1. If beam_size == 1, will perform cached greedy search. This might produce slightly different results compared to the greedy search above.
  score_norm: optional bool, whether to normalize the returned beam score in the hypotheses. Set to True by default.
  return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the hypotheses after beam search has concluded.
  tsd_max_sym_exp: optional int, determines the number of symmetric expansions of the target symbols per timestep of the acoustic model. Larger values will allow longer sentences to be decoded, at increased cost to execution time.
  alsd_max_target_len: optional int or float, determines the potential maximum target sequence length. If an integer is provided, it can decode sequences of that particular maximum length. If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len), where seq_len is the length of the acoustic model output (T).
 NOTE:
If a float is provided, it can be greater than 1! By default, a float of 2.0 is used so that a target sequence can be at most twice as long as the acoustic model output length T.
  maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient, and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
  maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and it is advised to keep this as 1 in order to reduce expensive beam search cost later. int >= 0.
  maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size. Effectively, the number of hypotheses = beam_size + maes_expansion_beta. Must be an int >= 0, and affects the speed of inference since large values will perform a large beam search in the next step.
  maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions. The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v]) where v is all vocabulary indices in the Vocab set and max_log_prob is the “most” likely token to be predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for expansion apart from the “most likely” candidate. Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value, thereby reducing speed but potentially improving accuracy). This is a hyperparameter to be experimentally tuned on a validation set.
softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
decoder – The Decoder/Prediction network module.
joint – The Joint network module.
tokenizer – The tokenizer which will be used for decoding.
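The nested layout of decoding_cfg described above can be sketched as a plain dictionary (key names mirror the documentation; in practice this would be an omegaconf DictConfig, and the values here are illustrative, not NeMo's canonical defaults):

```python
# Hypothetical decoding_cfg sketch; values chosen only for illustration.
decoding_cfg = {
    "strategy": "beam",                    # greedy | greedy_batch | beam | tsd | alsd
    "compute_hypothesis_token_set": False,
    "preserve_alignments": False,
    "confidence_cfg": {
        "preserve_frame_confidence": True,
        "exclude_blank": True,
        "aggregation": "prod",             # mean | min | max | prod
        "method_cfg": {
            "name": "entropy",             # max_prob | entropy
            "entropy_type": "tsallis",     # gibbs | tsallis | renyi
            "alpha": 0.33,
            "entropy_norm": "exp",         # lin | exp
        },
    },
    "beam": {
        "beam_size": 4,
        "score_norm": True,
        "return_best_hypothesis": True,
    },
}
```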
 decode_hypothesis(hypotheses_list: List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]) → List[Union[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis, nemo.collections.asr.parts.utils.rnnt_utils.NBestHypotheses]]
Decode a list of hypotheses into a list of strings. Overrides the super() method optionally adding lang information
 Parameters
hypotheses_list – List of Hypothesis.
 Returns
A list of strings.
 decode_ids_to_langs(tokens: List[int]) → List[str]
Decode a token id list into language ID (LID) list.
 Parameters
tokens – List of int representing the token ids.
 Returns
A list of decoded LIDS.
 decode_ids_to_tokens(tokens: List[int]) → List[str]
Implemented by subclass in order to decode a token id list into a token list. A token list is the string representation of each token id.
 Parameters
tokens – List of int representing the token ids.
 Returns
A list of decoded tokens.
 decode_tokens_to_lang(tokens: List[int]) → str
Compute the most likely language ID (LID) string given the tokens.
 Parameters
tokens – List of int representing the token ids.
 Returns
A decoded LID string.
 decode_tokens_to_str(tokens: List[int]) → str
Implemented by subclass in order to decode a token list into a string.
 Parameters
tokens – List of int representing the token ids.
 Returns
A decoded string.
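A toy illustration of the decode_* contract above, with a made-up three-entry sentencepiece-style vocabulary (purely hypothetical, standing in for a subclass's tokenizer):

```python
# Hypothetical vocabulary; '▁' marks a word boundary, sentencepiece-style.
VOCAB = ["▁he", "llo", "▁world"]

def decode_ids_to_tokens(ids):
    # Token ids -> the string representation of each token.
    return [VOCAB[i] for i in ids]

def decode_tokens_to_str(tokens):
    # Token list -> final string: join, then turn word markers into spaces.
    return "".join(tokens).replace("▁", " ").strip()
```

Chaining the two gives the decoded transcript, which is what the real subclasses do with their tokenizer or vocabulary.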
 class nemo.collections.asr.parts.submodules.rnnt_greedy_decoding.GreedyRNNTInfer(decoder_model: nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTDecoder, joint_model: nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTJoint, blank_index: int, max_symbols_per_step: Optional[int] = None, preserve_alignments: bool = False, preserve_frame_confidence: bool = False, confidence_method_cfg: Optional[omegaconf.DictConfig] = None)
Bases:
nemo.collections.asr.parts.submodules.rnnt_greedy_decoding._GreedyRNNTInfer
A greedy transducer decoder.
Sequence level greedy decoding, performed autoregressively.
 Parameters
decoder_model – rnnt_utils.AbstractRNNTDecoder implementation.
joint_model – rnnt_utils.AbstractRNNTJoint implementation.
blank_index – int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step – Optional int. The maximum number of symbols that can be added to a sequence in a single time step; if set to None then there is no limit.
preserve_alignments –
Bool flag which preserves the history of alignments generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for alignments in it. Here, alignments is a List of List of Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary. U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence –
Bool flag which preserves the history of per-frame confidence scores generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for frame_confidence in it. Here, frame_confidence is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores. U is the number of target tokens for the current timestep Ti.
confidence_method_cfg –
A dict-like object which contains the method name and settings to compute per-frame confidence scores.
 name: The method name (str).
 Supported values:
’max_prob’ for using the maximum token probability as a confidence.
’entropy’ for using a normalized entropy of a log-likelihood vector.
 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to entropy.
 Supported values:
 ’gibbs’ for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)). Note that for this entropy, the alpha should comply with the following inequality: (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1), where V is the model vocabulary size.
 ’tsallis’ for the Tsallis entropy with the Boltzmann constant one.
The Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/Tsallis_entropy
 ’renyi’ for the Rényi entropy.
The Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
 alpha: Power scale for log-softmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to ‘max_prob’, and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
 entropy_norm: A mapping of the entropy value to the interval [0,1].
 Supported values:
’lin’ for using the linear mapping.
’exp’ for using exponential mapping with linear shift.
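The per-frame greedy loop this class performs can be sketched in a few lines of plain Python (the joint callable here is a toy stand-in, not NeMo's API): for each encoder frame, emit argmax labels until <blank> appears or max_symbols_per_step is reached.

```python
def greedy_transducer(frames, joint, blank_index, max_symbols_per_step=None):
    # Sequence-level greedy transducer decoding over encoder frames.
    hypothesis = []
    for frame in frames:  # iterate over the Acoustic Length T
        symbols_added = 0
        while max_symbols_per_step is None or symbols_added < max_symbols_per_step:
            logits = joint(frame, hypothesis)  # fuse frame with prediction state
            label = max(range(len(logits)), key=logits.__getitem__)
            if label == blank_index:  # blank advances to the next frame
                break
            hypothesis.append(label)
            symbols_added += 1
    return hypothesis
```

The max_symbols_per_step=None case mirrors the constructor default above: no cap on symbols emitted per time step.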
 forward(encoder_output: torch.Tensor, encoded_lengths: torch.Tensor, partial_hypotheses: Optional[List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]] = None)
Returns a list of hypotheses given an input batch of the encoder hidden embedding. Output token is generated autoregressively.
 Parameters
encoder_output – A tensor of size (batch, features, timesteps).
encoded_lengths – list of int representing the length of each output sequence.
 Returns
packed list containing batch number of sentences (Hypotheses).
 class nemo.collections.asr.parts.submodules.rnnt_greedy_decoding.GreedyBatchedRNNTInfer(decoder_model: nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTDecoder, joint_model: nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTJoint, blank_index: int, max_symbols_per_step: Optional[int] = None, preserve_alignments: bool = False, preserve_frame_confidence: bool = False, confidence_method_cfg: Optional[omegaconf.DictConfig] = None, loop_labels: bool = True, use_cuda_graph_decoder: bool = False)
Bases:
nemo.collections.asr.parts.submodules.rnnt_greedy_decoding._GreedyRNNTInfer
,nemo.collections.common.parts.optional_cuda_graphs.WithOptionalCudaGraphs
A batch level greedy transducer decoder.
Batch level greedy decoding, performed autoregressively.
 Parameters
decoder_model – rnnt_utils.AbstractRNNTDecoder implementation.
joint_model – rnnt_utils.AbstractRNNTJoint implementation.
blank_index – int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step – Optional int. The maximum number of symbols that can be added to a sequence in a single time step; if set to None then there is no limit.
preserve_alignments –
Bool flag which preserves the history of alignments generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for alignments in it. Here, alignments is a List of List of Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary. U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence –
Bool flag which preserves the history of per-frame confidence scores generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain the non-null value for frame_confidence in it. Here, frame_confidence is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores. U is the number of target tokens for the current timestep Ti.
confidence_method_cfg –
A dict-like object which contains the method name and settings to compute per-frame confidence scores.
 name: The method name (str).
 Supported values:
’max_prob’ for using the maximum token probability as a confidence.
’entropy’ for using a normalized entropy of a log-likelihood vector.
 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to entropy.
 Supported values:
 ’gibbs’ for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)). Note that for this entropy, the alpha should comply with the following inequality: (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1), where V is the model vocabulary size.
 ’tsallis’ for the Tsallis entropy with the Boltzmann constant one.
The Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/Tsallis_entropy
 ’renyi’ for the Rényi entropy.
The Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)), where α is a parameter. When α == 1, it works like the Gibbs entropy. More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
 alpha: Power scale for log-softmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to ‘max_prob’, and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
 entropy_norm: A mapping of the entropy value to the interval [0,1].
 Supported values:
’lin’ for using the linear mapping.
’exp’ for using exponential mapping with linear shift.
loop_labels – Switching between decoding algorithms. Both algorithms produce equivalent results. loop_labels=True (default) is faster (especially for large batches) but can use a bit more memory (negligible overhead compared to the amount of memory used by the encoder). loop_labels=False is an implementation of a traditional decoding algorithm, which iterates over frames (encoder output vectors) and, in the inner loop, decodes labels for the current frame one by one, stopping when <blank> is found. loop_labels=True iterates over labels, on each step finding the next non-blank label (evaluating the Joint multiple times in the inner loop); it uses the minimal possible number of calls to the prediction network (with the maximum possible batch size), which makes it especially useful for scaling the prediction network.
use_cuda_graph_decoder – whether CUDA graphs should be enabled for decoding (currently recommended only for inference).
 disable_cuda_graphs()
Disable CUDA graphs (e.g., for decoding in training)
 forward(encoder_output: torch.Tensor, encoded_lengths: torch.Tensor, partial_hypotheses: Optional[List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]] = None)
Returns a list of hypotheses given an input batch of the encoder hidden embedding. Output token is generated autoregressively.
 Parameters
encoder_output – A tensor of size (batch, features, timesteps).
encoded_lengths – list of int representing the length of each output sequence.
 Returns
packed list containing batch number of sentences (Hypotheses).
 maybe_enable_cuda_graphs()
Enable CUDA graphs (if allowed)
 class nemo.collections.asr.parts.submodules.rnnt_beam_decoding.BeamRNNTInfer(decoder_model: nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTDecoder, joint_model: nemo.collections.asr.modules.rnnt_abstract.AbstractRNNTJoint, beam_size: int, search_type: str = 'default', score_norm: bool = True, return_best_hypothesis: bool = True, tsd_max_sym_exp_per_step: Optional[int] = 50, alsd_max_target_len: Union[int, float] = 1.0, nsc_max_timesteps_expansion: int = 1, nsc_prefix_alpha: int = 1, maes_num_steps: int = 2, maes_prefix_alpha: int = 1, maes_expansion_gamma: float = 2.3, maes_expansion_beta: int = 2, language_model: Optional[Dict[str, Any]] = None, softmax_temperature: float = 1.0, preserve_alignments: bool = False, ngram_lm_model: Optional[str] = None, ngram_lm_alpha: float = 0.0, hat_subtract_ilm: bool = False, hat_ilm_weight: float = 0.0)
Bases:
nemo.core.classes.common.Typing
Beam Search implementation ported from ESPNet implementation  https://github.com/espnet/espnet/blob/master/espnet/nets/beam_search_transducer.py
Sequence level beam decoding or batched-beam decoding, performed autoregressively depending on the search type chosen.
 Parameters
decoder_model – rnnt_utils.AbstractRNNTDecoder implementation.
joint_model – rnnt_utils.AbstractRNNTJoint implementation.
beam_size –
number of beams for beam search. Must be a positive integer >= 1. If beam size is 1, defaults to stateful greedy search. This greedy search might result in slightly different results than the greedy results obtained by GreedyRNNTInfer due to implementation differences.
For accurate greedy results, please use GreedyRNNTInfer or GreedyBatchedRNNTInfer.
search_type –
str representing the type of beam search to perform. Must be one of [‘beam’, ‘tsd’, ‘alsd’]. ‘nsc’ is currently not supported. The following arguments are specific to the chosen search type.
Algorithm used:
 beam – basic beam search strategy. Larger beams generally result in better decoding,
however the time required for the search also grows steadily.
 tsd – time synchronous decoding. Please refer to the paper:
[Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented.
Time synchronous decoding (TSD) execution time grows by the factor T * max_symmetric_expansions. For longer sequences, T is greater, and can therefore take a long time for beams to obtain good results. This also requires greater memory to execute.
 alsd – alignment-length synchronous decoding. Please refer to the paper:
[Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented.
Alignment-length synchronous decoding (ALSD) execution time is faster than TSD, with a growth factor of T + U_max, where U_max is the maximum target length expected during execution. Generally, T + U_max < T * max_symmetric_expansions. However, ALSD beams are non-unique, therefore it is required to use larger beam sizes to achieve the same (or close to the same) decoding accuracy as TSD. For a given decoding accuracy, it is possible to attain faster decoding via ALSD than TSD.
 maes – modified adaptive expansion search. Please refer to the paper:
[Accelerating RNN Transducer Inference via Adaptive Expansion Search](https://ieeexplore.ieee.org/document/9250505)
Modified Adaptive Expansion Search (mAES) execution time is adaptive w.r.t. the number of expansions (for tokens) required per timestep. The number of expansions can usually be constrained to 1 or 2, and in most cases 2 is sufficient. This beam search technique can possibly obtain superior WER while sacrificing some evaluation time.
score_norm – bool, whether to normalize the scores of the log probabilities.
return_best_hypothesis – bool, decides whether to return a single hypothesis (the best out of N), or all N hypotheses (sorted with best score first). The container class changes based on this flag: when set to True (default), returns a single Hypothesis; when set to False, returns an NBestHypotheses container, which contains a list of Hypothesis.
tsd_max_sym_exp_per_step – Used for search_type=tsd. The maximum symmetric expansions allowed per timestep during beam search. Larger values should be used to attempt decoding of longer sequences, but this in turn increases execution time and memory usage.
alsd_max_target_len – Used for search_type=alsd. The maximum expected target sequence length during beam search. Larger values allow decoding of longer sequences at the expense of execution time and memory.
The following two flags are placeholders and unused until the nsc implementation is stabilized:
nsc_max_timesteps_expansion – Unused int.
nsc_prefix_alpha – Unused int.
mAES flags:
maes_num_steps – Number of adaptive steps to take. From the paper, 2 steps is generally sufficient. int > 1.
maes_prefix_alpha – Maximum prefix length in prefix search. Must be an integer, and is advised to keep this as 1 in order to reduce expensive beam search cost later. int >= 0.
maes_expansion_beta – Maximum number of prefix expansions allowed, in addition to the beam size. Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0, and affects the speed of inference since large values will perform large beam search in the next step.
maes_expansion_gamma – Float pruning threshold used in the prunebyvalue step when computing the expansions. The default (2.3) is selected from the paper. It performs a comparison (max_log_prob  gamma <= log_prob[v]) where v is all vocabulary indices in the Vocab set and max_log_prob is the “most” likely token to be predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for expansion apart from the “most likely” candidate. Lower values will reduce the number of expansions (by increasing pruningbyvalue, thereby improving speed but hurting accuracy). Higher values will increase the number of expansions (by reducing pruningbyvalue, thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally tuned on a validation set.
softmax_temperature – Scales the logits of the joint prior to computing log_softmax.
preserve_alignments –
Bool flag which preserves the history of alignments generated during beam decoding. When set to true, the Hypothesis will contain a non-null value for alignments. Here, alignments is a List of Lists of Tensors (of length V + 1).
The length of the list corresponds to the Acoustic Length (T). Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary. U is the number of target tokens for the current timestep Ti.
NOTE: preserve_alignments is an invalid argument for any search_type other than basic beam search.
ngram_lm_model – str The path to the N-gram LM
ngram_lm_alpha – float Alpha weight of the N-gram LM
tokens_type – str Tokenization type [‘subword’, ‘char’]
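The prune-by-value comparison described for maes_expansion_gamma can be sketched in plain Python. This is a toy illustration of the documented inequality (max_log_prob - gamma <= log_prob[v]), not the NeMo implementation; the function name and the toy distribution are made up:

```python
import math

def prune_by_value(log_probs, gamma):
    """Keep vocabulary indices whose log-probability lies within `gamma`
    of the most likely token: max_log_prob - gamma <= log_prob[v]."""
    max_log_prob = max(log_probs)
    return [v for v, lp in enumerate(log_probs) if max_log_prob - gamma <= lp]

# Toy distribution over a 4-token vocabulary (log-probabilities).
log_probs = [math.log(p) for p in [0.70, 0.15, 0.10, 0.05]]

# A small gamma keeps only candidates close to the best token;
# a larger gamma admits more expansion candidates (slower, possibly more accurate).
few = prune_by_value(log_probs, gamma=1.0)
many = prune_by_value(log_probs, gamma=2.3)
assert len(few) <= len(many)
```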
 align_length_sync_decoding(h: torch.Tensor, encoded_lengths: torch.Tensor, partial_hypotheses: Optional[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis] = None) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Alignment-length synchronous beam search implementation. Based on https://ieeexplore.ieee.org/document/9053040
 Parameters
h – Encoded speech features (1, T_max, D_enc)
 Returns
N-best decoding results
 Return type
nbest_hyps
 compute_ngram_score(current_lm_state: kenlm.State, label: int) → Tuple[float, kenlm.State]
Score computation for kenlm ngram language model.
 default_beam_search(h: torch.Tensor, encoded_lengths: torch.Tensor, partial_hypotheses: Optional[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis] = None) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Beam search implementation.
 Parameters
h – Encoded speech features (1, T_max, D_enc)
 Returns
N-best decoding results
 Return type
nbest_hyps
 greedy_search(h: torch.Tensor, encoded_lengths: torch.Tensor, partial_hypotheses: Optional[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis] = None) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Greedy search implementation for transducer. Generic case when beam size = 1. Results might differ slightly due to implementation details as compared to GreedyRNNTInfer and GreedyBatchRNNTInfer.
 Parameters
h – Encoded speech features (1, T_max, D_enc)
 Returns
1-best decoding results
 Return type
hyp
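The greedy (beam size = 1) transducer loop can be sketched with a toy joint function. Everything here — greedy_transducer_sketch, step_fn, the token values, and max_symbols — is hypothetical, standing in for the real GreedyRNNTInfer machinery:

```python
def greedy_transducer_sketch(enc_frames, step_fn, blank_id=0, max_symbols=3):
    """Toy greedy transducer decode: at each encoder timestep, keep emitting
    the argmax label until blank is predicted (or max_symbols is reached),
    then advance to the next timestep."""
    hyp = []
    prev = blank_id
    for frame in enc_frames:
        for _ in range(max_symbols):
            label = step_fn(frame, prev)
            if label == blank_id:
                break  # blank advances time without emitting
            hyp.append(label)
            prev = label
    return hyp

# Toy joint: emit the frame's "token" once, then predict blank.
step_fn = lambda frame, prev: frame if prev != frame else 0
hyp = greedy_transducer_sketch([1, 2, 2], step_fn)
assert hyp == [1, 2]
```

The inner loop is why a transducer can emit more than one token per acoustic frame, unlike CTC.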
 property input_types
Returns definitions of module input ports.
 modified_adaptive_expansion_search(h: torch.Tensor, encoded_lengths: torch.Tensor, partial_hypotheses: Optional[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis] = None) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Based on/modified from https://ieeexplore.ieee.org/document/9250505
 Parameters
h – Encoded speech features (1, T_max, D_enc)
 Returns
N-best decoding results
 Return type
nbest_hyps
 property output_types
Returns definitions of module output ports.
 prefix_search(hypotheses: List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis], enc_out: torch.Tensor, prefix_alpha: int) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Prefix search for NSC and mAES strategies. Based on https://arxiv.org/pdf/1211.3711.pdf
 recombine_hypotheses(hypotheses: List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Recombine hypotheses with equivalent output sequence.
 Parameters
hypotheses (list) – list of hypotheses
 Returns
list of recombined hypotheses
 Return type
final (list)
 resolve_joint_output(enc_out: torch.Tensor, dec_out: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor]
Resolve output types for RNNT and HAT joint models
 sort_nbest(hyps: List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Sort hypotheses by score or score given sequence length.
 Parameters
hyps – list of hypotheses
 Returns
sorted list of hypotheses
 Return type
hyps
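The two orderings sort_nbest chooses between — raw score versus score normalized by sequence length — can be illustrated with a minimal stand-in for Hypothesis (Hyp and sort_nbest_sketch are illustrative names, not the NeMo API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hyp:  # minimal stand-in for rnnt_utils.Hypothesis
    score: float
    y_sequence: List[int] = field(default_factory=list)

def sort_nbest_sketch(hyps, score_norm=True):
    """Sort best-first: by raw score, or by score normalized by sequence length."""
    if score_norm:
        return sorted(hyps, key=lambda h: h.score / max(len(h.y_sequence), 1), reverse=True)
    return sorted(hyps, key=lambda h: h.score, reverse=True)

hyps = [Hyp(score=-4.0, y_sequence=[1, 2, 3, 4]), Hyp(score=-3.0, y_sequence=[1, 2])]
by_score = sort_nbest_sketch(hyps, score_norm=False)  # raw score prefers the shorter hypothesis
by_norm = sort_nbest_sketch(hyps, score_norm=True)    # normalization prefers the longer one
```

Length normalization counteracts the bias of log-probability scores toward shorter sequences.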
 time_sync_decoding(h: torch.Tensor, encoded_lengths: torch.Tensor, partial_hypotheses: Optional[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis] = None) → List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]
Time-synchronous beam search implementation. Based on https://ieeexplore.ieee.org/document/9053040
 Parameters
h – Encoded speech features (1, T_max, D_enc)
 Returns
N-best decoding results
 Return type
nbest_hyps
Hypotheses
 class nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis(score: float, y_sequence: typing.Union[typing.List[int], torch.Tensor], text: typing.Optional[str] = None, dec_out: typing.Optional[typing.List[torch.Tensor]] = None, dec_state: typing.Optional[typing.Union[typing.List[typing.List[torch.Tensor]], typing.List[torch.Tensor]]] = None, timestep: typing.Union[typing.List[int], torch.Tensor] = <factory>, alignments: typing.Optional[typing.Union[typing.List[int], typing.List[typing.List[int]]]] = None, frame_confidence: typing.Optional[typing.Union[typing.List[float], typing.List[typing.List[float]]]] = None, token_confidence: typing.Optional[typing.List[float]] = None, word_confidence: typing.Optional[typing.List[float]] = None, length: typing.Union[int, torch.Tensor] = 0, y: typing.Optional[typing.List[torch.tensor]] = None, lm_state: typing.Optional[typing.Union[typing.Dict[str, typing.Any], typing.List[typing.Any]]] = None, lm_scores: typing.Optional[torch.Tensor] = None, ngram_lm_state: typing.Optional[typing.Union[typing.Dict[str, typing.Any], typing.List[typing.Any]]] = None, tokens: typing.Optional[typing.Union[typing.List[int], torch.Tensor]] = None, last_token: typing.Optional[torch.Tensor] = None)
Bases:
object
Hypothesis class for beam search algorithms.
score: A float score obtained from an AbstractRNNTDecoder module’s score_hypothesis method.
y_sequence: Either a sequence of integer ids pointing to some vocabulary, or a packed torch.Tensor behaving in the same manner. dtype must be torch.Long in the latter case.
dec_state: A list (or list of lists) of LSTM-RNN decoder states. Can be None.
text: (Optional) A decoded string after processing via CTC / RNNT decoding (removing the CTC / RNNT blank tokens, and optionally merging word-pieces). Should be used as the decoded string for Word Error Rate calculation.
timestep: (Optional) A list of integer indices representing at which index in the decoding process the token appeared. Should be of the same length as the number of non-blank tokens.
alignments: (Optional) Represents the CTC / RNNT token alignments as integer tokens along an axis of time T (for CTC) or Time x Target (T x U) (for RNNT). For CTC, represented as a single list of integer indices. For RNNT, represented as a dangling list of lists of integer indices. The outer list represents the Time dimension (T); the inner list represents the Target dimension (U). The set of valid indices includes the CTC / RNNT blank token in order to represent alignments.
frame_confidence: (Optional) Represents the CTC / RNNT per-frame confidence scores as token probabilities along an axis of time T (for CTC) or Time x Target (T x U) (for RNNT). For CTC, represented as a single list of floats. For RNNT, represented as a dangling list of lists of floats. The outer list represents the Time dimension (T); the inner list represents the Target dimension (U).
token_confidence: (Optional) Represents the CTC / RNNT per-token confidence scores as token probabilities along an axis of Target U. Represented as a single list of floats.
word_confidence: (Optional) Represents the CTC / RNNT per-word confidence scores as token probabilities along an axis of Target U. Represented as a single list of floats.
length: Represents the length of the sequence (the original length without padding); otherwise defaults to 0.
y: (Unused) A list of torch.Tensors representing the list of hypotheses.
lm_state: (Unused) A dictionary state cache used by an external Language Model.
lm_scores: (Unused) Score of the external Language Model.
ngram_lm_state: (Optional) State of the external n-gram Language Model.
tokens: (Optional) A list of decoded tokens (can be characters or word-pieces).
last_token: (Optional) A token or batch of tokens which was predicted in the last step.
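The CTC versus RNNT layouts of the alignments field described above can be illustrated with plain lists. The token ids and blank_id below are made up for illustration:

```python
blank_id = 0  # hypothetical blank token index

# CTC: a single flat list over time T; blank tokens appear inline.
ctc_alignment = [5, 5, blank_id, 7, blank_id]

# RNNT: a dangling list of lists — the outer index is the Time dimension (T),
# the inner list is the Target dimension (U) for that timestep; each timestep
# ends with a blank that advances time.
rnnt_alignment = [[5, blank_id], [blank_id], [7, 7, blank_id]]

# CTC entries are single token ids; RNNT entries are per-timestep lists.
assert all(isinstance(t, int) for t in ctc_alignment)
assert all(isinstance(step, list) for step in rnnt_alignment)
```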
 class nemo.collections.asr.parts.utils.rnnt_utils.NBestHypotheses(n_best_hypotheses: Optional[List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]])
Bases: object
List of N best hypotheses
Adapter Networks
 class nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.MultiHeadAttentionAdapter(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.parts.submodules.multi_head_attention.MultiHeadAttention
,nemo.collections.common.parts.adapter_modules.AdapterModuleUtil
Multi-Head Attention layer of the Transformer.
 Parameters
n_head (int) – number of heads
n_feat (int) – size of the features
dropout_rate (float) – dropout rate
proj_dim (int, optional) – Optional integer value for projection before computing attention. If None, there is no projection (equivalent to proj_dim = n_feat). If > 0, n_feat will be projected to proj_dim before calculating attention. If < 0, proj_dim will equal n_head, so that each head has a projected dimension of 1.
 forward(query, key, value, mask, pos_emb=None, cache=None)
Compute ‘Scaled Dot Product Attention’.
 Parameters
query (torch.Tensor) – (batch, time1, size)
key (torch.Tensor) – (batch, time2, size)
value (torch.Tensor) – (batch, time2, size)
mask (torch.Tensor) – (batch, time1, time2)
cache (torch.Tensor) – (batch, time_cache, size)
 Returns
transformed value (batch, time1, d_model) weighted by the query-dot-key attention, and cache (torch.Tensor) of shape (batch, time_cache_next, size)
 Return type
output (torch.Tensor)
 get_default_strategy_config() → dataclasses.dataclass
Returns a default adapter module strategy.
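The proj_dim rules above can be expressed as a small helper. This is a sketch of the documented behaviour only (the function name is made up, and the proj_dim == 0 case is not specified by the docs, so it is not handled):

```python
def resolve_proj_dim(proj_dim, n_feat, n_head):
    """Resolve the effective attention projection size per the documented rules."""
    if proj_dim is None:
        return n_feat    # no projection (equivalent to proj_dim = n_feat)
    if proj_dim > 0:
        return proj_dim  # project n_feat -> proj_dim
    return n_head        # proj_dim < 0: one projected dimension per head

assert resolve_proj_dim(None, n_feat=256, n_head=4) == 256
assert resolve_proj_dim(64, n_feat=256, n_head=4) == 64
assert resolve_proj_dim(-1, n_feat=256, n_head=4) == 4
```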
 class nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.RelPositionMultiHeadAttentionAdapter(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.parts.submodules.multi_head_attention.RelPositionMultiHeadAttention
,nemo.collections.common.parts.adapter_modules.AdapterModuleUtil
Multi-Head Attention layer of Transformer-XL with support of relative positional encoding. Paper: https://arxiv.org/abs/1901.02860
 Parameters
n_head (int) – number of heads
n_feat (int) – size of the features
dropout_rate (float) – dropout rate
proj_dim (int, optional) – Optional integer value for projection before computing attention. If None, there is no projection (equivalent to proj_dim = n_feat). If > 0, n_feat will be projected to proj_dim before calculating attention. If < 0, proj_dim will equal n_head, so that each head has a projected dimension of 1.
adapter_strategy – By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
 forward(query, key, value, mask, pos_emb, cache=None)
Compute ‘Scaled Dot Product Attention’ with relative positional encoding.
 Parameters
query (torch.Tensor) – (batch, time1, size)
key (torch.Tensor) – (batch, time2, size)
value (torch.Tensor) – (batch, time2, size)
mask (torch.Tensor) – (batch, time1, time2)
pos_emb (torch.Tensor) – (batch, time1, size)
cache (torch.Tensor) – (batch, time_cache, size)
 Returns
transformed value (batch, time1, d_model) weighted by the query-dot-key attention, and cache_next (torch.Tensor) of shape (batch, time_cache_next, size)
 Return type
output (torch.Tensor)
 get_default_strategy_config() → dataclasses.dataclass
Returns a default adapter module strategy.
 class nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.PositionalEncodingAdapter(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.parts.submodules.multi_head_attention.PositionalEncoding
,nemo.collections.common.parts.adapter_modules.AdapterModuleUtil
Absolute positional embedding adapter.
Note: The absolute positional embedding value is added to the input tensor without a residual connection. Therefore, the input is changed; if you only require the positional embedding, drop the returned x.
 Parameters
d_model (int) – The input dimension of x.
max_len (int) – The max sequence length.
xscale (float) – The input scaling factor. Defaults to 1.0.
adapter_strategy (AbstractAdapterStrategy) – By default, ReturnResultAdapterStrategyConfig. An adapter composition function object. NOTE: Since this is a positional encoding, it will not add a residual !
 get_default_strategy_config() → dataclasses.dataclass
Returns a default adapter module strategy.
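The behaviour noted above — the positional embedding added directly to the (scaled) input, with no residual — can be sketched with the standard sinusoidal formulation (Vaswani et al., 2017). This is a plain-Python sketch, not NeMo's PositionalEncoding code:

```python
import math

def positional_encoding(max_len, d_model):
    """Standard sinusoidal absolute positional encoding."""
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

def add_abs_pe(x, xscale=1.0):
    """Add the positional embedding directly to the (scaled) input —
    no residual connection, so the returned tensor differs from the input."""
    pe = positional_encoding(len(x), len(x[0]))
    return [[xscale * v + p for v, p in zip(row, prow)] for row, prow in zip(x, pe)]

x = [[0.0, 0.0], [0.0, 0.0]]  # (T=2, d_model=2) zero input
out = add_abs_pe(x)
# position 0 encodes to (sin 0, cos 0) = (0, 1), added straight onto the input
assert out[0] == [0.0, 1.0]
```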
 class nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.RelPositionalEncodingAdapter(*args: Any, **kwargs: Any)
Bases:
nemo.collections.asr.parts.submodules.multi_head_attention.RelPositionalEncoding
,nemo.collections.common.parts.adapter_modules.AdapterModuleUtil
Relative positional encoding for Transformer-XL’s layers. See: Appendix B in https://arxiv.org/abs/1901.02860
Note: The relative positional embedding value is not added to the input tensor. Therefore, the input is not changed; if you only require the positional embedding, drop the returned x.
 Parameters
d_model (int) – embedding dim
max_len (int) – maximum input length
xscale (bool) – whether to scale the input by sqrt(d_model)
adapter_strategy – By default, ReturnResultAdapterStrategyConfig. An adapter composition function object.
 get_default_strategy_config() → dataclasses.dataclass
Returns a default adapter module strategy.
Adapter Strategies
 class nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.MHAResidualAddAdapterStrategy(stochastic_depth: float = 0.0, l2_lambda: float = 0.0)
Bases:
nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategy
An implementation of residual addition of an adapter module with its input for the MHA Adapters.
 forward(input: torch.Tensor, adapter: torch.nn.Module, *, module: AdapterModuleMixin)
A basic strategy, comprising a residual connection over the input after the forward pass of the underlying adapter. Additional work is done to pack and unpack the dictionary of inputs and outputs.
Note: The value tensor is added to the output of the attention adapter as the residual connection.
 Parameters
input –
A dictionary of multiple input arguments for the adapter module.
 query, key, value: Original output tensor of the module, or the output of the previous adapter (if more than one adapter is enabled).
mask: Attention mask.
pos_emb: Optional positional embedding for relative encoding.
adapter – The adapter module that is currently required to perform the forward pass.
module – The calling module, in its entirety. It is a module that implements AdapterModuleMixin, therefore the strategy can access all other adapters in this module via module.adapter_layer.
 Returns
The result tensor, after one of the active adapters has finished its forward pass.
 compute_output(input: torch.Tensor, adapter: torch.nn.Module, *, module: AdapterModuleMixin) → torch.Tensor
Compute the output of a single adapter to some input.
 Parameters
input – Original output tensor of the module, or the output of the previous adapter (if more than one adapter is enabled).
adapter – The adapter module that is currently required to perform the forward pass.
module – The calling module, in its entirety. It is a module that implements AdapterModuleMixin, therefore the strategy can access all other adapters in this module via module.adapter_layer.
 Returns
The result tensor, after one of the active adapters has finished its forward pass.
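The residual-add composition that this strategy implements — output = input + adapter(input), optionally skipped via stochastic depth — can be sketched in plain Python. The function and the toy adapter below are illustrative, not the NeMo strategy class:

```python
import random

def residual_add_strategy(x, adapter, stochastic_depth=0.0, training=True):
    """Compose an adapter with its input via a residual connection.
    With probability `stochastic_depth` (training only), skip the adapter
    entirely so the layer reduces to the identity."""
    if training and stochastic_depth > 0.0 and random.random() < stochastic_depth:
        return x
    return [xi + ai for xi, ai in zip(x, adapter(x))]

# A toy adapter that outputs a constant +1 per feature.
adapter = lambda x: [1.0 for _ in x]
out = residual_add_strategy([0.5, -0.5], adapter)
assert out == [1.5, 0.5]
```

Because the adapter's contribution is additive, initializing it to output near-zero leaves the pretrained module's behaviour intact at the start of fine-tuning.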