Important
NeMo 2.0 is an experimental feature and currently released in the dev container only: nvcr.io/nvidia/nemo:dev. Please refer to NeMo 2.0 overview for information on getting started.
NeMo SSL collection API
Model Classes
Mixins
- class nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin
Bases:
nemo.collections.asr.parts.mixins.asr_adapter_mixins.ASRAdapterModelMixin
ASRModuleMixin is a mixin class added to ASR models in order to add methods that are specific to a particular instantiation of a module inside of an ASRModel.
Each method should first check that the module is present within the subclass, and support additional functionality if the corresponding module is present.
- change_attention_model(self_attention_model: Optional[str] = None, att_context_size: Optional[List[int]] = None, update_config: bool = True)
Update the self_attention_model if the function is available in the encoder.
- Parameters
self_attention_model (str) –
type of the attention layer and positional encoding
- 'rel_pos':
relative positional embedding and Transformer-XL
- 'rel_pos_local_attn':
relative positional embedding and Transformer-XL with local attention using overlapping windows. Attention context is determined by the att_context_size parameter.
- 'abs_pos':
absolute positional embedding and Transformer
If None is provided, the self_attention_model isn't changed. Defaults to None.
att_context_size (List[int]) – List of 2 ints corresponding to the left and right attention context sizes, or None to keep it unchanged. Defaults to None.
update_config (bool) – Whether to update the config or not with the new attention model. Defaults to True.
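For example, a Conformer-based checkpoint can be switched to local attention at inference time. A minimal sketch, assuming a Conformer CTC model (the checkpoint name is illustrative):

    from nemo.collections.asr.models import ASRModel

    # Load a Conformer-based ASR model (checkpoint name is illustrative).
    model = ASRModel.from_pretrained("stt_en_conformer_ctc_large")

    # Switch to local attention with 128 frames of left and right context,
    # and write the change back into the model config.
    model.change_attention_model(
        self_attention_model="rel_pos_local_attn",
        att_context_size=[128, 128],
        update_config=True,
    )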
- change_conv_asr_se_context_window(context_window: int, update_config: bool = True)
Update the context window of the SqueezeExcitation module if the provided model contains an encoder which is an instance of ConvASREncoder.
- Parameters
context_window –
An integer representing the number of input timeframes that will be used to compute the context. Each timeframe corresponds to a single window stride of the STFT features.
For example, if window_stride = 0.01 s, a context window of 128 corresponds to 128 * 0.01 = 1.28 s of context for computing the Squeeze step.
update_config – Whether to update the config or not with the new context window.
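A minimal sketch, assuming a model whose encoder is a ConvASREncoder with SqueezeExcitation blocks (the checkpoint name is illustrative):

    from nemo.collections.asr.models import ASRModel

    model = ASRModel.from_pretrained("stt_en_citrinet_1024")

    # With window_stride = 0.01 s, a 256-frame context window limits the
    # Squeeze step to roughly 256 * 0.01 = 2.56 s of audio context.
    model.change_conv_asr_se_context_window(context_window=256, update_config=True)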
- change_subsampling_conv_chunking_factor(subsampling_conv_chunking_factor: int, update_config: bool = True)
Update the subsampling_conv_chunking_factor if the function is available in the encoder. The default is 1 (auto). Set it to -1 (disabled) or to a specific value (a power of 2) if you run out of memory in the convolutional subsampling layers.
- Parameters
subsampling_conv_chunking_factor (int) – Chunking factor for the convolutional subsampling layers: 1 (auto), -1 (disabled), or a power of 2.
update_config (bool) – Whether to update the config with the new chunking factor. Defaults to True.
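A minimal sketch, assuming model is a Conformer-based ASR model (loaded as in the earlier example) that runs out of GPU memory in the convolutional subsampling layers on long inputs:

    # Split the input of the convolutional subsampling layers into 4 chunks
    # (the value must be a power of 2); pass -1 instead to disable chunking.
    model.change_subsampling_conv_chunking_factor(
        subsampling_conv_chunking_factor=4,
        update_config=True,
    )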
- conformer_stream_step(processed_signal: torch.Tensor, processed_signal_length: Optional[torch.Tensor] = None, cache_last_channel: Optional[torch.Tensor] = None, cache_last_time: Optional[torch.Tensor] = None, cache_last_channel_len: Optional[torch.Tensor] = None, keep_all_outputs: bool = True, previous_hypotheses: Optional[List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]] = None, previous_pred_out: Optional[torch.Tensor] = None, drop_extra_pre_encoded: Optional[int] = None, return_transcription: bool = True, return_log_probs: bool = False)
Simulates a forward step with caching for streaming purposes. It supports ASR models whose encoder supports streaming, such as Conformer.
- Parameters
processed_signal – the input audio signals
processed_signal_length – the lengths of the audio signals
cache_last_channel – the cache tensor for last-channel layers such as MHA
cache_last_channel_len – lengths for cache_last_channel
cache_last_time – the cache tensor for last-time layers such as convolutions
keep_all_outputs – if set to True, would not drop the extra outputs specified by encoder.streaming_cfg.valid_out_len
previous_hypotheses – the hypotheses from the previous step for RNNT models
previous_pred_out – the predicted outputs from the previous step for CTC models
drop_extra_pre_encoded – number of steps to drop from the beginning of the outputs after the downsampling module. This can be used if extra paddings are added on the left side of the input.
return_transcription – whether to decode and return the transcriptions; it cannot be disabled for Transducer models.
return_log_probs – whether to return the log probs; only valid for CTC models
- Returns
greedy_predictions: the greedy predictions from the decoder
all_hyp_or_transcribed_texts: the decoder hypotheses for Transducer models and the transcriptions for CTC models
cache_last_channel_next: the updated tensor cache for last-channel layers, to be used for the next streaming step
cache_last_time_next: the updated tensor cache for last-time layers, to be used for the next streaming step
cache_last_channel_next_len: the updated lengths for cache_last_channel
best_hyp: the best hypotheses for Transducer models
log_probs: the logits tensor of the current streaming chunk; only returned when return_log_probs=True
encoded_len: the length of the output log_probs plus the history-chunk log_probs; only returned when return_log_probs=True
- Return type
greedy_predictions
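A minimal streaming-loop sketch. The cache-initialization helper (encoder.get_initial_cache_state) and the processed_chunks iterable of pre-computed feature chunks are assumptions; the call to conformer_stream_step and the unpacking order follow the signature and Returns list above:

    # Initialize the encoder caches (helper name is an assumption about the
    # cache-aware Conformer encoder API).
    cache_ch, cache_t, cache_ch_len = model.encoder.get_initial_cache_state(batch_size=1)
    prev_hyps = None

    # processed_chunks stands in for an iterable of (features, feature_lengths)
    # chunks produced by the preprocessor, one per streaming step.
    for chunk, chunk_len in processed_chunks:
        (greedy_preds, transcripts, cache_ch, cache_t, cache_ch_len,
         best_hyp) = model.conformer_stream_step(
            processed_signal=chunk,
            processed_signal_length=chunk_len,
            cache_last_channel=cache_ch,
            cache_last_time=cache_t,
            cache_last_channel_len=cache_ch_len,
            keep_all_outputs=False,
            previous_hypotheses=prev_hyps,
            return_transcription=True,
        )
        prev_hyps = best_hyp  # feed the best hypotheses back in for RNNT models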
- transcribe_simulate_cache_aware_streaming(paths2audio_files: List[str], batch_size: int = 4, logprobs: bool = False, return_hypotheses: bool = False, online_normalization: bool = False)
Simulates cache-aware streaming to transcribe the given audio files.
- Parameters
paths2audio_files – (a list) of paths to audio files.
batch_size – (int) batch size to use during inference. A larger batch size gives better throughput but uses more memory.
logprobs – (bool) pass True to get log probabilities instead of transcripts.
return_hypotheses – (bool) either return hypotheses or text. With hypotheses you can do postprocessing such as getting timestamps or rescoring.
online_normalization – (bool) perform normalization on the fly, per chunk.
- Returns
A list of transcriptions (or raw log probabilities if logprobs is True) in the same order as paths2audio_files
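A minimal sketch of simulated cache-aware streaming transcription of files on disk (file paths are illustrative), reusing a cache-aware model loaded as above:

    transcripts = model.transcribe_simulate_cache_aware_streaming(
        paths2audio_files=["audio_0.wav", "audio_1.wav"],
        batch_size=2,
        return_hypotheses=False,
        online_normalization=True,
    )
    print(transcripts)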
- class nemo.core.classes.mixins.access_mixins.AccessMixin
Bases:
abc.ABC
Allows access to the outputs of intermediate layers of a model.
- property access_cfg
Returns: The global access config shared across all access mixin modules.
- classmethod get_module_registry(module: torch.nn.Module)
Extract all registries from named submodules and return a dictionary whose keys are the flattened module names and whose values are the internal registry of each such module.
- register_accessible_tensor(name, tensor)
Register tensor for later use.
- reset_registry(registry_key: Optional[str] = None)
Reset the registries of all named sub-modules
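A minimal usage sketch, assuming model is a NeMo module that inherits AccessMixin and whose submodules registered tensors during a preceding forward pass:

    from nemo.core.classes.mixins.access_mixins import AccessMixin

    # Collect the registered intermediate tensors of every named submodule.
    registry = AccessMixin.get_module_registry(model)
    for module_name, module_registry in registry.items():
        print(module_name, list(module_registry.keys()))

    # Clear the stored tensors before the next forward pass.
    model.reset_registry()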