NeMo SSL collection API

class nemo.collections.asr.models.SpeechEncDecSelfSupervisedModel(*args: Any, **kwargs: Any)

Bases: nemo.core.classes.modelPT.ModelPT, nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin, nemo.core.classes.mixins.access_mixins.AccessMixin

Base class for encoder-decoder models used for self-supervised encoder pre-training

decoder_loss_step(spectrograms, spec_masks, encoded, encoded_len, targets=None, target_lengths=None)

Forward pass through all decoders and calculate corresponding losses.

Parameters
  • spectrograms – Processed spectrograms of shape [B, D, T].

  • spec_masks – Masks applied to spectrograms of shape [B, D, T].

  • encoded – The encoded features tensor of shape [B, D, T].

  • encoded_len – The lengths of the acoustic sequence after propagation through the encoder, of shape [B].

  • targets – Optional target labels of shape [B, T].

  • target_lengths – Optional target label lengths of shape [B].

Returns

A tuple of 2 elements - 1) Total sum of losses weighted by corresponding loss_alphas 2) Dictionary of unweighted losses

forward(input_signal=None, input_signal_length=None, processed_signal=None, processed_signal_length=None)

Forward pass of the model.

Parameters
  • input_signal – Tensor that represents a batch of raw audio signals, of shape [B, T]. T here represents timesteps, with 1 second of audio represented as self.sample_rate number of floating point values.

  • input_signal_length – Vector of length B, that contains the individual lengths of the audio sequences.

  • processed_signal – Tensor that represents a batch of processed audio signals, of shape [B, D, T], that has undergone processing via some DALI preprocessor.

  • processed_signal_length – Vector of length B, that contains the individual lengths of the processed audio sequences.

Returns

A tuple of 4 elements - 1) Processed spectrograms of shape [B, D, T]. 2) Masks applied to spectrograms of shape [B, D, T]. 3) The encoded features tensor of shape [B, D, T]. 4) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
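
A minimal usage sketch combining forward() and decoder_loss_step() is shown below. The checkpoint name, batch shapes, and random inputs are placeholders, not part of this API:

    import torch
    from nemo.collections.asr.models import SpeechEncDecSelfSupervisedModel

    # Restore a pre-trained SSL checkpoint (name is illustrative; see
    # list_available_models() below for the actual listing).
    model = SpeechEncDecSelfSupervisedModel.from_pretrained(model_name="ssl_en_conformer_large")
    # Keep the model in train mode so that masking is applied and spec_masks are populated.
    model.train()

    # One second of 16 kHz audio per batch element (placeholder random data).
    batch_size, num_samples = 2, 16000
    input_signal = torch.randn(batch_size, num_samples)
    input_signal_length = torch.tensor([num_samples, num_samples])

    with torch.no_grad():
        spectrograms, spec_masks, encoded, encoded_len = model.forward(
            input_signal=input_signal,
            input_signal_length=input_signal_length,
        )
        # Total weighted loss and a dict of unweighted per-decoder losses.
        loss_value, loss_dict = model.decoder_loss_step(
            spectrograms, spec_masks, encoded, encoded_len
        )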

property input_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]

Define these to enable input neural type checks

classmethod list_available_models() → List[nemo.core.classes.common.PretrainedModelInfo]

This method returns a list of pre-trained models which can be instantiated directly from NVIDIA’s NGC cloud.

Returns

List of available pre-trained models.
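
For example, the available checkpoints can be listed and one of them restored via from_pretrained(); the specific model name below is illustrative and should be taken from the listing:

    from nemo.collections.asr.models import SpeechEncDecSelfSupervisedModel

    # Print the names of the SSL checkpoints published on NGC for this class.
    for model_info in SpeechEncDecSelfSupervisedModel.list_available_models():
        print(model_info.pretrained_model_name)

    # Restore one of the listed checkpoints directly from NGC.
    ssl_model = SpeechEncDecSelfSupervisedModel.from_pretrained(
        model_name="ssl_en_conformer_large"  # illustrative; pick one from the listing above
    )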

multi_validation_epoch_end(outputs, dataloader_idx: int = 0)

Adds support for multiple validation datasets. Should be overridden by subclasses to obtain appropriate logs for each of the dataloaders.

Parameters
  • outputs – Same as that provided by LightningModule.on_validation_epoch_end() for a single dataloader.

  • dataloader_idx – int representing the index of the dataloader.

Returns

A dictionary of values, optionally containing a sub-dict log, such that the values in the log will be prefixed with the dataloader prefix.
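
A minimal sketch of such an override, assuming each element of outputs carries a val_loss tensor collected during validation (the subclass and key names are illustrative):

    import torch
    from nemo.collections.asr.models import SpeechEncDecSelfSupervisedModel

    class MySSLModel(SpeechEncDecSelfSupervisedModel):
        def multi_validation_epoch_end(self, outputs, dataloader_idx: int = 0):
            # Average the per-batch validation losses for this dataloader.
            val_loss_mean = torch.stack([x["val_loss"] for x in outputs]).mean()
            # Values placed under 'log' are prefixed with the dataloader prefix.
            return {"val_loss": val_loss_mean, "log": {"val_loss": val_loss_mean}}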

property output_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]

Define these to enable output neural type checks

setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])

Sets up the training data loader via a Dict-like object.

Parameters

train_data_config – A config that contains the information regarding construction of an ASR Training dataset.

Supported Datasets:

setup_validation_data(val_data_config: Optional[Union[omegaconf.DictConfig, Dict]])

Sets up the validation data loader via a Dict-like object.

Parameters

val_data_config – A config that contains the information regarding construction of an ASR Validation dataset.

Supported Datasets:
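
Both setup methods accept a DictConfig (or a plain dict). A hedged sketch of a minimal config is shown below; the manifest paths are placeholders and the key set should mirror the dataset section of the model's YAML config:

    from omegaconf import OmegaConf
    from nemo.collections.asr.models import SpeechEncDecSelfSupervisedModel

    model = SpeechEncDecSelfSupervisedModel.from_pretrained(model_name="ssl_en_conformer_large")

    # Placeholder manifest paths and illustrative values.
    train_data_config = OmegaConf.create({
        "manifest_filepath": "/path/to/train_manifest.json",
        "sample_rate": 16000,
        "batch_size": 16,
        "shuffle": True,
    })
    model.setup_training_data(train_data_config=train_data_config)

    val_data_config = OmegaConf.create({
        "manifest_filepath": "/path/to/val_manifest.json",
        "sample_rate": 16000,
        "batch_size": 16,
        "shuffle": False,
    })
    model.setup_validation_data(val_data_config=val_data_config)
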
class nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin

Bases: nemo.collections.asr.parts.mixins.asr_adapter_mixins.ASRAdapterModelMixin

ASRModuleMixin is a mixin class added to ASR models to provide methods that are specific to a particular instantiation of a module inside an ASRModel.

Each method should first check that the module is present within the subclass, and support additional functionality if the corresponding module is present.

change_attention_model(self_attention_model: Optional[str] = None, att_context_size: Optional[List[int]] = None, update_config: bool = True)

Update the self_attention_model if the function is available in the encoder.

Parameters
  • self_attention_model (str) –

    Type of the attention layer and positional encoding:

    ’rel_pos’: relative positional embedding and Transformer-XL

    ’rel_pos_local_attn’: relative positional embedding and Transformer-XL with local attention using overlapping windows. The attention context is determined by the att_context_size parameter.

    ’abs_pos’: absolute positional embedding and Transformer

    If None is provided, the self_attention_model isn’t changed. Defaults to None.

  • att_context_size (List[int]) – List of 2 ints corresponding to the left and right attention context sizes, or None to keep it unchanged. Defaults to None.

  • update_config (bool) – Whether to update the config or not with the new attention model. Defaults to True.
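
For example, to switch a Conformer-style encoder to local attention with a 128-frame context on each side (assuming model is an instance whose encoder implements this method):

    model.change_attention_model(
        self_attention_model="rel_pos_local_attn",
        att_context_size=[128, 128],  # left and right context in frames
        update_config=True,
    )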

change_conv_asr_se_context_window(context_window: int, update_config: bool = True)

Update the context window of the SqueezeExcitation module if the provided model contains an encoder which is an instance of ConvASREncoder.

Parameters
  • context_window

    An integer representing the number of input timeframes that will be used to compute the context. Each timeframe corresponds to a single window stride of the STFT features.

    For example, if window_stride = 0.01 s, then a context window of 128 represents 128 * 0.01 = 1.28 s of context for computing the Squeeze step.

  • update_config – Whether to update the config or not with the new context window.
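
For example, with window_stride = 0.01 s, the following sets roughly 2.56 s of context for the Squeeze step (assuming model contains a ConvASREncoder):

    # 256 timeframes * 0.01 s per frame = 2.56 s of context.
    model.change_conv_asr_se_context_window(context_window=256, update_config=True)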

change_subsampling_conv_chunking_factor(subsampling_conv_chunking_factor: int, update_config: bool = True)

Update the conv_chunking_factor (int) if the function is available in the encoder. The default is 1 (auto). Set it to -1 (disabled) or to a specific value (a power of 2) if you run into OOM errors in the conv subsampling layers.

Parameters

subsampling_conv_chunking_factor (int) – The chunking factor to apply: 1 (auto), -1 (disabled), or a power of 2.
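
For example, to disable chunking entirely, or to force a specific power-of-2 factor when the conv subsampling layers run out of memory:

    # -1 disables chunking; 1 is auto; powers of 2 (e.g. 4, 8) trade speed for memory.
    model.change_subsampling_conv_chunking_factor(subsampling_conv_chunking_factor=-1)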

conformer_stream_step(processed_signal: torch.Tensor, processed_signal_length: Optional[torch.Tensor] = None, cache_last_channel: Optional[torch.Tensor] = None, cache_last_time: Optional[torch.Tensor] = None, cache_last_channel_len: Optional[torch.Tensor] = None, keep_all_outputs: bool = True, previous_hypotheses: Optional[List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]] = None, previous_pred_out: Optional[torch.Tensor] = None, drop_extra_pre_encoded: Optional[int] = None, return_transcription: bool = True, return_log_probs: bool = False)

Simulates a forward step with caching for streaming purposes. It supports ASR models whose encoder supports streaming, such as Conformer.

Parameters
  • processed_signal – the input audio signals

  • processed_signal_length – the lengths of the audio signals

  • cache_last_channel – the cache tensor for last-channel layers, such as MHA

  • cache_last_channel_len – lengths for cache_last_channel

  • cache_last_time – the cache tensor for last-time layers, such as convolutions

  • keep_all_outputs – if set to True, does not drop the extra outputs specified by encoder.streaming_cfg.valid_out_len

  • previous_hypotheses – the hypotheses from the previous step, for RNNT models

  • previous_pred_out – the predicted outputs from the previous step, for CTC models

  • drop_extra_pre_encoded – number of steps to drop from the beginning of the outputs after the downsampling module. This can be used if extra paddings are added on the left side of the input.

  • return_transcription – whether to decode and return the transcriptions. It cannot be disabled for Transducer models.

  • return_log_probs – whether to return the log probs, only valid for CTC models

Returns

greedy_predictions: the greedy predictions from the decoder

all_hyp_or_transcribed_texts: the decoder hypotheses for Transducer models and the transcriptions for CTC models

cache_last_channel_next: the updated tensor cache for last-channel layers, to be used in the next streaming step

cache_last_time_next: the updated tensor cache for last-time layers, to be used in the next streaming step

cache_last_channel_next_len: the updated lengths for cache_last_channel

best_hyp: the best hypotheses for Transducer models

log_probs: the logits tensor of the current streaming chunk, only returned when return_log_probs=True

encoded_len: the length of the output log_probs + history chunk log_probs, only returned when return_log_probs=True

Return type

greedy_predictions
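
A hedged streaming sketch is shown below; it assumes model is a cache-aware streaming ASR model (e.g. a streaming Conformer-CTC or Transducer), and the chunk sizes, feature dimension, random inputs, and the cache-initialization helper encoder.get_initial_cache_state() are assumptions to be checked against the model's cache-aware streaming configuration:

    import torch

    batch_size, feat_dim, chunk_frames, num_chunks = 1, 80, 160, 10

    # Assumed helper for initializing the streaming caches.
    cache_last_channel, cache_last_time, cache_last_channel_len = (
        model.encoder.get_initial_cache_state(batch_size=batch_size)
    )

    for step in range(num_chunks):
        chunk = torch.randn(batch_size, feat_dim, chunk_frames)  # placeholder features
        chunk_len = torch.tensor([chunk_frames])
        # Unpacking follows the return values documented above (return_log_probs=False).
        (
            greedy_predictions,
            transcribed_texts,
            cache_last_channel,
            cache_last_time,
            cache_last_channel_len,
            best_hyp,
        ) = model.conformer_stream_step(
            processed_signal=chunk,
            processed_signal_length=chunk_len,
            cache_last_channel=cache_last_channel,
            cache_last_time=cache_last_time,
            cache_last_channel_len=cache_last_channel_len,
            keep_all_outputs=(step == num_chunks - 1),
            return_transcription=True,
        )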

transcribe_simulate_cache_aware_streaming(paths2audio_files: List[str], batch_size: int = 4, logprobs: bool = False, return_hypotheses: bool = False, online_normalization: bool = False)

Parameters
  • paths2audio_files – (a list) of paths to audio files.

  • batch_size – (int) batch size to use during inference. A larger batch size gives better throughput but uses more memory.

  • logprobs – (bool) pass True to get log probabilities instead of transcripts.

  • return_hypotheses – (bool) Whether to return hypotheses or text. With hypotheses, you can do postprocessing such as getting timestamps or rescoring.

  • online_normalization – (bool) Perform normalization on the fly, per chunk.

Returns

A list of transcriptions (or raw log probabilities if logprobs is True) in the same order as paths2audio_files
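
For example (the audio path is a placeholder, and model is assumed to be an ASR model that supports cache-aware streaming):

    transcripts = model.transcribe_simulate_cache_aware_streaming(
        paths2audio_files=["/path/to/audio.wav"],
        batch_size=1,
        return_hypotheses=False,
        online_normalization=True,
    )
    print(transcripts[0])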

class nemo.core.classes.mixins.access_mixins.AccessMixin

Bases: abc.ABC

Allows access to the outputs of intermediate layers of a model

property access_cfg

Returns: The global access config shared across all access mixin modules.

classmethod get_module_registry(module: torch.nn.Module)

Extract all registries from named submodules and return a dictionary where the keys are the flattened module names and the values are the internal registry of each such module.

register_accessible_tensor(name, tensor)

Register tensor for later use.

reset_registry(registry_key: Optional[str] = None)

Reset the registries of all named sub-modules
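
A minimal sketch of reading and clearing captured tensors after a forward pass, assuming access has been enabled in the model's access config and that model, input_signal, and input_signal_length are defined as in the forward() example above:

    import torch
    from nemo.core.classes.mixins.access_mixins import AccessMixin

    with torch.no_grad():
        model.forward(input_signal=input_signal, input_signal_length=input_signal_length)

    # Flattened {module_name: registry} dict of tensors captured during forward.
    registry = AccessMixin.get_module_registry(model.encoder)
    print(list(registry.keys()))

    # Clear captured tensors to free memory before the next forward pass.
    model.reset_registry()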
