NeMo Speech Intent Classification and Slot Filling collection API

class nemo.collections.asr.models.SLUIntentSlotBPEModel(*args: Any, **kwargs: Any)

Bases: nemo.collections.asr.models.asr_model.ASRModel, nemo.collections.asr.models.asr_model.ExportableEncDecModel, nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin, nemo.collections.asr.parts.mixins.mixins.ASRBPEMixin, nemo.collections.asr.parts.mixins.transcription.ASRTranscriptionMixin

Model for end-to-end speech intent classification and slot filling, which is formulated as a speech-to-sequence task.

forward(input_signal=None, input_signal_length=None, target_semantics=None, target_semantics_length=None, processed_signal=None, processed_signal_length=None)

Forward pass of the model.

Params:

input_signal: Tensor that represents a batch of raw audio signals, of shape [B, T]. T here represents timesteps, with 1 second of audio represented as self.sample_rate number of floating point values.

input_signal_length: Vector of length B, that contains the individual lengths of the audio sequences.

target_semantics: Tensor that represents a batch of semantic tokens, of shape [B, L].

target_semantics_length: Vector of length B, that contains the individual lengths of the semantic sequences.

processed_signal: Tensor that represents a batch of processed audio signals, of shape [B, D, T], that has undergone processing via some DALI preprocessor.

processed_signal_length: Vector of length B, that contains the individual lengths of the processed audio sequences.

Returns

A tuple of 3 elements:

  1) The log probabilities tensor of shape [B, T, D].

  2) The lengths of the output sequence after the decoder, of shape [B].

  3) The token predictions of the model, of shape [B, T].
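As an illustrative sketch, a forward pass with dummy inputs might look like the following. The checkpoint name, the 16 kHz sample rate, and the random target tokens are assumptions for demonstration only, not guaranteed by this page:

    import torch
    from nemo.collections.asr.models import SLUIntentSlotBPEModel

    # Hypothetical checkpoint name; use list_available_models() for real ones.
    model = SLUIntentSlotBPEModel.from_pretrained("slu_conformer_transformer_large_slurp")
    model.eval()

    # One 4-second dummy waveform, assuming a 16 kHz sample rate.
    audio = torch.randn(1, 4 * 16000)
    audio_len = torch.tensor([audio.shape[1]])

    # Dummy semantic target tokens of shape [B, L]; real targets come from the tokenizer.
    targets = torch.randint(0, model.tokenizer.vocab_size, (1, 8))
    targets_len = torch.tensor([8])

    with torch.no_grad():
        log_probs, lengths, predictions = model(
            input_signal=audio,
            input_signal_length=audio_len,
            target_semantics=targets,
            target_semantics_length=targets_len,
        )
    print(log_probs.shape)  # [B, T, D]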

property input_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]

Define these to enable input neural type checks

classmethod list_available_models() → Optional[nemo.core.classes.common.PretrainedModelInfo]

This method returns a list of pre-trained models that can be instantiated directly from NVIDIA's NGC cloud.

Returns

List of available pre-trained models.
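For example, to inspect what is available (the printed fields follow PretrainedModelInfo and are shown for illustration):

    from nemo.collections.asr.models import SLUIntentSlotBPEModel

    for info in SLUIntentSlotBPEModel.list_available_models():
        print(info.pretrained_model_name, info.location)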

property output_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]

Define these to enable output neural type checks

setup_test_data(test_data_config: Optional[Union[omegaconf.DictConfig, Dict]])

Sets up the test data loader via a Dict-like object.

Parameters

test_data_config – A config that contains the information regarding construction of an ASR test dataset.

setup_training_data(train_data_config: Optional[Union[omegaconf.DictConfig, Dict]])

Sets up the training data loader via a Dict-like object.

Parameters

train_data_config – A config that contains the information regarding construction of an ASR Training dataset.

setup_validation_data(val_data_config: Optional[Union[omegaconf.DictConfig, Dict]])

Sets up the validation data loader via a Dict-like object.

Parameters

val_data_config – A config that contains the information regarding construction of an ASR validation dataset.

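All three setup methods accept a Dict-like config. As a rough sketch, given a model instance, a minimal config might look like the following; the field names are assumptions based on common NeMo ASR dataset configs, and the manifest path is a placeholder:

    train_config = {
        "manifest_filepath": "/data/train_manifest.json",  # placeholder path
        "sample_rate": 16000,
        "batch_size": 16,
        "shuffle": True,
        "num_workers": 4,
    }
    model.setup_training_data(train_data_config=train_config)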
transcribe(audio: List[str], batch_size: int = 4, return_hypotheses: bool = False, num_workers: int = 0, verbose: bool = True) → Union[List[str], List[Hypothesis], Tuple[List[str]], Tuple[List[Hypothesis]]]

Uses greedy decoding to transcribe audio files into SLU semantics. Use this method for debugging and prototyping.

Parameters
  • audio – A list of paths to audio files. The recommended length per file is between 5 and 25 seconds, but a file several hours long can be passed if enough GPU memory is available.

  • batch_size – (int) batch size to use during inference. Larger values give better throughput but use more memory.

  • return_hypotheses – (bool) Either return hypotheses or text. With hypotheses, you can do some postprocessing, such as extracting timestamps or rescoring.

  • num_workers – (int) number of workers for DataLoader

  • verbose – (bool) whether to display tqdm progress bar

Returns

A list of transcriptions (or Hypothesis objects if return_hypotheses is True) in the same order as the input audio files.
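For instance (the checkpoint name and audio paths are placeholders):

    from nemo.collections.asr.models import SLUIntentSlotBPEModel

    # Hypothetical checkpoint name; use list_available_models() for real ones.
    model = SLUIntentSlotBPEModel.from_pretrained("slu_conformer_transformer_large_slurp")

    semantics = model.transcribe(
        audio=["utt1.wav", "utt2.wav"],  # placeholder paths
        batch_size=2,
    )
    print(semantics[0])  # a serialized semantics string, e.g. intent and slots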

class nemo.collections.asr.parts.mixins.ASRModuleMixin

Bases: nemo.collections.asr.parts.mixins.asr_adapter_mixins.ASRAdapterModelMixin

ASRModuleMixin is a mixin class added to ASR models to provide methods that are specific to a particular instantiation of a module inside of an ASRModel.

Each method first checks that the module is present within the subclass, and supports the additional functionality only if the corresponding module is present.

change_attention_model(self_attention_model: Optional[str] = None, att_context_size: Optional[List[int]] = None, update_config: bool = True)

Update the self_attention_model if the function is available in the encoder.

Parameters
  • self_attention_model (str) – type of the attention layer and positional encoding:

    'rel_pos': relative positional embedding and Transformer-XL

    'rel_pos_local_attn': relative positional embedding and Transformer-XL with local attention using overlapping windows. The attention context is determined by the att_context_size parameter.

    'abs_pos': absolute positional embedding and Transformer

    If None is provided, the self_attention_model isn't changed. Defaults to None.

  • att_context_size (List[int]) – List of 2 ints corresponding to the left and right attention context sizes, or None to keep them as they are. Defaults to None.

  • update_config (bool) – Whether to update the config or not with the new attention model. Defaults to True.
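For example, to switch a Conformer-style encoder to local attention with a limited context (assuming the loaded model's encoder implements this function):

    # Use local attention with 128 frames of left and right context each.
    model.change_attention_model(
        self_attention_model="rel_pos_local_attn",
        att_context_size=[128, 128],
        update_config=True,
    )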

change_conv_asr_se_context_window(context_window: int, update_config: bool = True)

Update the context window of the SqueezeExcitation module if the provided model contains an encoder which is an instance of ConvASREncoder.

Parameters
  • context_window

    An integer representing the number of input timeframes that will be used to compute the context. Each timeframe corresponds to a single window stride of the STFT features.

    Say the window_stride = 0.01 s; then a context window of 128 represents 128 * 0.01 s = 1.28 s of context for the Squeeze step.

  • update_config – Whether to update the config or not with the new context window.
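Continuing that example, the following call would set roughly 1.28 s of Squeeze-and-Excitation context, assuming a 0.01 s window stride and a ConvASREncoder-based model:

    # 128 timeframes * 0.01 s per frame = 1.28 s of SE context
    model.change_conv_asr_se_context_window(context_window=128)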

change_subsampling_conv_chunking_factor(subsampling_conv_chunking_factor: int, update_config: bool = True)

Update the conv_chunking_factor (int) if the function is available in the encoder. The default is 1 (auto). Set it to -1 (disabled) or to a specific value (a power of 2) if you hit OOM in the conv subsampling layers.

Parameters

subsampling_conv_chunking_factor (int) – the chunking factor to apply in the conv subsampling layers, as described above
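For instance, if the convolutional subsampling layers run out of memory on long inputs, an explicit power-of-2 factor can be set:

    # Split conv subsampling inputs into 4 chunks to reduce peak memory.
    model.change_subsampling_conv_chunking_factor(subsampling_conv_chunking_factor=4)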

conformer_stream_step(processed_signal: torch.Tensor, processed_signal_length: Optional[torch.Tensor] = None, cache_last_channel: Optional[torch.Tensor] = None, cache_last_time: Optional[torch.Tensor] = None, cache_last_channel_len: Optional[torch.Tensor] = None, keep_all_outputs: bool = True, previous_hypotheses: Optional[List[nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis]] = None, previous_pred_out: Optional[torch.Tensor] = None, drop_extra_pre_encoded: Optional[int] = None, return_transcription: bool = True, return_log_probs: bool = False)

It simulates a forward step with caching for streaming purposes. It supports ASR models whose encoder supports streaming, like Conformer.

Parameters
  • processed_signal – the input audio signals

  • processed_signal_length – the lengths of the audios

  • cache_last_channel – the cache tensor for last-channel layers like MHA

  • cache_last_channel_len – lengths for cache_last_channel

  • cache_last_time – the cache tensor for last-time layers like convolutions

  • keep_all_outputs – if set to True, would not drop the extra outputs specified by encoder.streaming_cfg.valid_out_len

  • previous_hypotheses – the hypotheses from the previous step for RNNT models

  • previous_pred_out – the predicted outputs from the previous step for CTC models

  • drop_extra_pre_encoded – number of steps to drop from the beginning of the outputs after the downsampling module. This can be used if extra paddings are added on the left side of the input.

  • return_transcription – whether to decode and return the transcriptions. It cannot be disabled for Transducer models.

  • return_log_probs – whether to return the log probs; only valid for CTC models

Returns

  • greedy_predictions: the greedy predictions from the decoder

  • all_hyp_or_transcribed_texts: the decoder hypotheses for Transducer models and the transcriptions for CTC models

  • cache_last_channel_next: the updated tensor cache for last-channel layers, to be used for the next streaming step

  • cache_last_time_next: the updated tensor cache for last-time layers, to be used for the next streaming step

  • cache_last_channel_next_len: the updated lengths for cache_last_channel

  • best_hyp: the best hypotheses for Transducer models

  • log_probs: the logits tensor of the current streaming chunk; only returned when return_log_probs=True

  • encoded_len: the length of the output log_probs plus the history chunk log_probs; only returned when return_log_probs=True
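A rough sketch of a streaming loop built on this method. Assumptions to verify against your NeMo version: the encoder is a cache-aware Conformer exposing get_initial_cache_state, six values are returned when return_log_probs is False, and the fixed chunk size below matches the model's streaming configuration:

    import torch

    model.eval()
    # Assumed helper on cache-aware Conformer encoders.
    cache_ch, cache_t, cache_ch_len = model.encoder.get_initial_cache_state(batch_size=1)

    feats = torch.randn(1, 80, 1600)  # dummy preprocessed features [B, D, T]
    chunk_size = 160                  # must match the model's streaming_cfg in practice
    prev_hyps = None

    for start in range(0, feats.shape[2], chunk_size):
        chunk = feats[:, :, start:start + chunk_size]
        chunk_len = torch.tensor([chunk.shape[2]])
        with torch.no_grad():
            (
                greedy_preds,
                transcripts,
                cache_ch,
                cache_t,
                cache_ch_len,
                prev_hyps,
            ) = model.conformer_stream_step(
                processed_signal=chunk,
                processed_signal_length=chunk_len,
                cache_last_channel=cache_ch,
                cache_last_time=cache_t,
                cache_last_channel_len=cache_ch_len,
                previous_hypotheses=prev_hyps,
                return_transcription=True,
            )
        print(transcripts)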

transcribe_simulate_cache_aware_streaming(paths2audio_files: List[str], batch_size: int = 4, logprobs: bool = False, return_hypotheses: bool = False, online_normalization: bool = False)

Simulates cache-aware streaming to transcribe the given audio files.

Parameters
  • paths2audio_files – A list of paths to audio files.

  • batch_size – (int) batch size to use during inference. Larger values give better throughput but use more memory.

  • logprobs – (bool) pass True to get log probabilities instead of transcripts.

  • return_hypotheses – (bool) Either return hypotheses or text. With hypotheses, you can do some postprocessing, such as extracting timestamps or rescoring.

  • online_normalization – (bool) perform normalization on the fly per chunk.

Returns

A list of transcriptions (or raw log probabilities if logprobs is True) in the same order as paths2audio_files

class nemo.collections.asr.parts.mixins.ASRBPEMixin

Bases: abc.ABC

ASR BPE Mixin class that sets up a Tokenizer via a config

This mixin class adds the method _setup_tokenizer(…), which can be used by ASR models which depend on subword tokenization.

The _setup_tokenizer method adds the following attributes to the class:
  • tokenizer_cfg: The resolved config supplied to the tokenizer (with dir and type arguments).

  • tokenizer_dir: The directory path to the tokenizer vocabulary + additional metadata.

  • tokenizer_type: The type of the tokenizer. Currently supports bpe and wpe, as well as agg.

  • vocab_path: Resolved path to the vocabulary text file.

In addition to these variables, the method will also instantiate and preserve a tokenizer (subclass of TokenizerSpec) if successful, and assign it to self.tokenizer.

The mixin also supports aggregate tokenizers, which consist of ordinary, monolingual tokenizers. If a conversion between a monolingual and an aggregate tokenizer (or vice versa) is detected, all registered artifacts will be cleaned up.
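As a sketch, a model using this mixin would resolve its tokenizer from a config carrying the dir and type arguments described above; the path below is a placeholder:

    from omegaconf import DictConfig

    tokenizer_cfg = DictConfig({
        "dir": "/path/to/tokenizer_dir",  # placeholder: vocabulary + metadata
        "type": "bpe",                    # supported types include bpe, wpe, agg
    })
    # Inside an ASR model subclass that mixes in ASRBPEMixin:
    # self._setup_tokenizer(tokenizer_cfg)  # assigns self.tokenizer on success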

save_tokenizers(directory: str)

Save the model tokenizer(s) to the specified directory.

Parameters

directory – The directory to save the tokenizer(s) to.
