NeMo Speaker Recognition API#

Model Classes#

class nemo.collections.asr.models.label_models.EncDecSpeakerLabelModel(
cfg: DictConfig,
trainer: Trainer = None,
)[source]#

Bases: ModelPT, ExportableEncDecModel, VerificationMixin

Encoder-decoder class for speaker label models. The model class creates training and validation methods for setting up the data and performing the model forward pass. Expects a config dict for:

  • preprocessor

  • Jasper/Quartznet Encoder

  • Speaker Decoder
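A minimal usage sketch for obtaining a ready-to-use instance: rather than building the config by hand, a pretrained checkpoint can be restored from NGC. The checkpoint name below is an assumption; use list_available_models() to see what is actually published.

import nemo.collections.asr as nemo_asr

# Restore a pretrained speaker model from NGC.
# "titanet_large" is an assumed checkpoint name; list_available_models()
# shows the models that are actually available.
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained("titanet_large")
speaker_model.eval()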

batch_inference(
manifest_filepath,
batch_size=32,
sample_rate=16000,
device='cuda',
)[source]#

Perform batch inference on EncDecSpeakerLabelModel. To perform inference on a single audio file, one can use infer_file, get_label, or get_embedding.

To map predicted labels, one can do:

arg_values = logits.argmax(axis=1)
pred_labels = list(map(lambda t: trained_labels[t], arg_values))

Parameters:
  • manifest_filepath – Path to manifest file

  • batch_size – batch size to perform batch inference

  • sample_rate – sample rate of audio files in manifest file

  • device – compute device to perform operations.

Returns:

The variables below all follow the audio file order in the manifest file.

  • embs – embeddings of the files provided in the manifest file

  • logits – logits of the final layer of the EncDecSpeakerLabelModel

  • gt_labels – labels from the manifest file (needed for speaker enrollment and testing)

  • trained_labels – classification labels sorted in the order in which they are mapped by the trained model
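A hedged usage sketch: the manifest path is hypothetical, and the manifest layout shown in the comment (one JSON object per line with audio_filepath, duration, and label keys) follows the usual NeMo speaker manifest convention.

# speakers.json contains one JSON object per line, e.g.
# {"audio_filepath": "/data/spk1_utt1.wav", "duration": 3.2, "label": "spk1"}

embs, logits, gt_labels, trained_labels = speaker_model.batch_inference(
    manifest_filepath="speakers.json",  # hypothetical manifest path
    batch_size=32,
    sample_rate=16000,
    device="cuda",
)

# Map the highest-scoring logit of each file back to a trained class label.
arg_values = logits.argmax(axis=1)
pred_labels = [trained_labels[t] for t in arg_values]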

evaluation_step(
batch,
batch_idx,
dataloader_idx: int = 0,
tag: str = 'val',
)[source]#
static extract_labels(data_layer_config)[source]#
forward(input_signal, input_signal_length)[source]#

Same as torch.nn.Module.forward().

Parameters:
  • input_signal – Tensor representing a batch of raw audio signals of shape [B, T], where T is the number of timesteps.

  • input_signal_length – Vector of length B containing the individual lengths of the audio signals.

Returns:

Your model’s output
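A minimal sketch of calling the forward pass directly on dummy audio. The tensor shapes follow the [B, T] convention described under forward_for_export below; treating the two returned tensors as the classification logits and the speaker embeddings is an assumption made for illustration.

import torch

batch_size, num_samples = 2, 16000  # one second of 16 kHz audio per item
audio = torch.randn(batch_size, num_samples).to(speaker_model.device)
lengths = torch.tensor([num_samples, num_samples]).to(speaker_model.device)

# The returned tensors are assumed to be (logits, embeddings).
logits, embeddings = speaker_model(input_signal=audio, input_signal_length=lengths)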

forward_for_export(audio_signal, length)[source]#

This forward is used when the model needs to be exported to ONNX format. The inputs cache_last_channel and cache_last_time need to be passed when exporting streaming models.

Parameters:
  • input – Tensor that represents a batch of raw audio signals of shape [B, T]. T here represents timesteps.

  • length – Vector of length B, that contains the individual lengths of the audio sequences.

  • cache_last_channel – Tensor of shape [N, B, T, H] which contains the cache for last channel layers

  • cache_last_time – Tensor of shape [N, B, H, T] which contains the cache for last time layers. N is the number of such layers that need caching, B is the batch size, H is the hidden size of activations, and T is the length of the cache.

Returns:

the output of the model

get_embedding(path2audio_file)[source]#

Returns the speaker embeddings for a provided audio file.

Parameters:

path2audio_file – path to an audio wav file

Returns:

speaker embeddings (Audio representations)

Return type:

emb
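A short usage sketch; the wav path is hypothetical.

# Extract a speaker embedding for a single utterance (path is hypothetical).
emb = speaker_model.get_embedding("/data/spk1_utt1.wav")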

get_label(
path2audio_file: str,
segment_duration: float = inf,
num_segments: int = 1,
random_seed: int = None,
)[source]#

Returns the label of path2audio_file from the classes the model was trained on.

Parameters:
  • path2audio_file (str) – Path to audio wav file.

  • segment_duration (float) – Random sample duration in seconds.

  • num_segments (int) – Number of segments of the file to use for majority vote.

  • random_seed (int) – Seed for generating the starting position of the segment.

Returns:

label corresponding to the trained model

Return type:

label
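A hedged sketch of majority-vote classification over several random segments of one file; the path and segment settings are illustrative.

# Predict the trained class label for one file (path is hypothetical),
# using a majority vote over three random 3-second segments.
label = speaker_model.get_label(
    "/data/spk1_utt1.wav",
    segment_duration=3.0,
    num_segments=3,
    random_seed=42,
)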

infer_file(path2audio_file)[source]#
Parameters:

path2audio_file – path to an audio wav file

Returns:

emb: speaker embeddings (audio representations)

logits: logits of the final layer

Return type:

emb

infer_segment(segment)[source]#
Parameters:

segment – segment of audio file

Returns:

emb: speaker embeddings (audio representations)

logits: logits of the final layer

Return type:

emb

property input_types: Dict[str, NeuralType] | None#

Define these to enable input neural type checks

classmethod list_available_models() List[PretrainedModelInfo][source]#

This method returns a list of pre-trained models which can be instantiated directly from NVIDIA’s NGC cloud.

Returns:

List of available pre-trained models.

multi_evaluation_epoch_end(
outputs,
dataloader_idx: int = 0,
tag: str = 'val',
)[source]#
multi_test_epoch_end(
outputs,
dataloader_idx: int = 0,
)[source]#

Adds support for multiple test datasets. Should be overridden by subclasses to obtain appropriate logs for each of the dataloaders.

Parameters:
  • outputs – Same as that provided by LightningModule.on_validation_epoch_end() for a single dataloader.

  • dataloader_idx – int representing the index of the dataloader.

Returns:

A dictionary of values, optionally containing a sub-dict log, such that the values in the log will be pre-pended by the dataloader prefix.

multi_validation_epoch_end(
outputs,
dataloader_idx: int = 0,
)[source]#

Adds support for multiple validation datasets. Should be overridden by subclasses to obtain appropriate logs for each of the dataloaders.

Parameters:
  • outputs – Same as that provided by LightningModule.on_validation_epoch_end() for a single dataloader.

  • dataloader_idx – int representing the index of the dataloader.

Returns:

A dictionary of values, optionally containing a sub-dict log, such that the values in the log will be pre-pended by the dataloader prefix.
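A hedged sketch of what a subclass override could look like; the per-batch output keys and the averaging are illustrative assumptions, and the 'log' sub-dict follows the convention described above.

# Assumes `import torch` at module level and that each entry of `outputs`
# is a dict produced by the corresponding validation step.
def multi_validation_epoch_end(self, outputs, dataloader_idx: int = 0):
    # Average the per-batch losses collected for this dataloader; values
    # placed under 'log' are prefixed with the dataloader prefix.
    val_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
    return {"log": {"val_loss": val_loss}}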

property output_types: Dict[str, NeuralType] | None#

Define these to enable output neural type checks

pair_evaluation_step(
batch,
batch_idx,
dataloader_idx: int = 0,
tag: str = 'val',
)[source]#
pair_multi_eval_epoch_end(
outputs,
dataloader_idx: int = 0,
tag: str = 'val',
)[source]#
setup_test_data(
test_data_layer_params: DictConfig | Dict | None,
)[source]#

(Optionally) Sets up the data loader to be used in testing.

Parameters:

test_data_layer_config – test data layer parameters.

Returns:

setup_training_data(
train_data_layer_config: DictConfig | Dict | None,
)[source]#

Sets up the data loader to be used in training.

Parameters:

train_data_layer_config – training data layer parameters.

Returns:
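
A hedged sketch of a training data config; the exact keys depend on the underlying dataset class, and the manifest path and values shown here are illustrative assumptions.

from omegaconf import OmegaConf

# Illustrative training data config (keys and values are assumptions).
train_ds = OmegaConf.create({
    "manifest_filepath": "train_manifest.json",  # hypothetical path
    "sample_rate": 16000,
    "batch_size": 64,
    "shuffle": True,
})
speaker_model.setup_training_data(train_ds)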

setup_validation_data(
val_data_layer_config: DictConfig | Dict | None,
)[source]#

Sets up the data loader to be used in validation.

Parameters:

val_data_layer_config – validation data layer parameters.

Returns:

test_dataloader()[source]#

Get the test dataloader.

test_step(
batch,
batch_idx,
dataloader_idx: int = 0,
)[source]#

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test, you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

training_step(batch, batch_idx)[source]#

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.

  • None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:

def __init__(self):
    super().__init__()
    self.automatic_optimization = False


# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()
    # do training_step with decoder
    ...
    opt2.step()

Note

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.

validation_step(
batch,
batch_idx,
dataloader_idx: int = 0,
)[source]#

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest, such as accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate, you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

verify_speakers(
path2audio_file1,
path2audio_file2,
threshold=0.7,
)[source]#

Verify if two audio files are from the same speaker or not.

Parameters:
  • path2audio_file1 – path to audio wav file of speaker 1

  • path2audio_file2 – path to audio wav file of speaker 2

  • threshold – cosine similarity score used as a threshold to distinguish two embeddings (default = 0.7)

Returns:

True if both audio files are from same speaker, False otherwise
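A short usage sketch; the two wav paths are hypothetical.

# Compare two utterances; returns True when the cosine similarity of
# their embeddings exceeds the threshold (paths are hypothetical).
same_speaker = speaker_model.verify_speakers(
    "/data/spk1_utt1.wav",
    "/data/spk1_utt2.wav",
    threshold=0.7,
)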

verify_speakers_batch(
audio_files_pairs,
threshold=0.7,
batch_size=32,
sample_rate=16000,
device='cuda',
)[source]#

Verify whether the audio files in each provided pair are from the same speaker or not.

Parameters:
  • audio_files_pairs – list of tuples with audio file pairs to be verified

  • threshold – cosine similarity score used as a threshold to distinguish two embeddings (default = 0.7)

  • batch_size – batch size to perform batch inference

  • sample_rate – sample rate of the audio files

  • device – compute device to perform operations.

Returns:

For each pair, True if both audio files are from the same speaker, False otherwise.
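A hedged sketch of batched verification; the file paths are hypothetical.

# Each tuple pairs one enrollment utterance with one test utterance
# (paths are hypothetical).
pairs = [
    ("/data/spk1_utt1.wav", "/data/spk1_utt2.wav"),
    ("/data/spk1_utt1.wav", "/data/spk2_utt1.wav"),
]
decisions = speaker_model.verify_speakers_batch(
    pairs,
    threshold=0.7,
    batch_size=32,
    sample_rate=16000,
    device="cuda",
)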