Core APIs

class nemo.core.ModelPT(*args: Any, **kwargs: Any)

Bases: pytorch_lightning.LightningModule, nemo.core.classes.common.Model

Interface for PyTorch Lightning-based NeMo models.

on_fit_start() → None

register_artifact(config_path: str, src: str, verify_src_exists: bool = True)

Register model artifacts with this function. These artifacts (files) will be included inside the .nemo file when model.save_to("mymodel.nemo") is called.

How it works:

  1. It always returns an existing absolute path, which can be used during the model constructor call.

    EXCEPTION: if src is None or "", nothing will be done and src will be returned as-is.

  2. It will add (config_path, model_utils.ArtifactItem()) pair to self.artifacts

    If "src" is a local existing path:
        it will be returned in absolute path form.
    elif "src" starts with "nemo_file:unique_artifact_name":
        .nemo will be untarred to a temporary folder location and an actual existing path will be returned.
    else:
        an error will be raised.


WARNING: use .register_artifact calls in your models’ constructors. The returned path is not guaranteed to exist after you have exited your model’s constructor.

Parameters
  • config_path (str) – Artifact key. Usually corresponds to the model config.

  • src (str) – Path to artifact.

  • verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.

Returns

If src is not None or empty, it returns an absolute path which is guaranteed to exist for the lifetime of the model instance.

Return type

str
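
For illustration, a minimal sketch of a constructor that registers an artifact; the "tokenizer.model_path" artifact key, the cfg layout, and the model class are hypothetical:

    from nemo.core import ModelPT

    class MyModel(ModelPT):  # hypothetical model; other required methods omitted
        def __init__(self, cfg, trainer=None):
            super().__init__(cfg=cfg, trainer=trainer)
            # the returned path is absolute and valid while the constructor runs;
            # the referenced file is bundled into the archive on save_to()
            tokenizer_path = self.register_artifact(
                config_path="tokenizer.model_path",    # hypothetical artifact key
                src=cfg.tokenizer.model_path,          # hypothetical config entry pointing to a local file
                verify_src_exists=True,
            )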

has_artifacts() → bool

Returns True if model has artifacts registered

has_native_or_submodules_artifacts() → bool

Returns True if it has artifacts or any of the submodules have artifacts

has_nemo_submodules() → bool

Returns True if it has any registered NeMo submodules

register_nemo_submodule(name: str, config_field: str, model: nemo.core.classes.modelPT.ModelPT) → None

Adds a NeMo model as a submodule. The submodule can be accessed via the name attribute on the parent NeMo model this submodule was registered on (self). In the saving process, the whole parent model (self) is saved as a single model together with the artifacts of the child submodule; the submodule config is saved to the config_field of the parent model's config. This method is necessary to create a nested model, e.g.

    class ParentModel(ModelPT):
        def __init__(self, cfg, trainer=None):
            super().__init__(cfg=cfg, trainer=trainer)

            # annotate type for autocompletion and type checking (optional)
            self.child_model: Optional[ChildModel] = None
            if cfg.get("child_model") is not None:
                self.register_nemo_submodule(
                    name="child_model",
                    config_field="child_model",
                    model=ChildModel(self.cfg.child_model, trainer=trainer),
                )
            # ... other code

Parameters
  • name – name of the attribute for the submodule

  • config_field – field in config, where submodule config should be saved

  • model – NeMo model, instance of ModelPT

named_nemo_modules(prefix_name: str = '', prefix_config: str = '') → Iterator[Tuple[str, str, nemo.core.classes.modelPT.ModelPT]]

Returns an iterator over all NeMo submodules recursively, yielding tuples of (attribute path, path in config, submodule), starting from the core module

Parameters
  • prefix_name – prefix for the name path

  • prefix_config – prefix for the path in config

Returns

Iterator over (attribute path, path in config, submodule), starting from (prefix, self)

save_to(save_path: str)
Saves model instance (weights and configuration) into a .nemo file.

You can use the restore_from() method to fully restore the instance from the .nemo file.

The .nemo file is an archive (tar.gz) with the following contents:

model_config.yaml – model configuration in .yaml format. You can deserialize this into the cfg argument for the model's constructor.

model_weights.ckpt – model checkpoint

Parameters

save_path – Path to .nemo file where model instance should be saved
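
A minimal round-trip sketch; the file name is a placeholder and model stands for any instantiated ModelPT subclass:

    model.save_to("mymodel.nemo")

    # later, rebuild the same model (weights and config) from the archive
    restored = model.__class__.restore_from("mymodel.nemo")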

classmethod restore_from(restore_path: str, override_config_path: Optional[Union[omegaconf.OmegaConf, str]] = None, map_location: Optional[torch.device] = None, strict: bool = True, return_config: bool = False, save_restore_connector: Optional[nemo.core.connectors.save_restore_connector.SaveRestoreConnector] = None, trainer: Optional[pytorch_lightning.Trainer] = None)

Restores model instance (weights and configuration) from .nemo file.

Parameters
  • restore_path – path to .nemo file from which model should be instantiated

  • override_config_path – path to a yaml config that will override the internal config file or an OmegaConf / DictConfig object representing the model config.

  • map_location – Optional torch.device() to map the instantiated model to a device. By default (None), it will select a GPU if available, falling back to CPU otherwise.

  • strict – Passed to load_state_dict. By default True.

  • return_config – If set to true, will return just the underlying config of the restored model as an OmegaConf DictConfig object without instantiating the model.

  • trainer – Optional, a pytorch lightning Trainer object that will be forwarded to the instantiated model’s constructor.

  • save_restore_connector (SaveRestoreConnector) – Can be overridden to add custom save and restore logic.

Example

    model = nemo.collections.asr.models.EncDecCTCModel.restore_from('asr.nemo')
    assert isinstance(model, nemo.collections.asr.models.EncDecCTCModel)

Returns

An instance of type cls or its underlying config (if return_config is set).

classmethod load_from_checkpoint(checkpoint_path: str, *args, map_location: Optional[Union[Dict[str, str], str, torch.device, int, Callable]] = None, hparams_file: Optional[str] = None, strict: bool = True, **kwargs)

Loads a ModelPT from a checkpoint, with some additional handling during restoration. For documentation, please refer to LightningModule.load_from_checkpoint().

abstract setup_training_data(train_data_config: Union[omegaconf.DictConfig, Dict])

Sets up the data loader to be used in training.

Parameters

train_data_config – training data layer parameters.

abstract setup_validation_data(val_data_config: Union[omegaconf.DictConfig, Dict])

Sets up the data loader to be used in validation.

Parameters

val_data_config – validation data layer parameters.

setup_test_data(test_data_config: Union[omegaconf.DictConfig, Dict])

(Optionally) Sets up the data loader to be used in testing.

Parameters

test_data_config – test data layer parameters.

setup_multiple_validation_data(val_data_config: Union[omegaconf.DictConfig, Dict])

(Optionally) Sets up the data loader to be used in validation, with support for multiple data loaders.

Parameters

val_data_config – validation data layer parameters.

setup_multiple_test_data(test_data_config: Union[omegaconf.DictConfig, Dict])

(Optionally) Sets up the data loader to be used in testing, with support for multiple data loaders.

Parameters

test_data_config – test data layer parameters.

setup_optimization(optim_config: Optional[Union[omegaconf.DictConfig, Dict]] = None, optim_kwargs: Optional[Dict[str, Any]] = None)

Prepares an optimizer from a string name and its optional config parameters.

Parameters
  • optim_config

    A dictionary containing the following keys:

    • ”lr”: mandatory key for learning rate. Will raise ValueError if not provided.

    • ”optimizer”: string name pointing to one of the available optimizers in the registry. If not provided, defaults to “adam”.

    • ”opt_args”: Optional list of strings, in the format “arg_name=arg_value”. The list of “arg_value” will be parsed and a dictionary of optimizer kwargs will be built and supplied to instantiate the optimizer.

  • optim_kwargs – A dictionary with additional kwargs for the optimizer. Used for non-primitive types that are not compatible with OmegaConf.
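
As an illustration, a hedged sketch of an optim_config built programmatically; the key names follow the description above, while the concrete values are arbitrary:

    from omegaconf import OmegaConf

    optim_config = OmegaConf.create({
        "optimizer": "adam",                  # name from the optimizer registry
        "lr": 1e-3,                           # mandatory learning rate
        "opt_args": ["weight_decay=0.001"],   # "arg_name=arg_value" strings parsed into optimizer kwargs
    })
    model.setup_optimization(optim_config=optim_config)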

setup_optimizer_param_groups()

Used to create param groups for the optimizer. As an example, this can be used to specify per-layer learning rates:

    optim.SGD([
        {'params': model.base.parameters()},
        {'params': model.classifier.parameters(), 'lr': 1e-3}
    ], lr=1e-2, momentum=0.9)

See https://pytorch.org/docs/stable/optim.html for more information. By default, ModelPT will use self.parameters(). Override this method to add custom param groups. In the config file, add ‘optim_param_groups’ to support different LRs for different components (unspecified params will use the default LR):

    model:
      optim_param_groups:
        encoder:
          lr: 1e-4
          momentum: 0.8
        decoder:
          lr: 1e-3
      optim:
        lr: 3e-3
        momentum: 0.9
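
Alternatively, a sketch of overriding the method directly in a ModelPT subclass; it assumes the model exposes encoder and decoder submodules, and that the _optimizer_param_groups attribute name follows the default implementation (verify against your NeMo version):

    def setup_optimizer_param_groups(self):
        # encoder gets a smaller learning rate, decoder falls back to the optimizer's default LR
        self._optimizer_param_groups = [
            {"params": self.encoder.parameters(), "lr": 1e-4},
            {"params": self.decoder.parameters()},
        ]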

configure_optimizers()

propagate_model_guid()

Propagates the model GUID to all submodules, recursively.

setup(stage: Optional[str] = None)

Called at the beginning of fit, validate, test, or predict. This is called on every process when using DDP.

Parameters

stage – fit, validate, test or predict

train_dataloader()

val_dataloader()

test_dataloader()

on_validation_epoch_end() → Optional[Dict[str, Dict[str, torch.Tensor]]]

Default hook for the end of the validation epoch, which automatically supports multiple data loaders via multi_validation_epoch_end.

If multi-dataset support is not required, override this method entirely in the subclass. In such a case, there is no need to implement multi_validation_epoch_end either.

Note

If more than one data loader exists, and they all provide val_loss, only the val_loss of the first data loader will be used by default. This default can be changed by passing the special key val_dl_idx: int inside the validation_ds config.

Parameters

outputs – Single or nested list of tensor outputs from one or more data loaders.

Returns

A dictionary containing the union of all items from individual data_loaders, along with merged logs from all data loaders.

on_test_epoch_end() → Optional[Dict[str, Dict[str, torch.Tensor]]]

Default hook for the end of the test epoch, which automatically supports multiple data loaders via multi_test_epoch_end.

If multi-dataset support is not required, override this method entirely in the subclass. In such a case, there is no need to implement multi_test_epoch_end either.

Note

If more than one data loader exists, and they all provide test_loss, only the test_loss of the first data loader will be used by default. This default can be changed by passing the special key test_dl_idx: int inside the test_ds config.

Parameters

outputs – Single or nested list of tensor outputs from one or more data loaders.

Returns

A dictionary containing the union of all items from individual data_loaders, along with merged logs from all data loaders.

multi_validation_epoch_end(outputs: List[Dict[str, torch.Tensor]], dataloader_idx: int = 0) → Optional[Dict[str, Dict[str, torch.Tensor]]]

Adds support for multiple validation datasets. Should be overridden by the subclass so as to obtain appropriate logs for each of the dataloaders.

Parameters
  • outputs – Same as that provided by LightningModule.on_validation_epoch_end() for a single dataloader.

  • dataloader_idx – int representing the index of the dataloader.

Returns

A dictionary of values, optionally containing a sub-dict log, such that the values in the log will be prepended with the dataloader prefix.
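
A hedged sketch of a subclass override that averages a per-batch loss for one dataloader; it assumes each validation_step returned a dict containing "val_loss":

    import torch

    def multi_validation_epoch_end(self, outputs, dataloader_idx: int = 0):
        # `outputs` holds the dicts returned by validation_step for this dataloader
        val_loss_mean = torch.stack([x["val_loss"] for x in outputs]).mean()
        # values under 'log' are prefixed with the dataloader prefix before logging
        return {"val_loss": val_loss_mean, "log": {"val_loss": val_loss_mean}}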

multi_test_epoch_end(outputs: List[Dict[str, torch.Tensor]], dataloader_idx: int = 0) → Optional[Dict[str, Dict[str, torch.Tensor]]]

Adds support for multiple test datasets. Should be overridden by the subclass so as to obtain appropriate logs for each of the dataloaders.

Parameters
  • outputs – Same as that provided by LightningModule.on_test_epoch_end() for a single dataloader.

  • dataloader_idx – int representing the index of the dataloader.

Returns

A dictionary of values, optionally containing a sub-dict log, such that the values in the log will be prepended with the dataloader prefix.

get_validation_dataloader_prefix(dataloader_idx: int = 0) → str

Get the name of one or more data loaders, which will be prepended to all logs.

Parameters

dataloader_idx – Index of the data loader.

Returns

str name of the data loader at index provided.

get_test_dataloader_prefix(dataloader_idx: int = 0) → str

Get the name of one or more data loaders, which will be prepended to all logs.

Parameters

dataloader_idx – Index of the data loader.

Returns

str name of the data loader at index provided.

load_part_of_state_dict(state_dict, include, exclude, load_from_string=None)

maybe_init_from_pretrained_checkpoint(cfg: omegaconf.OmegaConf, map_location: str = 'cpu')

Initializes a given model with the parameters obtained via specific config arguments. The state dict of the provided model will be updated with strict=False, so as not to require an exact match of model parameters.

Initializations:

init_from_nemo_model: Str path to a .nemo model in order to load state_dict from single nemo file; if loading from multiple files, pass in a dict where the values have the following fields:

path: Str path to .nemo model

include: Optional list of strings, at least one of which needs to be contained in parameter name to be loaded from this .nemo file. Default: everything is included.

exclude: Optional list of strings, which can be used to exclude any parameter containing one of these strings from being loaded from this .nemo file. Default: nothing is excluded.

Hydra usage example:

    init_from_nemo_model:
      model0:
        path: <path/to/model1>
        include: ["encoder"]
      model1:
        path: <path/to/model2>
        include: ["decoder"]
        exclude: ["embed"]

init_from_pretrained_model: Str name of a pretrained model checkpoint (obtained via cloud).

The model will be downloaded (or a cached copy will be used), instantiated and then its state dict will be extracted. If loading from multiple models, you can pass in a dict with the same format as for init_from_nemo_model, except with “name” instead of “path”

init_from_ptl_ckpt: Str name of a PyTorch Lightning checkpoint file. It will be loaded and the state dict will be extracted. If loading from multiple files, you can pass in a dict with the same format as for init_from_nemo_model.

Parameters
  • cfg – The config used to instantiate the model. It need only contain one of the above keys.

  • map_location – str or torch.device() which represents where the intermediate state dict (from the pretrained model or checkpoint) will be loaded.
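
For illustration, the equivalent call from Python; the config key mirrors the options above and the .nemo path is a placeholder:

    from omegaconf import OmegaConf

    init_cfg = OmegaConf.create({"init_from_nemo_model": "/path/to/pretrained.nemo"})  # placeholder path
    model.maybe_init_from_pretrained_checkpoint(cfg=init_cfg, map_location="cpu")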

classmethod extract_state_dict_from(restore_path: str, save_dir: str, split_by_module: bool = False, save_restore_connector: Optional[nemo.core.connectors.save_restore_connector.SaveRestoreConnector] = None)

Extract the state dict(s) from a provided .nemo tarfile and save it to a directory.

Parameters
  • restore_path – path to .nemo file from which state dict(s) should be extracted

  • save_dir – directory in which the saved state dict(s) should be stored

  • split_by_module – bool flag, which determines whether the output checkpoint should be for the entire Model, or for the individual modules that comprise the Model

  • save_restore_connector (SaveRestoreConnector) – Can be overridden to add custom save and restore logic.

Example

To convert the .nemo tarfile into a single Model-level PyTorch checkpoint:

    state_dict = nemo.collections.asr.models.EncDecCTCModel.extract_state_dict_from('asr.nemo', './asr_ckpts')

To restore a model from a Model-level checkpoint:

    model = nemo.collections.asr.models.EncDecCTCModel(cfg)  # or any other method of restoration
    model.load_state_dict(torch.load("./asr_ckpts/model_weights.ckpt"))

To convert the .nemo tarfile into multiple Module-level PyTorch checkpoints:

    state_dict = nemo.collections.asr.models.EncDecCTCModel.extract_state_dict_from('asr.nemo', './asr_ckpts', split_by_module=True)

To restore a module from a Module-level checkpoint:

    model = nemo.collections.asr.models.EncDecCTCModel(cfg)  # or any other method of restoration

    # load the individual components
    model.preprocessor.load_state_dict(torch.load("./asr_ckpts/preprocessor.ckpt"))
    model.encoder.load_state_dict(torch.load("./asr_ckpts/encoder.ckpt"))
    model.decoder.load_state_dict(torch.load("./asr_ckpts/decoder.ckpt"))

Returns

The state dict that was loaded from the original .nemo checkpoint

prepare_test(trainer: pytorch_lightning.Trainer) → bool

Helper method to check whether the model can safely be tested on a dataset after training (or loading a checkpoint).

    trainer = Trainer()
    if model.prepare_test(trainer):
        trainer.test(model)

Returns

bool which declares the model safe to test. Provides warnings if it has to return False to guide the user.

set_trainer(trainer: pytorch_lightning.Trainer)

Set an instance of Trainer object.

Parameters

trainer – PyTorch Lightning Trainer object.

set_world_size(trainer: pytorch_lightning.Trainer)

Determines the world size from the PyTorch Lightning Trainer and then updates AppState.

Parameters

trainer (Trainer) – PyTorch Lightning Trainer object

summarize(max_depth: int = 1) → pytorch_lightning.utilities.model_summary.ModelSummary

Summarize this LightningModule.

Parameters

max_depth – The maximum depth of layer nesting that the summary will include. A value of 0 turns the layer summary off. Default: 1.

Returns

The model summary object

property num_weights

Utility property that returns the total number of parameters of the Model.

trainer()

property cfg

Property that holds the finalized internal config of the model.

property validation_step_outputs

Cached outputs of validation_step. It can be a list of items (for single data loader) or a list of lists (for multiple data loaders).

Returns

List of outputs of validation_step.

property test_step_outputs

Cached outputs of test_step. It can be a list of items (for single data loader) or a list of lists (for multiple data loaders).

Returns

List of outputs of test_step.

classmethod update_save_restore_connector(save_restore_connector)

on_train_start()

PyTorch Lightning hook: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#on-train-start We use it here to copy the relevant config for dynamic freezing.

on_train_batch_start(batch: Any, batch_idx: int, unused: int = 0) → Optional[int]

PyTorch Lightning hook: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#on-train-batch-start We use it here to enable nsys profiling and dynamic freezing.

on_train_batch_end(outputs, batch: Any, batch_idx: int, unused: int = 0) → None

PyTorch Lightning hook: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#on-train-batch-end We use it here to enable nsys profiling.

on_train_end()

PyTorch Lightning hook: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#on-train-end We use it here to cleanup the dynamic freezing config.

on_test_end()

PyTorch Lightning hook: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#on-test-end

on_predict_end()

PyTorch Lightning hook: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#on-test-end

cuda(device=None)
PTL is overriding this method and changing the pytorch behavior of a module.

The PTL LightningModule override will move the module to device 0 if device is None. See the PTL method here: https://github.com/Lightning-AI/lightning/blob/master/src/pytorch_lightning/core/mixins/device_dtype_mixin.py#L113

Here we are overriding this to maintain the default PyTorch nn.Module behavior: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L728

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

self

Return type

Module

class nemo.core.NeuralModule(*args: Any, **kwargs: Any)

Bases: torch.nn.Module, nemo.core.classes.common.Typing, nemo.core.classes.common.Serialization, nemo.core.classes.common.FileIO

Abstract class offering interface shared between all PyTorch Neural Modules.

property num_weights

Utility property that returns the total number of parameters of NeuralModule.

input_example(max_batch=None, max_dim=None)

Override this method if random inputs won't work.

Returns

A tuple sample of valid input data.

freeze() → None

Freeze all params for inference.

unfreeze() → None

Unfreeze all parameters for training.

as_frozen()

Context manager which temporarily freezes a module, yields control and finally unfreezes the module.
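
A short usage sketch, assuming module is any NeuralModule and batch is a valid input:

    # run with temporarily frozen parameters; they are unfrozen again on exit
    with module.as_frozen():
        output = module(batch)

    # or freeze/unfreeze explicitly around an inference section
    module.freeze()
    output = module(batch)
    module.unfreeze()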

class nemo.core.Typing

Bases: abc.ABC

An interface which endows a module with neural types.

property input_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]

Define these to enable input neural type checks

property output_types: Optional[Dict[str, nemo.core.neural_types.neural_type.NeuralType]]

Define these to enable output neural type checks
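
A hedged sketch of a NeuralModule defining both properties so the typecheck decorator can validate its forward call; the axis layout and element types are illustrative choices, not a prescribed pattern:

    import torch
    from nemo.core import NeuralModule
    from nemo.core.classes.common import typecheck
    from nemo.core.neural_types import NeuralType
    from nemo.core.neural_types.elements import EncodedRepresentation, LogitsType

    class SimpleHead(NeuralModule):
        def __init__(self, hidden: int, num_classes: int):
            super().__init__()
            self.linear = torch.nn.Linear(hidden, num_classes)

        @property
        def input_types(self):
            return {"encoded": NeuralType(('B', 'T', 'D'), EncodedRepresentation())}

        @property
        def output_types(self):
            return {"logits": NeuralType(('B', 'T', 'D'), LogitsType())}

        @typecheck()
        def forward(self, encoded):
            return self.linear(encoded)

    # typecheck-wrapped calls must pass all arguments as keywords:
    # head = SimpleHead(hidden=128, num_classes=10)
    # logits = head(encoded=torch.randn(4, 50, 128))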

_validate_input_types(input_types=None, ignore_collections=False, **kwargs)

This function does a few things.

  1. It ensures that len(self.input_types <non-optional>) <= len(kwargs) <= len(self.input_types).

  2. For each (keyword name, keyword value) passed as input to the wrapped function:
    • Check if the keyword name exists in the list of valid self.input_types names.

    • Check if keyword value has the neural_type property.
      • If it does, then perform a comparative check and assert that neural types are compatible (SAME or GREATER).

    • Check if keyword value is a container type (list or tuple). If yes, then perform the elementwise test of neural type above on each element of the nested structure, recursively.

Parameters
  • input_types – Either the input_types defined at class level, or the local function overridden type definition.

  • ignore_collections – For backward compatibility, container support can be disabled explicitly using this flag. When set to True, all nesting is ignored and nest-depth checks are skipped.

  • kwargs – Dictionary of argument_name:argument_value pairs passed to the wrapped function upon call.

_attach_and_validate_output_types(out_objects, ignore_collections=False, output_types=None)

This function does a few things.

  1. It ensures that len(out_object) == len(self.output_types).

  2. If the output is a tensor (or list/tuple of list/tuple … of tensors), it attaches a neural_type to it. For objects without the neural_type attribute, such as python objects (dictionaries and lists, primitive data types, structs), no neural_type is attached.

Note: tensor.neural_type is only checked during _validate_input_types which is called prior to forward().

Parameters
  • output_types – Either the output_types defined at class level, or the local function overridden type definition.

  • ignore_collections – For backward compatibility, container support can be disabled explicitly using this flag. When set to True, all nesting is ignored and nest-depth checks are skipped.

  • out_objects – The outputs of the wrapped function.

__check_neural_type(obj, metadata: nemo.core.classes.common.TypecheckMetadata, depth: int, name: Optional[str] = None)

Recursively tests whether the obj satisfies the semantic neural type assertion. Can include shape checks if shape information is provided.

Parameters
  • obj – Any python object that can be assigned a value.

  • metadata – TypecheckMetadata object.

  • depth – Current depth of recursion.

  • name – Optional name of the source obj, used when an error occurs.

__attach_neural_type(obj, metadata: nemo.core.classes.common.TypecheckMetadata, depth: int, name: Optional[str] = None)

Recursively attach neural types to a given object - as long as it can be assigned some value.

Parameters
  • obj – Any python object that can be assigned a value.

  • metadata – TypecheckMetadata object.

  • depth – Current depth of recursion.

  • name – Optional name of the source obj, used when an error occurs.


class nemo.core.Serialization

Bases: abc.ABC

classmethod from_config_dict(config: DictConfig, trainer: Optional[Trainer] = None)

Instantiates an object using a DictConfig-based configuration.

to_config_dict() → omegaconf.DictConfig

Returns the object's configuration as a config dictionary.
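
A brief round-trip sketch, assuming module is any Serialization subclass (for example a NeuralModule) whose constructor arguments are captured in its config:

    from omegaconf import OmegaConf

    cfg = module.to_config_dict()               # DictConfig describing how to rebuild the module
    print(OmegaConf.to_yaml(cfg))

    clone = type(module).from_config_dict(cfg)  # new instance; weights are freshly initialized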


class nemo.core.FileIO

Bases: abc.ABC

save_to(save_path: str)

Standardized method to save a tarfile containing the checkpoint, config, and any additional artifacts. Implemented via nemo.core.connectors.save_restore_connector.SaveRestoreConnector.save_to().

Parameters

save_path – str, path to where the file should be saved.

classmethod restore_from(restore_path: str, override_config_path: Optional[str] = None, map_location: Optional[torch.device] = None, strict: bool = True, return_config: bool = False, trainer: Optional[Trainer] = None, save_restore_connector: nemo.core.connectors.save_restore_connector.SaveRestoreConnector = None)

Restores model instance (weights and configuration) from a .nemo file

Parameters
  • restore_path – path to .nemo file from which model should be instantiated

  • override_config_path – path to a yaml config that will override the internal config file or an OmegaConf / DictConfig object representing the model config.

  • map_location – Optional torch.device() to map the instantiated model to a device. By default (None), it will select a GPU if available, falling back to CPU otherwise.

  • strict – Passed to load_state_dict. By default True

  • return_config – If set to true, will return just the underlying config of the restored model as an OmegaConf DictConfig object without instantiating the model.

  • trainer – An optional Trainer object, passed to the model constructor.

  • save_restore_connector – An optional SaveRestoreConnector object that defines the implementation of the restore_from() method.

classmethod from_config_file(path2yaml_file: str)

Instantiates an instance of a NeMo model from a YAML config file. Weights will be initialized randomly.

Parameters

path2yaml_file – path to yaml file with model configuration

to_config_file(path2yaml_file: str)

Saves the current instance's configuration to a YAML config file. Weights will not be saved.

Parameters

path2yaml_file – path to yaml file where the model configuration will be saved

class nemo.core.connectors.save_restore_connector.SaveRestoreConnector

Bases: object

save_to(model: nemo.core.classes.modelPT.ModelPT, save_path: str)

Saves model instance (weights and configuration) into a .nemo file. You can use the restore_from() method to fully restore the instance from the .nemo file.

The .nemo file is an archive (tar.gz) with the following contents:

model_config.yaml – model configuration in .yaml format. You can deserialize this into the cfg argument for the model's constructor.

model_weights.ckpt – model checkpoint

Parameters
  • model – ModelPT object to be saved.

  • save_path – Path to .nemo file where model instance should be saved

Returns

Path to .nemo file where model instance was saved (same as save_path argument) or None if not rank 0

The path can be a directory if the flag pack_nemo_file is set to False.

Return type

str

load_config_and_state_dict(calling_cls, restore_path: str, override_config_path: Optional[Union[omegaconf.OmegaConf, str]] = None, map_location: Optional[torch.device] = None, strict: bool = True, return_config: bool = False, trainer: Optional[pytorch_lightning.trainer.trainer.Trainer] = None)

Restores model instance (weights and configuration) from a .nemo file.

Parameters
  • restore_path – path to .nemo file from which model should be instantiated

  • override_config_path – path to a yaml config that will override the internal config file or an OmegaConf / DictConfig object representing the model config.

  • map_location – Optional torch.device() to map the instantiated model to a device. By default (None), it will select a GPU if available, falling back to CPU otherwise.

  • strict – Passed to load_state_dict. By default True

  • return_config – If set to true, will return just the underlying config of the restored model as an OmegaConf DictConfig object without instantiating the model.

Example

    model = nemo.collections.asr.models.EncDecCTCModel.restore_from('asr.nemo')
    assert isinstance(model, nemo.collections.asr.models.EncDecCTCModel)

Returns

An instance of type cls or its underlying config (if return_config is set).

modify_state_dict(conf, state_dict)

Utility method that allows modifying the state dict before loading parameters into a model.

Parameters
  • conf – A model-level OmegaConf object.

  • state_dict – The state dict restored from the checkpoint.

Returns

A potentially modified state dict.

load_instance_with_state_dict(instance, state_dict, strict)

Utility method that loads a model instance with the (potentially modified) state dict.

Parameters
  • instance – ModelPT subclass instance.

  • state_dict – The state dict (which may have been modified)

  • strict – Bool, whether to perform strict checks when loading the state dict.

restore_from(calling_cls, restore_path: str, override_config_path: Optional[Union[omegaconf.OmegaConf, str]] = None, map_location: Optional[torch.device] = None, strict: bool = True, return_config: bool = False, trainer: Optional[pytorch_lightning.trainer.trainer.Trainer] = None)

Restores model instance (weights and configuration) from a .nemo file.

Parameters
  • restore_path – path to .nemo file from which model should be instantiated

  • override_config_path – path to a yaml config that will override the internal config file or an OmegaConf / DictConfig object representing the model config.

  • map_location – Optional torch.device() to map the instantiated model to a device. By default (None), it will select a GPU if available, falling back to CPU otherwise.

  • strict – Passed to load_state_dict. By default True

  • return_config – If set to true, will return just the underlying config of the restored model as an OmegaConf DictConfig object without instantiating the model.

  • trainer – An optional Trainer object, passed to the model constructor.

Example

    model = nemo.collections.asr.models.EncDecCTCModel.restore_from('asr.nemo')
    assert isinstance(model, nemo.collections.asr.models.EncDecCTCModel)

Returns

An instance of type cls or its underlying config (if return_config is set).

extract_state_dict_from(restore_path: str, save_dir: str, split_by_module: bool = False)

Extract the state dict(s) from a provided .nemo tarfile and save it to a directory.

Parameters
  • restore_path – path to .nemo file from which state dict(s) should be extracted

  • save_dir – directory in which the saved state dict(s) should be stored

  • split_by_module – bool flag, which determines whether the output checkpoint should be for the entire Model, or for the individual modules that comprise the Model

Example

To convert the .nemo tarfile into a single Model-level PyTorch checkpoint:

    state_dict = nemo.collections.asr.models.EncDecCTCModel.extract_state_dict_from('asr.nemo', './asr_ckpts')

To restore a model from a Model-level checkpoint:

    model = nemo.collections.asr.models.EncDecCTCModel(cfg)  # or any other method of restoration
    model.load_state_dict(torch.load("./asr_ckpts/model_weights.ckpt"))

To convert the .nemo tarfile into multiple Module-level PyTorch checkpoints:

    state_dict = nemo.collections.asr.models.EncDecCTCModel.extract_state_dict_from('asr.nemo', './asr_ckpts', split_by_module=True)

To restore a module from a Module-level checkpoint:

    model = nemo.collections.asr.models.EncDecCTCModel(cfg)  # or any other method of restoration

    # load the individual components
    model.preprocessor.load_state_dict(torch.load("./asr_ckpts/preprocessor.ckpt"))
    model.encoder.load_state_dict(torch.load("./asr_ckpts/encoder.ckpt"))
    model.decoder.load_state_dict(torch.load("./asr_ckpts/decoder.ckpt"))

Returns

The state dict that was loaded from the original .nemo checkpoint

register_artifact(model, config_path: str, src: str, verify_src_exists: bool = True)

Register model artifacts with this function. These artifacts (files) will be included inside the .nemo file when model.save_to("mymodel.nemo") is called.

How it works:

  1. It always returns an existing absolute path, which can be used during the model constructor call.

    EXCEPTION: if src is None or "", nothing will be done and src will be returned as-is.

  2. It will add (config_path, model_utils.ArtifactItem()) pair to self.artifacts

    If "src" is a local existing path:
        it will be returned in absolute path form.
    elif "src" starts with "nemo_file:unique_artifact_name":
        .nemo will be untarred to a temporary folder location and an actual existing path will be returned.
    else:
        an error will be raised.


WARNING: use .register_artifact calls in your models’ constructors. The returned path is not guaranteed to exist after you have exited your model’s constructor.

Parameters
  • model – ModelPT object to register artifact for.

  • config_path (str) – Artifact key. Usually corresponds to the model config.

  • src (str) – Path to artifact.

  • verify_src_exists (bool) – If set to False, then the artifact is optional and register_artifact will return None even if src is not found. Defaults to True.

Returns

If src is not None or empty, it returns an absolute path which is guaranteed to exist for the lifetime of the model instance.

Return type

str

class nemo.core.classes.mixins.access_mixins.AccessMixin

Bases: abc.ABC

Allows access to output of intermediate layers of a model

register_accessible_tensor(name, tensor)

Register tensor for later use.

classmethod get_module_registry(module: torch.nn.Module)

Extract all registries from named submodules, return dictionary where the keys are the flattened module names, the values are the internal registry of each such module.

reset_registry(registry_key: Optional[str] = None)

Reset the registries of all named sub-modules

property access_cfg

Returns: The global access config shared across all access mixin modules.


class nemo.core.classes.mixins.hf_io_mixin.HuggingFaceFileIO

Bases: abc.ABC

Mixin that provides Hugging Face file IO functionality for NeMo models. It is usually implemented as a mixin to ModelPT.

This mixin provides the following functionality:

  • search_huggingface_models(): Search the hub programmatically via some model filter.

  • push_to_hf_hub(): Push a model to the hub.

classmethod get_hf_model_filter() → huggingface_hub.ModelFilter

Generates a filter for HuggingFace models.

Additionally includes default values of some metadata about results returned by the Hub.

Metadata:

resolve_card_info: Bool flag, if set, returns the model card metadata. Default: False.

limit_results: Optional int, limits the number of results returned.

Returns

A Hugging Face Hub ModelFilter object.

classmethod search_huggingface_models(model_filter: Optional[Union[huggingface_hub.ModelFilter, List[huggingface_hub.ModelFilter]]] = None) → List[huggingface_hub.hf_api.ModelInfo]

Should list all pre-trained models available via Hugging Face Hub.

The following metadata can be passed via the model_filter for additional results. Metadata:

resolve_card_info: Bool flag, if set, returns the model card metadata. Default: False.

limit_results: Optional int, limits the number of results returned.

    # You can replace <DomainSubclass> with any subclass of ModelPT.
    from nemo.core import ModelPT

    # Get default ModelFilter
    filt = <DomainSubclass>.get_hf_model_filter()

    # Make any modifications to the filter as necessary
    filt.language = [...]
    filt.task = ...
    filt.tags = [...]

    # Add any metadata to the filter as needed
    filt.limit_results = 5

    # Obtain model info
    model_infos = <DomainSubclass>.search_huggingface_models(model_filter=filt)

    # Browse through cards and select an appropriate one
    card = model_infos[0]

    # Restore model using `modelId` of the card.
    model = ModelPT.from_pretrained(card.modelId)

Parameters

model_filter – Optional ModelFilter or List[ModelFilter] (from Hugging Face Hub) that filters the returned list of compatible model cards, and selects all results from each filter. Users can then use model_card.modelId in from_pretrained() to restore a NeMo Model. If no ModelFilter is provided, uses the classes default filter as defined by get_hf_model_filter().

Returns

A list of ModelInfo entries.

push_to_hf_hub(repo_id: str, *, pack_nemo_file: bool = True, model_card: Union[huggingface_hub.ModelCard, None, object, str] = None, commit_message: str = 'Push model using huggingface_hub.', private: bool = False, api_endpoint: Optional[str] = None, token: Optional[str] = None, branch: Optional[str] = None, allow_patterns: Optional[Union[List[str], str]] = None, ignore_patterns: Optional[Union[List[str], str]] = None, delete_patterns: Optional[Union[List[str], str]] = None)

Upload model checkpoint to the Hub.

Use allow_patterns and ignore_patterns to precisely filter which files should be pushed to the hub. Use delete_patterns to delete existing remote files in the same commit. See [upload_folder] reference for more details.

Parameters
  • repo_id (str) – ID of the repository to push to (example: “username/my-model”).

  • pack_nemo_file (bool, optional, defaults to True) – Whether to pack the model checkpoint and configuration into a single .nemo file. If set to False, uploads the contents of the directory containing the model checkpoint and configuration plus additional artifacts.

  • model_card (ModelCard, optional) – Model card to upload with the model. If None, will use the model card template provided by the class itself via generate_model_card(). Any object that implements str(obj) can be passed here. Two keyword replacements are passed to generate_model_card(): model_name and repo_id. If the model card generates a string, and it contains {model_name} or {repo_id}, they will be replaced with the actual values.

  • commit_message (str, optional) – Message to commit while pushing.

  • private (bool, optional, defaults to False) – Whether the repository created should be private.

  • api_endpoint (str, optional) – The API endpoint to use when pushing the model to the hub.

  • token (str, optional) – The token to use as HTTP bearer authorization for remote files. By default, it will use the token cached when running huggingface-cli login.

  • branch (str, optional) – The git branch on which to push the model. This defaults to “main”.

  • allow_patterns (List[str] or str, optional) – If provided, only files matching at least one pattern are pushed.

  • ignore_patterns (List[str] or str, optional) – If provided, files matching any of the patterns are not pushed.

  • delete_patterns (List[str] or str, optional) – If provided, remote files matching any of the patterns will be deleted from the repo.

Returns

The url of the uploaded HF repo.
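
An illustrative call with a placeholder repository id; by default the token is read from the cached huggingface-cli login:

    # `model` is a trained ModelPT subclass instance
    url = model.push_to_hf_hub(
        repo_id="username/my-model",             # placeholder repo id
        pack_nemo_file=True,                     # upload a single .nemo archive
        commit_message="Upload NeMo checkpoint",
        private=True,
    )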

class nemo.core.classes.common.typecheck(input_types: Union[nemo.core.classes.common.typecheck.TypeState, Dict[str, nemo.core.neural_types.neural_type.NeuralType]] = TypeState.UNINITIALIZED, output_types: Union[nemo.core.classes.common.typecheck.TypeState, Dict[str, nemo.core.neural_types.neural_type.NeuralType]] = TypeState.UNINITIALIZED, ignore_collections: bool = False)

Bases: object

A decorator which performs input-output neural type checks, and attaches neural types to the output of the function that it wraps.

Requires that the class inherit from Typing in order to perform type checking, and will raise an error if that is not the case.

# Usage (Class level type support)

    @typecheck()
    def fn(self, arg1, arg2, ...):
        ...

# Usage (Function level type support)

    @typecheck(input_types=..., output_types=...)
    def fn(self, arg1, arg2, ...):
        ...

Points to be noted:

  1. The brackets () in @typecheck() are necessary.

    You will encounter a TypeError: __init__() takes 1 positional argument but X were given without those brackets.

  2. The function can take any number of positional arguments during definition.

    When you call this function, all arguments must be passed using kwargs only.

__call__(wrapped, instance: nemo.core.classes.common.Typing, args, kwargs)

Wrapper method that can be used on any function of a class that implements Typing. By default, it will utilize the input_types and output_types properties of the class inheriting Typing.

Local function level overrides can be provided by supplying dictionaries as arguments to the decorator.

Parameters
  • input_types – Union[TypeState, Dict[str, NeuralType]]. By default, uses the global input_types.

  • output_types – Union[TypeState, Dict[str, NeuralType]]. By default, uses the global output_types.

  • ignore_collections – Bool. Determines if container types should be asserted for depth checks, or if depth checks are skipped entirely.

class TypeState(value)

Bases: enum.Enum

Placeholder to denote the default value of type information provided. If the constructor of this decorator is used to override the class-level type definition, this enum value indicates that types will be overridden.

static set_typecheck_enabled(enabled: bool = True)

Global method to enable/disable typechecking.

Parameters

enabled – bool, when True will enable typechecking.

static disable_checks()

Context manager that temporarily disables type checking within its context.

static set_semantic_check_enabled(enabled: bool = True)

Global method to enable/disable semantic typechecking.

Parameters

enabled – bool, when True will enable semantic typechecking.

static disable_semantic_checks()

Context manager that temporarily disables semantic type checking within its context.

class nemo.core.neural_types.NeuralType(axes: Optional[Any] = None, elements_type: Optional[Any] = None, optional: bool = False)

Bases: object

This is the main class which represents the neural type concept. It is used to represent the types of inputs and outputs.

Parameters
  • axes (Optional[Tuple]) – a tuple of AxisTypes objects representing the semantics of what varying each axis means. You can use a short, string-based form here. For example: ('B', 'C', 'H', 'W') would correspond to the NCHW format frequently used in computer vision. ('B', 'T', 'D') is frequently used for signal processing and means [batch, time, dimension/channel].

  • elements_type (ElementType) – an instance of ElementType class representing the semantics of what is stored inside the tensor. For example: logits (LogitsType), log probabilities (LogprobType), etc.

  • optional (bool) – By default, this is False. If set to True, it means that the input to the port of this type can be optional.
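
For example, a hedged sketch of two common declarations; the element types come from nemo.core.neural_types.elements:

    from nemo.core.neural_types import NeuralType
    from nemo.core.neural_types.elements import AcousticEncodedRepresentation, LengthsType

    # [batch, dimension/channel, time] encoded acoustic features
    encoded_type = NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation())

    # per-example sequence lengths, marked optional for the port
    length_type = NeuralType(tuple('B'), LengthsType(), optional=True)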

compare(second) → nemo.core.neural_types.comparison.NeuralTypeComparisonResult

Performs neural type comparison of self with second. When you chain two modules’ inputs/outputs via __call__ method, this comparison will be called to ensure neural type compatibility.

compare_and_raise_error(parent_type_name, port_name, second_object)

Method compares definition of one type with another and raises an error if not compatible.


class nemo.core.neural_types.axes.AxisType(kind: nemo.core.neural_types.axes.AxisKindAbstract, size: Optional[int] = None, is_list=False)

Bases: object

This class represents axis semantics and (optionally) its dimensionality.

Parameters
  • kind (AxisKindAbstract) – what kind of axis it is. For example Batch, Height, etc.

  • size (int, optional) – specify if the axis should have a fixed size. By default it is set to None and you typically do not want to set it for Batch and Time.

  • is_list (bool, default=False) – whether this is a list or a tensor axis


class nemo.core.neural_types.elements.ElementType

Bases: abc.ABC

Abstract class defining semantics of the tensor elements. We are relying on Python for inheritance checking

property type_parameters: Dict[str, Any]

Override this property to parametrize your type. For example, you can specify a 'storage' type such as float, int, or bool with the 'dtype' keyword. Another example is if you want to represent a signal with a particular property (say, sample frequency), then you can put sample_freq->value in there. When two types are compared, their type_parameters must match.

property fields

This should be used to logically represent tuples/structures. For example, if you want to represent a bounding box (x, y, width, height) you can put a tuple with names ('x', 'y', 'w', 'h') in here. Under the hood this should be converted to the last tensor dimension of fixed size = len(fields). When two types are compared, their fields must match.


class nemo.core.neural_types.comparison.NeuralTypeComparisonResult(value)

Bases: enum.Enum

The result of comparing two neural type objects for compatibility when calling A.compare_to(B).

class nemo.utils.exp_manager.exp_manager(trainer: pytorch_lightning.Trainer, cfg: Optional[Union[omegaconf.DictConfig, Dict]] = None)


exp_manager is a helper function used to manage folders for experiments. It follows the pytorch lightning paradigm of exp_dir/model_or_experiment_name/version. If the lightning trainer has a logger, exp_manager will get exp_dir, name, and version from the logger. Otherwise it will use the exp_dir and name arguments to create the logging directory. exp_manager also allows for explicit folder creation via explicit_log_dir.

The version can be a datetime string or an integer. The datetime version can be disabled if use_datetime_version is set to False. exp_manager optionally creates TensorBoardLogger, WandBLogger, DLLogger, MLFlowLogger, ClearMLLogger, and ModelCheckpoint objects from pytorch lightning. It copies sys.argv and git information, if available, to the logging directory. It creates a log file for each process to log their output into.

exp_manager additionally has a resume feature (resume_if_exists) which can be used to continue training from the constructed log_dir. When you need to continue training repeatedly (for example on a cluster where you need multiple consecutive jobs), you need to avoid creating the version folders. Therefore, from v1.0.0, when resume_if_exists is set to True, creating the version folders is skipped.

Parameters
  • trainer (pytorch_lightning.Trainer) – The lightning trainer.

  • cfg (DictConfig, dict) –

    Can have the following keys:

    • explicit_log_dir (str, Path): Can be used to override exp_dir/name/version folder creation. Defaults to None, which will use exp_dir, name, and version to construct the logging directory.

    • exp_dir (str, Path): The base directory to create the logging directory. Defaults to None, which logs to ./nemo_experiments.

    • name (str): The name of the experiment. Defaults to None, which turns into "default" via name = name or "default".

    • version (str): The version of the experiment. Defaults to None, which uses either a datetime string or lightning's TensorboardLogger system of using version_{int}.

    • use_datetime_version (bool): Whether to use a datetime string for version. Defaults to True.

    • resume_if_exists (bool): Whether this experiment is resuming from a previous run. If True, it sets trainer._checkpoint_connector._ckpt_path so that the trainer should auto-resume. exp_manager will move files under log_dir to log_dir/run_{int}. Defaults to False. From v1.0.0, when resume_if_exists is True, we would not create version folders to make it easier to find the log folder for next runs.

    • resume_past_end (bool): exp_manager errors out if resume_if_exists is True and a checkpoint matching *end.ckpt exists, indicating a previous training run fully completed. This behaviour can be disabled, in which case the *end.ckpt will be loaded, by setting resume_past_end to True. Defaults to False.

    • resume_ignore_no_checkpoint (bool): exp_manager errors out if resume_if_exists is True and no checkpoint could be found. This behaviour can be disabled, in which case exp_manager will print a message and continue without restoring, by setting resume_ignore_no_checkpoint to True. Defaults to False.

    • resume_from_checkpoint (str): Can be used to specify a path to a specific checkpoint file to load from. This will override any checkpoint found when resume_if_exists is True. Defaults to None.

    • create_tensorboard_logger (bool): Whether to create a tensorboard logger and attach it to the pytorch lightning trainer. Defaults to True.

    • summary_writer_kwargs (dict): A dictionary of kwargs that can be passed to lightning's TensorboardLogger class. Note that log_dir is passed by exp_manager and cannot exist in this dict. Defaults to None.

    • create_wandb_logger (bool): Whether to create a Weights and Biases logger and attach it to the pytorch lightning trainer. Defaults to False.

    • wandb_logger_kwargs (dict): A dictionary of kwargs that can be passed to lightning's WandBLogger class. Note that name and project are required parameters if create_wandb_logger is True. Defaults to None.

    • create_mlflow_logger (bool): Whether to create an MLFlow logger and attach it to the pytorch lightning trainer. Defaults to False.

    • mlflow_logger_kwargs (dict): Optional parameters for the MLFlow logger.

    • create_dllogger_logger (bool): Whether to create a DLLogger logger and attach it to the pytorch lightning trainer. Defaults to False.

    • dllogger_logger_kwargs (dict): Optional parameters for the DLLogger logger.

    • create_clearml_logger (bool): Whether to create a ClearML logger and attach it to the pytorch lightning trainer. Defaults to False.

    • clearml_logger_kwargs (dict): Optional parameters for the ClearML logger.

    • create_checkpoint_callback (bool): Whether to create a ModelCheckpoint callback and attach it to the pytorch lightning trainer. The ModelCheckpoint saves the top 3 models with the best "val_loss", the most recent checkpoint under *last.ckpt, and the final checkpoint after training completes under *end.ckpt. Defaults to True.

    • create_early_stopping_callback (bool): Flag to decide if early stopping should be used to stop training. Defaults to False. See the EarlyStoppingParams dataclass above.

    • create_preemption_callback (bool): Flag to decide whether to enable the preemption callback to save checkpoints and exit training immediately upon preemption. Defaults to True.

    • files_to_copy (list): A list of files to copy to the experiment logging directory. Defaults to None, which copies no files.

    • log_local_rank_0_only (bool): Whether to only create log files for local rank 0. Defaults to False. Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.

    • log_global_rank_0_only (bool): Whether to only create log files for global rank 0. Defaults to False. Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.

    • max_time (str): The maximum wall clock time per run. This is intended to be used on clusters where you want a checkpoint to be saved after this specified time and be able to resume from that checkpoint. Defaults to None.

    • seconds_to_sleep (float): Seconds to sleep non-rank-0 processes for. Used to give enough time for rank 0 to initialize.

Returns

The final logging directory where logging files are saved. Usually the concatenation of exp_dir, name, and version.

Return type

log_dir (Path)
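
A minimal usage sketch; the experiment name and directory are placeholders, and the trainer is created without its own logger and checkpointing so that exp_manager can attach them:

    import pytorch_lightning as pl
    from nemo.utils.exp_manager import exp_manager

    trainer = pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
    log_dir = exp_manager(
        trainer,
        {
            "exp_dir": "./nemo_experiments",      # placeholder base directory
            "name": "my_experiment",              # placeholder experiment name
            "create_tensorboard_logger": True,
        },
    )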

class nemo.utils.exp_manager.ExpManagerConfig(explicit_log_dir: typing.Optional[str] = None, exp_dir: typing.Optional[str] = None, name: typing.Optional[str] = None, version: typing.Optional[str] = None, use_datetime_version: typing.Optional[bool] = True, resume_if_exists: typing.Optional[bool] = False, resume_past_end: typing.Optional[bool] = False, resume_ignore_no_checkpoint: typing.Optional[bool] = False, resume_from_checkpoint: typing.Optional[str] = None, create_tensorboard_logger: typing.Optional[bool] = True, summary_writer_kwargs: typing.Optional[typing.Dict[typing.Any, typing.Any]] = None, create_wandb_logger: typing.Optional[bool] = False, wandb_logger_kwargs: typing.Optional[typing.Dict[typing.Any, typing.Any]] = None, create_mlflow_logger: typing.Optional[bool] = False, mlflow_logger_kwargs: typing.Optional[nemo.utils.loggers.mlflow_logger.MLFlowParams] = <factory>, create_dllogger_logger: typing.Optional[bool] = False, dllogger_logger_kwargs: typing.Optional[nemo.utils.loggers.dllogger.DLLoggerParams] = <factory>, create_clearml_logger: typing.Optional[bool] = False, clearml_logger_kwargs: typing.Optional[nemo.utils.loggers.clearml_logger.ClearMLParams] = <factory>, create_neptune_logger: typing.Optional[bool] = False, neptune_logger_kwargs: typing.Optional[typing.Dict[typing.Any, typing.Any]] = None, create_checkpoint_callback: typing.Optional[bool] = True, checkpoint_callback_params: typing.Optional[nemo.utils.exp_manager.CallbackParams] = <factory>, create_early_stopping_callback: typing.Optional[bool] = False, early_stopping_callback_params: typing.Optional[nemo.utils.exp_manager.EarlyStoppingParams] = <factory>, create_preemption_callback: typing.Optional[bool] = True, files_to_copy: typing.Optional[typing.List[str]] = None, log_step_timing: typing.Optional[bool] = True, step_timing_kwargs: typing.Optional[nemo.utils.exp_manager.StepTimingParams] = <factory>, log_local_rank_0_only: typing.Optional[bool] = False, log_global_rank_0_only: typing.Optional[bool] = False, disable_validation_on_resume: typing.Optional[bool] = True, ema: typing.Optional[nemo.utils.exp_manager.EMAParams] = <factory>, max_time_per_run: typing.Optional[str] = None, seconds_to_sleep: float = 5)

Bases: object

Experiment Manager config for validation of passed arguments.

class nemo.core.classes.exportable.Exportable

Bases: abc.ABC

This interface should be implemented by particular classes derived from nemo.core.NeuralModule or nemo.core.ModelPT. It gives these entities the ability to be exported for deployment to formats such as ONNX.

Usage:

    # exporting pre-trained model to ONNX file for deployment.
    model.eval()
    model.to('cuda')  # or to('cpu') if you don't have GPU

    model.export('mymodel.onnx', [options])  # all arguments apart from output are optional.

export(output: str, input_example=None, verbose=False, do_constant_folding=True, onnx_opset_version=None, check_trace: Union[bool, List[torch.Tensor]] = False, dynamic_axes=None, check_tolerance=0.01, export_modules_as_functions=False, keep_initializers_as_inputs=None)

Exports the model to the specified format. The format is inferred from the file extension of the output file.

Parameters
  • output (str) – Output file name. The file extension must be .onnx, .pt, or .ts, and is used to select the export path of the model.

  • input_example (list or dict) – Example input to the model’s forward function. This is used to trace the model and export it to ONNX/TorchScript. If the model takes multiple inputs, then input_example should be a list of input examples. If the model takes named inputs, then input_example should be a dictionary of input examples.

  • verbose (bool) – If True, will print out a detailed description of the model’s export steps, along with the internal trace logs of the export process.

  • do_constant_folding (bool) – If True, will execute constant folding optimization on the model’s graph before exporting. This is ONNX specific.

  • onnx_opset_version (int) – The ONNX opset version to export the model to. If None, will use a reasonable default version.

  • check_trace (bool) – If True, will verify that the model's output matches the output of the traced model, up to some tolerance.

  • dynamic_axes (dict) – A dictionary mapping input and output names to their dynamic axes. This is used to specify the dynamic axes of the model’s inputs and outputs. If the model takes multiple inputs, then dynamic_axes should be a list of dictionaries. If the model takes named inputs, then dynamic_axes should be a dictionary of dictionaries. If None, will use the dynamic axes of the input_example derived from the NeuralType of the input and output of the model.

  • check_tolerance (float) – The tolerance to use when checking the model’s output against the traced model’s output. This is only used if check_trace is True. Note the high tolerance is used because the traced model is not guaranteed to be 100% accurate.

  • export_modules_as_functions (bool) – If True, will export the model’s submodules as functions. This is ONNX specific.

  • keep_initializers_as_inputs (bool) – If True, will keep the model’s initializers as inputs in the onnx graph. This is ONNX specific.

Returns

A tuple of two outputs. Item 0 in the output is a list of outputs, the outputs of each subnet exported. Item 1 in the output is a list of string descriptions. The description of each subnet exported can be used for logging purposes.
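
An illustrative export call; the output file name is a placeholder and the opset version is arbitrary. By default the input example and dynamic axes are derived from the model's neural types:

    model.eval()
    model.to('cpu')

    # format is inferred from the extension (.onnx, .pt, or .ts)
    model.export('mymodel.onnx', check_trace=True, onnx_opset_version=17)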

property disabled_deployment_input_names: List[str]

Implement this method to return a set of input names disabled for export

property disabled_deployment_output_names: List[str]

Implement this method to return a set of output names disabled for export

property supported_export_formats: List[nemo.utils.export_utils.ExportFormat]

Implement this method to return a set of export formats supported. Default is all types.

get_export_subnet(subnet=None)

Returns Exportable subnet model/module to export

list_export_subnets()

Returns the default set of subnet names exported for this model. First goes the one receiving the input (input_example).

get_export_config()

Returns export_config dictionary

set_export_config(args)

Sets/updates export_config dictionary
