Checkpoints

There are two main ways to load pretrained checkpoints in NeMo:

  • Using the restore_from() method to load a local checkpoint file (.nemo), or

  • Using the from_pretrained() method to download and set up a checkpoint from NGC.

See the following sections for instructions and examples for each.

Note that these instructions are for loading fully trained checkpoints for evaluation or fine-tuning. To resume an unfinished training experiment, use the experiment manager by setting the resume_if_exists flag to True.
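For reference, a minimal sketch of what that looks like in code is shown below; the trainer settings, experiment directory, and experiment name are illustrative placeholders, not values prescribed by NeMo:

    import pytorch_lightning as pl
    from omegaconf import OmegaConf
    from nemo.utils.exp_manager import exp_manager

    trainer = pl.Trainer(max_epochs=100)  # placeholder trainer settings
    exp_cfg = OmegaConf.create({
        "exp_dir": "experiments",          # placeholder output directory
        "name": "my_slu_experiment",       # placeholder experiment name
        "resume_if_exists": True,          # resume from the latest checkpoint, if any
    })
    exp_manager(trainer, exp_cfg)
    # ...then build the model and call trainer.fit(model) as usual.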

NeMo automatically saves checkpoints of a model you are training in the .nemo format. You can also manually save your model at any point using model.save_to("<checkpoint_path>.nemo").

If you have a local .nemo checkpoint that you’d like to load, simply use the restore_from() method:

    import nemo.collections.asr as nemo_asr

    model = nemo_asr.models.<MODEL_BASE_CLASS>.restore_from(restore_path="<path/to/checkpoint/file.nemo>")

Here, <MODEL_BASE_CLASS> is the ASR model class of the original checkpoint, or the general ASRModel class.
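As a concrete sketch, the checkpoint can be restored through the general ASRModel class without knowing its exact architecture; the file path here is a placeholder:

    import nemo.collections.asr as nemo_asr

    # Restore using the general base class; the config saved inside the
    # .nemo file determines the actual model type.
    model = nemo_asr.models.ASRModel.restore_from(restore_path="slu_model.nemo")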

Note that the audio files used for inference with these models should be 16 kHz mono-channel WAV files.
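If your recordings are in another format or sample rate, they need to be converted first. A minimal sketch using librosa and soundfile (third-party libraries assumed to be installed; any equivalent resampling tool works as well):

    import librosa
    import soundfile as sf

    # Load the audio, downmix to mono, and resample to 16 kHz.
    audio, sr = librosa.load("input_audio.mp3", sr=16000, mono=True)

    # Write the result as a 16 kHz mono WAV file.
    sf.write("input_audio_16k.wav", audio, samplerate=16000)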

Transcribe Audio to Semantics

After loading the model, you can perform inference on audio samples using its transcribe() method:

    slu_model = nemo_asr.models.SLUIntentSlotBPEModel.from_pretrained(model_name="<MODEL_NAME>")

    predictions = slu_model.transcribe([list of audio files], batch_size=<BATCH_SIZE>)
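For instance, once slu_model is loaded, a sketch of transcribing two local recordings (the file names and batch size are placeholders) and printing the predicted semantics:

    audio_files = ["sample1.wav", "sample2.wav"]  # placeholder 16 kHz mono WAV files
    predictions = slu_model.transcribe(audio_files, batch_size=2)

    # Each prediction is the semantic output for the corresponding audio file.
    for audio_file, prediction in zip(audio_files, predictions):
        print(f"{audio_file}: {prediction}")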

Below is a list of all the Speech Intent Classification and Slot Filling models that are available in NeMo.

Model Name                             Model Base Class       Model Card
-------------------------------------  ---------------------  -----------------------------------------------------------------------------------------
slu_conformer_transformer_large_slurp  SLUIntentSlotBPEModel  https://ngc.nvidia.com/catalog/models/nvidia:nemo:slu_conformer_transformer_large_slurp
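To download and load this checkpoint directly from NGC, pass its name from the table above to from_pretrained():

    import nemo.collections.asr as nemo_asr

    # Download the checkpoint from NGC and restore the model.
    slu_model = nemo_asr.models.SLUIntentSlotBPEModel.from_pretrained(
        model_name="slu_conformer_transformer_large_slurp"
    )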