How to Deploy Custom TTS Models (FastPitch and HiFi-GAN) Trained with TAO Toolkit on Riva#

This tutorial walks you through the steps to deploy custom TTS models (FastPitch and HiFi-GAN) trained with TAO Toolkit on Riva for real-time inference.

NVIDIA Riva Overview#

NVIDIA Riva is a GPU-accelerated SDK for building Speech AI applications that are customized for your use case and deliver real-time performance.
Riva offers a rich set of speech and natural language understanding services such as:

  • Automated speech recognition (ASR)

  • Text-to-Speech synthesis (TTS)

  • A collection of natural language processing (NLP) services, such as named entity recognition (NER), punctuation, and intent classification.

In this tutorial, we will deploy Riva TTS models (FastPitch and HiFi-GAN) trained with TAO Toolkit on Riva. To understand the basics of Riva TTS APIs, refer to How do I use Riva TTS APIs with out-of-the-box models?.

For more information about Riva, refer to the Riva developer documentation.

Train, Adapt and Optimize (TAO) Toolkit#

Train Adapt Optimize (TAO) Toolkit provides the capability to export your model in a format that can be deployed using NVIDIA Riva, a highly performant application framework for multi-modal conversational AI services using GPUs.

This tutorial explores taking two .riva models, the results of the tao spectro_gen and tao vocoder commands (from the finetuning notebook), and leveraging the Riva ServiceMaker framework to aggregate all the necessary artifacts for Riva deployment to a target environment. Once the models are deployed in Riva, you can issue inference requests to the server. We will demonstrate how quick and straightforward this whole process is.

In this notebook, you will learn how to:

  • Use Riva ServiceMaker to take TAO-exported .riva files and convert them to .rmir

  • Deploy the model(s) locally on the Riva server

  • Send inference requests from a demo client using Riva API bindings


Speech generation with Riva TTS APIs#

The Riva TTS service is based on a two-stage pipeline: Riva first generates a mel spectrogram using the first model (FastPitch), then generates speech using the second model (HiFi-GAN). This pipeline forms a text-to-speech system that enables you to synthesize natural-sounding speech from raw transcripts without any additional information such as patterns or rhythms of speech.

Refer to the Riva TTS documentation for more information.

Requirements and setup#

Before we get started, ensure you have:

  • Installed docker-ce to instantiate the Riva Docker containers

  • Access to NVIDIA NGC and the ability to download the Riva Quick Start resources

  • A .riva model file that you want to deploy. You can obtain this from tao <task> export (with export_format=RIVA). For more information on training and exporting a .riva model, refer to the Speech Synthesis using TAO Toolkit tutorial (an illustrative export sketch also follows below).

  • Installed numpy by running the cell below

! pip install numpy
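
If you still need to produce the .riva files from your finetuned checkpoints, the cell below is a minimal, hedged sketch of the TAO export step. The tao spectro_gen and tao vocoder task names and export_format=RIVA come from this tutorial; the spec files, checkpoint paths, and remaining flags are assumptions, so consult the Speech Synthesis using TAO Toolkit tutorial for the exact commands for your environment.

# Illustrative only: export the finetuned FastPitch (spectro_gen) and HiFi-GAN (vocoder)
# checkpoints to .riva files. Spec paths, checkpoint names, and flags other than
# export_format are placeholders/assumptions.
! tao spectro_gen export -e <spectro_gen_export_spec>.yaml \
      -m <path to finetuned FastPitch .tlt checkpoint> \
      -r <results dir> -k <encryption key> \
      export_format=RIVA export_to=spectro_gen.riva
! tao vocoder export -e <vocoder_export_spec>.yaml \
      -m <path to finetuned HiFi-GAN .tlt checkpoint> \
      -r <results dir> -k <encryption key> \
      export_format=RIVA export_to=vocoder.riva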

Riva ServiceMaker#

Riva ServiceMaker is a set of tools that aggregates all the necessary artifacts (models, files, configurations, and user settings) for Riva deployment to a target environment. It has two main components:

  • riva-build

  • riva-deploy

Riva-build#

This step helps build a Riva-ready version of the model. Its only output is an intermediate format (called RMIR) of an end-to-end pipeline for the supported services within Riva. Here, we consider our two TTS models: FastPitch (the spectrogram generator) and HiFi-GAN (the vocoder).

riva-build is responsible for the combination of one or more exported models (.riva files) into a single file containing an intermediate format called Riva Model Intermediate Representation (.rmir). This file contains a deployment-agnostic specification of the whole end-to-end pipeline along with all the assets required for the final deployment and inference. For more information, refer to the documentation.

# Important: Update these paths to point to the Riva ServiceMaker docker and input models.

# ServiceMaker Docker
RIVA_SM_CONTAINER = "<add container name>"

# Directory where the .riva models are stored $MODEL_LOC/*.riva
# Both the FastPitch_22k_LJS.riva and HifiGAN_22k_LJS.riva models should be present
MODEL_LOC = "<add path to model location>"

# Name of the .riva file
SPECTRO_GEN_MODEL_NAME = "<add model name>"
VOCODER_MODEL_NAME = "<add model name>"

# Key that the model was encrypted with when exporting it with TAO
KEY = "<add encryption key used for trained model>"
# Get the ServiceMaker docker
! docker pull $RIVA_SM_CONTAINER
# Syntax: riva-build <task-name> output-dir-for-rmir/model.rmir:key dir-for-riva/model.riva:key
! docker run --rm --gpus 0 -v $MODEL_LOC:/data $RIVA_SM_CONTAINER -- \
            riva-build speech_synthesis /data/tts.rmir:$KEY /data/$SPECTRO_GEN_MODEL_NAME:$KEY /data/$VOCODER_MODEL_NAME:$KEY
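
As an optional sanity check (not part of the ServiceMaker command itself), you can confirm that the RMIR file was written. As noted in the config.sh comments below, riva_init.sh picks up custom RMIRs from $riva_model_loc/rmir, so if you point riva_model_loc at MODEL_LOC you may need to copy the file into an rmir subdirectory; treat the exact layout as an assumption that depends on your configuration.

# Optional: verify that riva-build produced the RMIR file
! ls -lh $MODEL_LOC/*.rmir

# Optional: copy the custom RMIR to the location riva_init.sh scans ($riva_model_loc/rmir)
! mkdir -p $MODEL_LOC/rmir && cp $MODEL_LOC/tts.rmir $MODEL_LOC/rmir/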

Riva-deploy#

The deployment tool takes as input one or more Riva Model Intermediate Representation (RMIR) files and a target model repository directory. It creates an ensemble configuration specifying the pipeline for the execution and finally writes all those assets to the output model repository directory.

For the purpose of this tutorial, we will only run the riva-build component directly; the riva_init.sh Quick Start script in the next section takes care of optimizing the RMIR into the deployed model repository.
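
For reference, a riva-deploy invocation follows the same pattern as the riva-build cell above. The cell below is a hedged sketch, assuming the tts.rmir produced earlier and a models/ output directory under the same mount; it is not required if you use the Quick Start scripts in the next section.

# Illustrative only: deploy the RMIR into a model repository at /data/models
# Syntax: riva-deploy -f dir-for-rmir/model.rmir:key output-dir-for-repository
! docker run --rm --gpus 0 -v $MODEL_LOC:/data $RIVA_SM_CONTAINER -- \
            riva-deploy -f /data/tts.rmir:$KEY /data/models/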


Start the Riva Server#

Once the model repository is generated, we are ready to start the Riva server. From this step onwards, you need to download the Riva Quick Start resource from NGC. Follow the instructions in the Riva Quick Start Guide to download the Quick Start resource.
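
If you have the NGC CLI configured, the cell below is a minimal sketch of the download step; the version tag is a placeholder and should match the Riva release you are deploying against.

# Illustrative only: download the Riva Quick Start scripts with the NGC CLI
# (replace <version> with the release matching your Riva containers)
! ngc registry resource download-version "nvidia/riva/riva_quickstart:<version>"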

# Set the Riva Quick Start directory
RIVA_DIR = "<Path to the uncompressed folder downloaded from quickstart(include the folder name)>"

Next, we modify the config.sh file to enable the relevant Riva services (TTS for the FastPitch/HiFi-GAN models), provide the encryption key, and provide the path to the model repository (riva_model_loc) generated in the previous step, among other configurations.

For example, if the model repository above was generated at $MODEL_LOC/models, then you can specify riva_model_loc as the same directory as MODEL_LOC.

Pretrained versions of the models specified in models_asr/nlp/tts are fetched from NGC. Since we are using our custom models, we can comment out the pretrained model entries in models_tts (and in any other sections that are not relevant to your use case).

config.sh snippet#

# Enable or Disable Riva Services 
service_enabled_asr=false                                                      ## MAKE CHANGES HERE
service_enabled_nlp=false                                                      ## MAKE CHANGES HERE
service_enabled_tts=true                                                     ## MAKE CHANGES HERE

# Specify one or more GPUs to use
# specifying more than one GPU is currently an experimental feature, and may result in undefined behaviours.
gpus_to_use="device=0"

# Specify the encryption key to use to deploy models
MODEL_DEPLOY_KEY="tlt_encode"                                                  ## MAKE CHANGES HERE

# Locations to use for storing models artifacts
#
# If an absolute path is specified, the data will be written to that location
# Otherwise, a docker volume will be used (default).
#
# riva_init.sh will create a `rmir` and `models` directory in the volume or
# path specified. 
#
# RMIR ($riva_model_loc/rmir)
# Riva uses an intermediate representation (RMIR) for models
# that are ready to deploy but not yet fully optimized for deployment. Pretrained
# versions can be obtained from NGC (by specifying NGC models below) and will be
# downloaded to $riva_model_loc/rmir by `riva_init.sh`
# 
# Custom models produced by NeMo or TAO and prepared using riva-build
# may also be copied manually to this location ($riva_model_loc/rmir).
#
# Models ($riva_model_loc/models)
# During the riva_init process, the RMIR files in $riva_model_loc/rmir
# are inspected and optimized for deployment. The optimized versions are
# stored in $riva_model_loc/models. The riva server exclusively uses these
# optimized versions.
riva_model_loc="<add path>"                              ## MAKE CHANGES HERE (Replace with MODEL_LOC)

if [[ $riva_target_arch == "arm64" ]]; then
    riva_model_loc="`pwd`/model_repository"
fi

# The default RMIRs are downloaded from NGC by default in the above $riva_rmir_loc directory
# If you'd like to skip the download from NGC and use the existing RMIRs in the $riva_rmir_loc
# then set the below $use_existing_rmirs flag to true. You can also deploy your set of custom
# RMIRs by keeping them in the riva_rmir_loc dir and use this quickstart script with the
# below flag to deploy them all together.
use_existing_rmirs=false                                ## MAKE CHANGES HERE (Replace with true)
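
For reference, commenting out the pretrained TTS entry in config.sh might look like the fragment below. The exact array contents, variable names, and model entry names vary between Riva releases, so treat this as illustrative only.

# Illustrative config.sh fragment: keep the models_tts array defined, but comment
# out the pretrained RMIR entries so only your custom RMIR is deployed
models_tts=(
#    "${riva_ngc_org}/${riva_ngc_team}/rmir_tts_fastpitch_hifigan_en_us:${riva_ngc_model_version}"
)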

# Ensure you have permission to execute these scripts
! cd $RIVA_DIR && chmod +x ./riva_init.sh && chmod +x ./riva_start.sh
# Run Riva Init. This will fetch the containers/models
# YOU CAN SKIP THIS STEP IF YOU DID RIVA DEPLOY
! cd $RIVA_DIR && ./riva_init.sh config.sh
# Run Riva Start. This will deploy your model(s).
! cd $RIVA_DIR && ./riva_start.sh config.sh
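
Once riva_start.sh reports that the server is ready, you can optionally confirm that the Riva container is running and inspect its recent logs. The container name riva-speech is the Quick Start default and may differ in your setup.

# Optional: confirm the Riva server container is up and check its logs
# (the container name "riva-speech" is the Quick Start default; adjust if needed)
! docker ps --filter "name=riva-speech"
! docker logs --tail 20 riva-speech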

Run Inference#

Once the Riva server is up and running with your models, you can send inference requests to query the server.

To send gRPC requests, you can install the Riva Python API bindings for the client by running the cell below. This is available as a pip package.

# Install the Client API Bindings
! pip install nvidia-riva-client

Connect to the Riva Server and Run Inference#

Now we can actually query the Riva server. The following cell queries the Riva server (using gRPC) to yield a result.

import riva.client
import IPython.display as ipd
import numpy as np

server = "localhost:50051"

auth = riva.client.Auth(uri=server)
client = riva.client.SpeechSynthesisService(auth)

resp = client.synthesize(
    text="Is it recognize speech or wreck a nice beach?",
    language_code="en-US",
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hz=22050,
    # For a multispeaker model, uncomment the line below:
    # voice_name = "new_speaker.new_voice",
)
audio_samples = np.frombuffer(resp.audio, dtype=np.float32)
ipd.Audio(audio_samples, rate=22050)
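
To keep the synthesized audio, you can also write it to a WAV file. The sketch below uses the soundfile package (an extra dependency not installed above) and reuses the audio_samples array and 22050 Hz sample rate from the previous cell.

# Optional: save the synthesized audio to disk (requires `pip install soundfile`)
import soundfile as sf

sf.write("tts_output.wav", audio_samples, samplerate=22050)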

You can stop all Docker containers before shutting down the Jupyter kernel. Caution: The following command will stop all running containers.

! docker stop $(docker ps -a -q)
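
If you prefer to stop only the Riva services rather than every running container, the Quick Start also provides a riva_stop.sh script. A minimal sketch, assuming the same RIVA_DIR and config.sh as above:

# Alternative: stop only the Riva containers started by the Quick Start scripts
! cd $RIVA_DIR && ./riva_stop.sh config.sh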