Deploying Your Custom Model into Riva

This section provides a brief overview of the two main tools used in the deployment process:

  1. The build phase using riva-build.

  2. The deploy phase using riva-deploy.

Build process

For your custom-trained model, refer to the riva-build phase (ASR, NLP, TTS) for your model type. At the end of this phase, you'll have a Riva Model Intermediate Representation (RMIR) archive for your custom model.
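As a sketch, an ASR build might look like the following, run inside the Riva ServiceMaker container. The pipeline name speech_recognition is the ASR pipeline type; the file names and the encryption key tlt_encode are placeholders for illustration.

    riva-build speech_recognition \
        /servicemaker-dev/custom-asr.rmir:tlt_encode \
        /servicemaker-dev/custom-asr.riva:tlt_encode

The first positional argument after the pipeline type is the RMIR archive to write; the second is the trained .riva file produced during the training phase.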

Deploy process

At this point, you have your RMIR archive. You now have two options for deploying it.

Option 1: Use the Quick Start scripts (riva_init.sh and riva_start.sh) with the appropriate parameters set in config.sh (see the sketch after these options).

Option 2: Manually run riva-deploy and then start riva-server with the target model repo.
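As a sketch of Option 1, the flow below uses the riva_model_loc variable from config.sh; the value shown and the directory layout are illustrative and depend on your Riva version.

    # In config.sh, point Riva at the volume (or path) whose rmir/
    # subdirectory contains your custom RMIR archive.
    riva_model_loc="riva-model-repo"

    # Deploy every RMIR found under $riva_model_loc/rmir
    bash riva_init.sh

    # Start the server on the resulting model repository
    bash riva_start.sh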

Option 2: Using riva-deploy and the Riva Speech Container (Advanced)

  1. Execute riva-deploy. Refer to the Deploy section in the Overview for a brief overview of riva-deploy.
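
    As a minimal sketch, assuming your RMIR archive is at /data/rmir/custom-model.rmir and was built with the placeholder encryption key tlt_encode:

    riva-deploy /data/rmir/custom-model.rmir:tlt_encode /data/models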

    The above command creates the Triton Inference Server model repository at /data/models. Writing to any location other than /data/models requires additional manual changes, because some of the Triton Inference Server model configurations embed artifact directories for model-specific artifacts such as class labels, and those embedded paths must be updated by hand. Therefore, stick with /data/models unless you are familiar with Triton Inference Server model repository configuration.

  2. Manually start the riva-server Docker container using docker run.

    After the Triton Inference Server model repository for your custom model is generated, start the Riva server on that target repo. The following command assumes you generated the model repo at /data/models.

    docker run -d --gpus 1 --init --shm-size=1G --ulimit memlock=-1 --ulimit stack=67108864 \
            -v /data:/data \
            -p 50051:50051 \
            -e "CUDA_VISIBLE_DEVICES=0" \
            --name riva-speech \
            riva-api \
            start-riva --riva-uri=0.0.0.0:50051 --nlp_service=true --asr_service=true --tts_service=true
    

    This command launches the Riva Speech Service API server, much like the Quick Start script riva_start.sh does.

    Example output:

    Starting Riva Speech Services
    > Waiting for Triton server to load all models...retrying in 10 seconds
    > Waiting for Triton server to load all models...retrying in 10 seconds
    > Waiting for Triton server to load all models...retrying in 10 seconds
    > Triton server is ready...
    
  3. Verify that the servers have started correctly by checking that the output of docker logs riva-speech shows:

    I0428 03:14:50.440943 1 riva_server.cc:66] TTS Server connected to Triton Inference Server at 0.0.0.0:8001
    I0428 03:14:50.440943 1 riva_server.cc:66] NLP Server connected to Triton Inference Server at 0.0.0.0:8001
    I0428 03:14:50.440951 1 riva_server.cc:68] ASR Server connected to Triton Inference Server at 0.0.0.0:8001
    I0428 03:14:50.440955 1 riva_server.cc:71] Riva Conversational AI Server listening on 0.0.0.0:50051
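
    As an optional smoke test (an assumption beyond the steps above), you can check that the gRPC endpoint is reachable. The sketch below assumes grpcurl is installed and that the server exposes gRPC reflection; if reflection is not enabled, use one of the Riva client libraries instead.

    # List the gRPC services exposed on port 50051. Both the grpcurl
    # tool and server-side reflection are assumptions here.
    grpcurl -plaintext localhost:50051 list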