Translation#

Local Deployment using Quick Start Scripts#

Riva includes Quick Start scripts to help you get started with Riva Speech AI Skills. These scripts are meant for deploying the services locally, testing them, and running the example applications.

  1. Download the scripts. Go to the Riva Quick Start for Data center or Embedded, depending on the platform that you are using. Select the File Browser tab to download the scripts, or use the NGC CLI tool to download them from the command line.

    Data center

    ngc registry resource download-version nvidia/riva/riva_quickstart:2.19.0
    

    Embedded

    ngc registry resource download-version nvidia/riva/riva_quickstart_arm64:2.19.0
    
  2. Configure the Riva deployment: Modify the config.sh file within the quickstart directory with your preferred configuration. The key parameters are listed below, followed by an illustrative excerpt.

    • Services to enable: service_enabled_asr, service_enabled_tts, service_enabled_nmt

    • Models to retrieve from NGC: asr_acoustic_model, asr_language_code, tts_language_code

    • Model storage location: riva_model_loc (default: the riva-model-repo Docker volume; can be set to a local directory on the host machine if needed)

    • GPU selection for multi-GPU systems: gpus_to_use (refer to Local (Docker) for more details)

    • SSL/TLS certificate and key file locations: ssl_server_cert, ssl_server_key, ssl_root_cert, ssl_use_mutual_auth
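
    For example, a config.sh excerpt for a translation-focused deployment might look like the following. This is an illustrative sketch: the language codes, model location, and GPU selection are placeholder values to adapt to your environment, and any parameter not shown keeps its default.

    # Illustrative config.sh excerpt -- adjust the values for your deployment
    service_enabled_asr=true            # required for speech-to-text and speech-to-speech translation
    service_enabled_tts=true            # required for speech-to-speech translation
    service_enabled_nmt=true            # enables the translation (NMT) service
    asr_language_code="en-US"           # language of the source speech
    tts_language_code="de-DE"           # language of the synthesized target speech
    riva_model_loc="riva-model-repo"    # default Docker volume; may be set to a host directory
    gpus_to_use="device=0"              # GPU selection on multi-GPU systems (format is an assumption; see Local (Docker))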

  3. Initialize and start Riva. The initialization step downloads and prepares Docker images and models. The start script launches the server.

    Note

    This process can take up to an hour on an average internet connection. On data center platforms, each model is individually optimized for the target GPU after it is downloaded. On embedded platforms, models preoptimized for the GPU on the NVIDIA Jetson are downloaded.

    Data center

    cd riva_quickstart_v2.19.0
    

    Note

    If you are using a vGPU environment, set the parameter for enabling unified memory, pciPassthru<vgpu-id>.cfg.enable_uvm, to 1, where <vgpu-id> is replaced by the ID of the vGPU assigned to the VM. For example, to enable unified memory for two vGPUs that are assigned to a VM, set pciPassthru0.cfg.enable_uvm and pciPassthru1.cfg.enable_uvm to 1. For more information, refer to the NVIDIA Virtual GPU Software User Guide.

    Embedded

    cd riva_quickstart_arm64_v2.19.0
    

    Note

    If you are using the Jetson AGX Xavier or Jetson Xavier NX platform, set the $riva_tegra_platform variable to xavier in the config.sh file within the quickstart directory.

    To use a USB device for audio input/output, connect it to the Jetson platform so it gets automatically mounted into the container.

    Initialize and start Riva

    bash riva_init.sh
    bash riva_start.sh
    
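    To confirm that the server has finished loading models before moving on, you can follow the server container logs. The container name riva-speech below is an assumption based on the Quick Start defaults; adjust it if your deployment uses a different name.

    docker logs -f riva-speech
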
  4. Try walking through the different tutorials on GitHub. If running the Riva Quick Start scripts on a cloud service provider (such as AWS or GCP), ensure that your compute instance has an externally visible IP address. To run the tutorials, connect a browser window to the correct port (8888 by default) of that external IP address.
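
    For example, with the default port, you would browse to http://<external-ip-address>:8888. If you prefer not to expose the port publicly, a common alternative is to forward it over SSH from your workstation; the user and address below are placeholders.

    ssh -L 8888:localhost:8888 <user>@<external-ip-address>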

  5. Shut down the server when finished. After you have completed these steps and experimented with inferencing, run the riva_stop.sh script to stop the server.
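
    For example, from the quickstart directory:

    bash riva_stop.sh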

For further details on how to customize a local deployment, refer to Local (Docker).

If using SSL/TLS, ensure that you include the options described in this section to enable a secure deployment of the Riva server.

Note

  • To use the Riva translation services, refer to the Configure translation services instructions in the config.sh file within the quickstart directory.

  • By default, the Riva server does not deploy NMT models. You need to set service_enabled_nmt to true to download and deploy the required models.

  • If the NMT service is enabled, Riva downloads and deploys the nmt_megatron_1b_any_any model by default. You can customize this by uncommenting the desired models in models_nmt (see the illustrative excerpt below).
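
    A corresponding config.sh excerpt might look like the following. The syntax and entry shown are a sketch; the actual commented-out entries in config.sh are release-specific model identifiers, so uncomment those rather than typing new ones.

    service_enabled_nmt=true
    models_nmt=(
        "nmt_megatron_1b_any_any"   # illustrative entry; use the exact identifiers listed in config.sh
    )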

For data center deployments, run the riva_start_client.sh script to start the client container with sample clients for each service. The script is located in the Quick Start folder downloaded in step 1 of the Local Deployment using Quick Start Scripts section above.

bash riva_start_client.sh

For embedded deployments, this step is not needed because the sample clients are already present in the Riva server container launched in the previous step.

Translate Text-to-Text (T2T)#

From within the Riva client container (data center) or the Riva server container (embedded), run the following commands to perform a text-to-text translation from English to German.

Run the following command to list the available models and choose the model to use. Ensure that the source and target language codes are supported by the selected model.

riva_nmt_t2t_client --list_models

Then, run the following command to perform a text-to-text translation from English to German.

riva_nmt_t2t_client --source_language_code="en-US" --target_language_code="de-DE" --text="This will become German words."
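
By default, the sample clients connect to a Riva server at localhost:50051. If your server runs on another host or port, the Riva sample clients expose a --riva_uri option for this; assuming riva_nmt_t2t_client supports the same option (check its --help output), a call against a remote server would look like the following, with the address as a placeholder.

riva_nmt_t2t_client --riva_uri=<server-address>:50051 --source_language_code="en-US" --target_language_code="de-DE" --text="This will become German words."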

Translate Speech-to-Text (S2T)#

From within the Riva client container (data center) or the Riva server container (embedded), run the following commands to perform a speech-to-text translation from English audio to German text.

  1. Run the following command to list the available models and choose the model to use.

    riva_nmt_streaming_s2t_client --list_models
    

    Note

    If your required source language is not listed, verify that the corresponding ASR model is enabled in config.sh. You may need to update the asr_language_code parameter in config.sh and re-run riva_init.sh to download and initialize the correct model.

    If you have set the following parameters in config.sh,

    service_enabled_nmt=true
    service_enabled_asr=true
    asr_language_code="en-US"
    

    the above command should print output similar to the following:

    $ riva_nmt_streaming_s2t_client --list_models
    languages {
      key: "s2t_model"
      value {
        src_lang: "en"
        tgt_lang: "cs"
        tgt_lang: "da"
        tgt_lang: "de"
        ...
      }
    }
    
  2. Run the following command to perform a speech-to-text translation from English audio to German text.

    riva_nmt_streaming_s2t_client --audio_file=/opt/riva/wav/en-US_sample.wav --source_language_code="en-US" --target_language_code="de-DE"
    

Translate Speech-to-Speech (S2S)#

From within the Riva client container (data center) or the Riva server container (embedded), run the following commands to perform a speech-to-speech translation from English audio to German audio.

  1. Run the following command to list the available models and choose the model to use. Ensure that the source and target language codes are supported by the selected model.

    riva_nmt_streaming_s2s_client --list_models
    

    Note

    • If your required source language is not listed, verify that the corresponding ASR model is enabled in config.sh. You may need to update the asr_language_code parameter in config.sh and re-run riva_init.sh to download and initialize the correct model.

    • Similarly, if your required target language is not listed, verify that the corresponding TTS model is enabled in config.sh. You may need to update the tts_language_code parameter in config.sh and re-run riva_init.sh to download and initialize the correct model.

    If you have set the following parameters in config.sh,

    service_enabled_nmt=true
    service_enabled_asr=true
    service_enabled_tts=true
    asr_language_code="en-US"
    tts_language_code="de-DE"
    

    the above command should display output similar to the following:

    $ riva_nmt_streaming_s2s_client --list_models
    languages {
      key: "s2s_model"
      value {
        src_lang: "en"
        tgt_lang: "de"
      }
    }
    
  2. Run the following command to perform a speech-to-speech translation from English audio to German audio.

    riva_nmt_streaming_s2s_client --audio_file=/opt/riva/wav/en-US_sample.wav --source_language_code="en-US" --target_language_code="de-DE" --tts-voice-name=German-DE-Male-1.0 --tts_audio_file=output.wav
    

Next Steps#

In this Quick Start Guide, you learned the basics of deploying the Riva server with pretrained models and using the API. Specifically, you:

  • Installed the Riva server and pretrained models

  • Walked through some tutorials to use the Riva API

  • Executed Riva command-line clients to translate text or speech (NMT)

For more examples of how you can use Riva Speech AI Skills in real applications, follow the tutorials on GitHub. Additionally, you can build your own speech AI applications with Riva using the available interfaces, such as the gRPC API, Python libraries, and command-line clients.

To learn more about Riva Speech AI Skills, visit the NVIDIA Riva Developer page.