How do I use Riva ASR APIs with out-of-the-box models?#

This tutorial walks you through the basics of Riva Speech Skills ASR Services, specifically covering how to use Riva ASR APIs with out-of-the-box models.

NVIDIA Riva Overview#

NVIDIA Riva is a GPU-accelerated SDK for building Speech AI applications that are customized for your use case and deliver real-time performance.
Riva offers a rich set of speech and natural language understanding services such as:

  • Automated speech recognition (ASR)

  • Text-to-Speech synthesis (TTS)

  • A collection of natural language processing (NLP) services, such as named entity recognition (NER), punctuation, and intent classification.

In this tutorial, we will interact with the automated speech recognition (ASR) APIs.

For more information about Riva, refer to the Riva developer documentation.

Transcription with Riva ASR APIs#

ASR takes an audio stream or audio buffer as input and returns one or more text transcripts, along with additional optional metadata. Speech recognition in Riva is a GPU-accelerated compute pipeline, with optimized performance and accuracy.
Riva provides state-of-the-art OOTB (out-of-the-box) models and pipelines for multiple languages, such as English, Spanish, German, Russian, and Mandarin, that can be easily deployed with the Riva Quick Start Scripts. Riva also supports easy customization of the ASR pipeline in various ways to meet your specific needs.
Refer to the Riva ASR documentation for more information.

Now, let’s use the Riva APIs to generate transcripts for some sample audio clips with an OOTB pipeline, starting with English.

Requirements and setup#

  1. Start the Riva Speech Skills server.
    Follow the instructions in the Riva Quick Start Guide to deploy OOTB ASR models on the Riva Speech Skills server before running this tutorial. By default, only the English models are deployed.

  2. Install the Riva Client library.
    Follow the steps in the Requirements and setup for the Riva Client to install the Riva Client library.
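If you are installing the client library into a Python environment directly, it is published on PyPI. The package name below (nvidia-riva-client) is correct at the time of writing, but check the linked setup instructions for your Riva version:

pip install nvidia-riva-client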

Import the Riva client libraries#

Let’s import some of the required libraries, including the Riva Client libraries.

import io                      # Read the audio file from local disk
import IPython.display as ipd  # Play back audio inside the notebook
import grpc                    # Riva clients talk to the server over gRPC

import riva.client             # Riva client library

Create a Riva client and connect to the Riva Speech API server#

The following URI assumes a local deployment of the Riva Speech API server on the default port. If the server is deployed on a different host or through a Helm chart on Kubernetes, use the appropriate URI.

auth = riva.client.Auth(uri='localhost:50051')

riva_asr = riva.client.ASRService(auth)
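If, instead, the server runs on a remote host or behind a Kubernetes service, only the connection arguments change. A minimal sketch follows; the hostname is a hypothetical placeholder, and use_ssl and ssl_cert are riva.client.Auth parameters for TLS-enabled deployments:

# Hypothetical remote deployment; replace the host with your server's address.
# auth = riva.client.Auth(uri='riva.example.com:50051')
# For a TLS-enabled endpoint, enable SSL and point to your CA certificate:
# auth = riva.client.Auth(uri='riva.example.com:443', use_ssl=True, ssl_cert='/path/to/ca.crt')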

Offline recognition for English#

You can use Riva ASR in either streaming mode or offline mode. In streaming mode, a continuous stream of audio is captured and recognized, producing a stream of transcribed text. In offline mode, an audio clip of a set length is transcribed to text.
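For contrast, here is a minimal streaming sketch. It uses the AudioChunkFileIterator and streaming_response_generator helpers from the riva.client package; the chunk size and sample rate below are example values, so adjust them for your audio.

# Minimal streaming sketch: feed the clip to the server in chunks and
# print each finalized transcript segment as it arrives.
streaming_config = riva.client.StreamingRecognitionConfig(
    config=riva.client.RecognitionConfig(
        encoding=riva.client.AudioEncoding.LINEAR_PCM,
        sample_rate_hertz=16000,   # example value; match your clip's sample rate
        language_code="en-US",
        max_alternatives=1,
    ),
    interim_results=False,         # set True to also receive partial hypotheses
)
with riva.client.AudioChunkFileIterator("./audio_samples/en-US_sample.wav", 4800) as audio_chunks:
    for response in riva_asr.streaming_response_generator(
        audio_chunks=audio_chunks, streaming_config=streaming_config
    ):
        for result in response.results:
            if result.is_final:
                print(result.alternatives[0].transcript)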
Now, let’s look at an example showing offline ASR API usage for English:

Make a gRPC request to the Riva Speech API server#

The Riva ASR API supports .wav files in pulse-code modulation (PCM) format, as well as .alaw, .mulaw, and .flac files, with a single channel.

Now, let’s make a gRPC request to the Riva Speech server for ASR with a sample .wav file in offline mode. Start by loading the audio.

# This example uses a .wav file with LINEAR_PCM encoding.
# read in an audio file from local disk
path = "./audio_samples/en-US_sample.wav"
with io.open(path, 'rb') as fh:
    content = fh.read()
ipd.Audio(path)

Next, create a RecognitionConfig object, setting the configuration parameters as required.

# Set up an offline/batch recognition request
config = riva.client.RecognitionConfig()
#config.encoding = riva.client.AudioEncoding.LINEAR_PCM  # Audio encoding can be detected from the wav header
#config.sample_rate_hertz = 0                            # Sample rate can be detected from the wav header and resampled if needed
config.language_code = "en-US"                    # Language code of the audio clip
config.max_alternatives = 1                       # How many top-N hypotheses to return
config.enable_automatic_punctuation = True        # Add punctuation when end of VAD detected
config.audio_channel_count = 1                    # Mono channel

Finally, submit the request to the server.

response = riva_asr.offline_recognize(content, config)
asr_best_transcript = response.results[0].alternatives[0].transcript
print("ASR Transcript:", asr_best_transcript)

print("\n\nFull Response Message:")
print(response)

Understanding ASR API parameters#

Riva ASR supports a number of options while making a transcription request to the gRPC endpoint, as shown in the previous section. Let’s learn more about these parameters:

  • encoding - Type of audio encoding of the input audio file. Supported values are LINEAR_PCM, FLAC, MULAW, and ALAW. The encoding can be detected automatically from the audio file, making this parameter optional.

  • sample_rate_hertz - Sampling rate of the input audio in Hz. The sample rate can likewise be detected automatically from the audio .wav file and resampled if needed, making this parameter optional.

  • language_code - Language of the input audio. "en-US" represents English (US). Other options include es-US, de-DE, ru-RU, and zh-CN. We will explore ASR for non-English languages in the next section.

  • max_alternatives - The number of top alternative transcriptions (hypotheses) to return.

  • enable_automatic_punctuation - Adds punctuation to the transcript at the end of each voice activity detection (VAD) segment.

  • audio_channel_count - Number of audio channels. Typical microphones record a single (mono) channel.
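Because the earlier request set max_alternatives = 1, the response above contains a single hypothesis per result. If you request more, each result carries a ranked list of hypotheses that can be inspected like this (a minimal sketch reusing the response object from the previous section):

# Iterate over the ranked hypotheses for each result segment.
# Each alternative carries a transcript and a confidence score.
for result in response.results:
    for i, alternative in enumerate(result.alternatives):
        print(f"Hypothesis {i}: {alternative.transcript} (confidence: {alternative.confidence:.3f})")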

Offline recognition for non-English languages - Spanish example#

In the previous section, we went through the Riva API usage and understood the different parameters of the ASR API. Now, let’s look at using the ASR APIs for non-English languages, like Spanish, in offline mode.

Note that we can similarly run Riva ASR for German, Russian, and Mandarin by using their corresponding language codes. We will elaborate on this at the end of this section.

Requirements and setup for Spanish ASR#

The requirements and setup steps for non-English ASR (in this case, Spanish ASR) are almost the same as for English ASR. The only difference is that, before running inference on Spanish audio, we first need to deploy the Spanish ASR pipeline on the Riva Speech Skills server.

Note: The Riva Speech Skills server Quick Start Guide, which we followed in the Requirements and setup section above for English ASR, deploys only the English models by default.

  1. Start the Riva Speech Skills server, with the Spanish ASR pipeline.
    1.1. Navigate to the Quick Start Guide folder. You downloaded this folder in the Requirements and setup section above.

    1.2. Run bash riva_stop.sh to shut down the running Riva Speech Skills server. If the Riva Speech Skills server is not currently running, you can skip this step.

    1.3. [OPTIONAL] Run bash riva_clean.sh to clean up a previous local Riva installation. This stops and removes all Riva-related containers and deletes the Docker volume or directory used to store model files. The Docker images can also be removed; you’ll be asked for confirmation before they are deleted.

    1.4. Update the config.sh file: change the language_code=("en-US") line to also include the Spanish model (language code "es-US"), following the instructions above that line in the config.sh script, as shown in the example after this list.

    1.5. Rerun bash riva_init.sh to download and initialize the Spanish models and pipeline. If you see any errors during this step, start again from step 1.3.

    1.6. Rerun bash riva_start.sh to restart the Riva Speech Skills server.
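For reference, step 1.4 amounts to a one-line change in config.sh. The exact syntax is documented in the comments above that line in the script; it should look roughly like this:

# Before: deploy only the English pipeline
language_code=("en-US")

# After: deploy both the English and Spanish pipelines
language_code=("en-US" "es-US")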

Make a gRPC request to the Riva Speech API server#

Let’s make a gRPC request to the Riva Speech server for ASR with a sample Spanish .wav file in offline mode.

Like before, start by loading the audio.

# This example uses a .wav file with LINEAR_PCM encoding.
# read in an audio file from local disk
path = "audio_samples/es-US_sample.wav" #Link to the Spanish sample audio file
with io.open(path, 'rb') as fh:
    content = fh.read()
ipd.Audio(path)

As with English, create a RecognitionConfig object, setting the configuration parameters as required. Notice that we have updated the language_code of the request configuration to the Spanish language code ("es-US").

# Set up an offline/batch recognition request
config = riva.client.RecognitionConfig()
#config.encoding = riva.client.AudioEncoding.LINEAR_PCM  # Audio encoding can be detected from the wav header
#config.sample_rate_hertz = 0                            # Sample rate can be detected from the wav header and resampled if needed
config.language_code = "es-US"                    # Language code of the audio clip. Set to Spanish
config.max_alternatives = 1                       # How many top-N hypotheses to return
config.enable_automatic_punctuation = True        # Add punctuation when end of VAD detected
config.audio_channel_count = 1                    # Mono channel

Finally, submit the request to the server.

response = riva_asr.offline_recognize(content, config)
asr_best_transcript = response.results[0].alternatives[0].transcript
print("ASR Transcript:", asr_best_transcript)

print("\n\nFull Response Message:")
print(response)

We can similarly run Riva ASR for German, Russian, and Mandarin by setting the corresponding language codes (de-DE, ru-RU, and zh-CN) in the request configuration. Ensure that these pipelines are deployed on the Riva Speech Skills server, following the same steps as in the Requirements and setup for Spanish ASR section.
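For example, transcribing a German clip requires only a different language code in the same request flow (the audio path below is a hypothetical placeholder; use your own clip):

# Reuse the offline request flow with a different language code.
config.language_code = "de-DE"   # or "ru-RU" / "zh-CN" for Russian / Mandarin
with io.open("audio_samples/de-DE_sample.wav", 'rb') as fh:  # hypothetical sample clip
    content = fh.read()
response = riva_asr.offline_recognize(content, config)
print("ASR Transcript:", response.results[0].alternatives[0].transcript)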

Go deeper into Riva capabilities#

Now that you have a basic introduction to the Riva ASR APIs, you can try:

Additional Riva tutorials#

Check out more Riva ASR (and TTS) tutorials here to learn how to use some of the advanced features of Riva ASR, including customizing ASR for your specific needs.

Sample applications#

Riva comes with various sample applications. They demonstrate how to use the APIs to build applications such as a chatbot, a domain-specific speech recognition or keyword (entity) recognition system, and how Riva scales out to handle massive numbers of simultaneous requests; SpeechSquad is one such example.
Refer to the Sample Application section in the Riva developer documentation for more information.

Riva Text-To-Speech (TTS)#

Riva’s TTS offering comes with two OOTB voices that can be used in streaming or batch inference modes, and they can be easily deployed using the Riva Quick Start scripts. Follow this link to learn about Riva’s TTS capabilities, and explore how to use the Riva TTS APIs with the OOTB voices in this Riva TTS tutorial.

Additional resources#

For more information about each of the APIs and their functionalities, refer to the documentation.