How do I use Riva ASR APIs with out-of-the-box models?

This tutorial walks you through the basics of Riva Speech Skills ASR Services, specifically covering how to use Riva ASR APIs with out-of-the-box models.

NVIDIA Riva Overview

NVIDIA Riva is a GPU-accelerated SDK for building Speech AI applications that are customized for your use case and deliver real-time performance.
Riva offers a rich set of speech and natural language understanding services such as:

  • Automated speech recognition (ASR)

  • Text-to-Speech synthesis (TTS)

  • A collection of natural language processing (NLP) services, such as named entity recognition (NER), punctuation, and intent classification.

In this tutorial, we will interact with the automated speech recognition (ASR) APIs.

For more information about Riva, refer to the Riva developer documentation.

Transcription with Riva ASR APIs

ASR takes an audio stream or audio buffer as input and returns one or more text transcripts, along with additional optional metadata. Speech recognition in Riva is a GPU-accelerated compute pipeline, with optimized performance and accuracy.
Riva provides state-of-the-art out-of-the-box (OOTB) models and pipelines for multiple languages, such as English, Spanish, German, Russian, and Mandarin, that can be easily deployed with the Riva Quick Start scripts. Riva also supports easy customization of the ASR pipeline in various ways to meet your specific needs.
Refer to the Riva ASR documentation for more information.

Now, let’s generate transcripts for some sample audio clips using the Riva APIs with an OOTB pipeline, starting with English.

Requirements and setup

  1. Start the Riva Speech Skills server.
    Follow the instructions in the Riva Quick Start Guide to deploy OOTB ASR models on the Riva Speech Skills server before running this tutorial. By default, only the English models are deployed.

  2. Install the Riva Client library.
    Follow the steps in the Requirements and setup for the Riva Client to install the Riva Client library.

Import the Riva client libraries

Let’s import some of the required libraries, including the Riva Client libraries.

import io
import IPython.display as ipd
import grpc

import riva_api.riva_asr_pb2 as rasr
import riva_api.riva_asr_pb2_grpc as rasr_srv
import riva_api.riva_audio_pb2 as ra

Create a Riva client and connect to the Riva Speech API server

The following URI assumes a local deployment of the Riva Speech API server on the default port. If the server is deployed on a different host or through a Helm chart on Kubernetes, use the appropriate URI.

channel = grpc.insecure_channel('localhost:50051')

riva_asr = rasr_srv.RivaSpeechRecognitionStub(channel)

Offline recognition for English

You can use Riva ASR in either streaming mode or offline mode. In streaming mode, a continuous stream of audio is captured and recognized, producing a stream of transcribed text. In offline mode, an audio clip of a set length is transcribed to text.
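Streaming recognition is not covered in this tutorial, but the main difference on the client side is that the audio is sent in small chunks as it becomes available, rather than as one buffer. A minimal sketch of such a chunking generator (the 4096-byte chunk size is an arbitrary choice for illustration, not a Riva requirement):

```python
def audio_chunks(content: bytes, chunk_size: int = 4096):
    """Yield successive fixed-size chunks of an audio buffer.

    In streaming mode, each chunk would be wrapped in a streaming
    request and sent to the server as it is produced.
    """
    for start in range(0, len(content), chunk_size):
        yield content[start:start + chunk_size]

# Example: a 10000-byte buffer splits into 4096 + 4096 + 1808 bytes.
chunks = list(audio_chunks(b"\x00" * 10000))
print([len(c) for c in chunks])  # [4096, 4096, 1808]
```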
Let’s look at an example showing offline ASR API usage for English:

Make a gRPC request to the Riva Speech API server

The Riva ASR APIs support single-channel audio in .wav files with pulse-code modulation (PCM) encoding, as well as A-law (.alaw), mu-law (.mulaw), and FLAC (.flac) encodings.

Now, let’s make a gRPC request to the Riva Speech server for ASR with a sample .wav file in offline mode. Start by loading the audio.

# This example uses a .wav file with LINEAR_PCM encoding.
# read in an audio file from local disk
path = "./audio_samples/en-US_sample.wav"
with io.open(path, 'rb') as fh:
    content = fh.read()
ipd.Audio(path)
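It can be useful to inspect a clip’s properties before sending it to the server. This sketch uses Python’s standard `wave` module on a synthetic mono, 16 kHz clip generated in memory, so it runs without the sample file; with the sample audio you would pass its path to `wave.open` instead:

```python
import io
import wave

# Build a short synthetic mono, 16-bit, 16 kHz WAV clip in memory.
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(16000)    # 16 kHz
    w.writeframes(b"\x00\x00" * 16000)  # one second of silence

# Read the header back to inspect the clip's properties.
buf.seek(0)
with wave.open(buf, 'rb') as w:
    channels = w.getnchannels()
    rate = w.getframerate()
    duration = w.getnframes() / w.getframerate()

print("channels:", channels)        # 1
print("sample rate:", rate)         # 16000
print("duration (s):", duration)    # 1.0
```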

Next, create an audio RecognizeRequest object, setting the configuration parameters as required.

# Set up an offline/batch recognition request
req = rasr.RecognizeRequest()
req.audio = content                                   # raw bytes
#req.config.encoding = ra.AudioEncoding.LINEAR_PCM    # Audio encoding can be detected from wav
#req.config.sample_rate_hertz = 0                     # Sample rate can be detected from wav and resampled if needed
req.config.language_code = "en-US"                    # Language code of the audio clip
req.config.max_alternatives = 1                       # How many top-N hypotheses to return
req.config.enable_automatic_punctuation = True        # Add punctuation when end of VAD detected
req.config.audio_channel_count = 1                    # Mono channel

Finally, submit the request to the server.

response = riva_asr.Recognize(req)
asr_best_transcript = response.results[0].alternatives[0].transcript
print("ASR Transcript:", asr_best_transcript)

print("\n\nFull Response Message:")
print(response)
ASR Transcript: What is natural language processing? 


Full Response Message:
results {
  alternatives {
    transcript: "What is natural language processing? "
    confidence: 1.0
  }
  channel_tag: 1
  audio_processed: 4.1519999504089355
}
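As the printed message shows, the response holds a list of results, each with ranked alternatives. A small helper that walks this structure defensively can avoid index errors on empty responses; it is sketched here against a plain dictionary that mirrors the message fields, so it runs without a server:

```python
def best_transcript(response):
    """Return the top-ranked transcript, or None if the response is empty."""
    results = response.get("results", [])
    if not results:
        return None
    alternatives = results[0].get("alternatives", [])
    if not alternatives:
        return None
    return alternatives[0]["transcript"].strip()

# A dictionary mimicking the fields of the response printed above.
mock_response = {
    "results": [{
        "alternatives": [{
            "transcript": "What is natural language processing? ",
            "confidence": 1.0,
        }],
        "channel_tag": 1,
    }]
}
print(best_transcript(mock_response))  # What is natural language processing?
```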

Understanding ASR API parameters

Riva ASR supports a number of options while making a transcription request to the gRPC endpoint, as shown in the previous section. Let’s learn more about these parameters:

  • encoding - Type of audio encoding of the input audio file; one of LINEAR_PCM, FLAC, MULAW, or ALAW. It can be detected automatically from the audio file.

  • sample_rate_hertz - Sample rate of the input audio. It can be detected automatically from the .wav file, and the audio is resampled if needed.

  • language_code - Language of the input audio. “en-US” represents English (US). Other options include es-US, de-DE, ru-RU, and zh-CN. We will explore ASR for non-English languages in the next section.

  • enable_automatic_punctuation - Adds punctuation to the transcript when the end of a voice activity detection (VAD) segment is reached.

  • audio_channel_count - Number of audio channels. Typical microphones have 1 audio channel.
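To keep these options in one place, a small helper can assemble and sanity-check the configuration before it is copied onto the request. This is a plain-Python sketch, not part of the Riva client library; the supported values are taken from the list above:

```python
# Values supported per the parameter descriptions above.
SUPPORTED_ENCODINGS = {"LINEAR_PCM", "FLAC", "MULAW", "ALAW"}
SUPPORTED_LANGUAGES = {"en-US", "es-US", "de-DE", "ru-RU", "zh-CN"}

def make_asr_config(language_code,
                    encoding="LINEAR_PCM",
                    max_alternatives=1,
                    enable_automatic_punctuation=True,
                    audio_channel_count=1):
    """Validate and bundle ASR options; fields mirror the request config."""
    if encoding not in SUPPORTED_ENCODINGS:
        raise ValueError(f"unsupported encoding: {encoding}")
    if language_code not in SUPPORTED_LANGUAGES:
        raise ValueError(f"unsupported language code: {language_code}")
    return {
        "encoding": encoding,
        "language_code": language_code,
        "max_alternatives": max_alternatives,
        "enable_automatic_punctuation": enable_automatic_punctuation,
        "audio_channel_count": audio_channel_count,
    }

config = make_asr_config("en-US")
print(config["language_code"])  # en-US
```

Each key in the returned dictionary corresponds to one of the `req.config` fields set earlier.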

Offline recognition for non-English languages - Spanish example

In the previous section, we went through the Riva API usage and understood the different parameters of the ASR API. Now, let’s look at using the ASR APIs for non-English languages, like Spanish, in offline mode.

Requirements and Setup for Spanish ASR:

The requirements and setup steps for non-English ASR (in this case, Spanish ASR) are almost the same as for English ASR. The only difference is that before running inference on the Spanish audio, we first need to deploy the Spanish ASR pipeline on the Riva Speech Skills server.

Note: The Riva Speech Skills server Quick Start Guide, which we followed in the Requirements and setup section above for English ASR, deploys only English models by default.

  1. Start the Riva Speech Skills server, with the Spanish ASR pipeline.
    1.1. Navigate to the Quick Start Guide folder. You downloaded this folder in the Requirements and setup section above.

    1.2. Run bash riva_stop.sh to shut down the running Riva Speech Skills server. If Riva Speech Skills server is not currently running, you can skip this step.

    1.3. Update the config.sh file: Update the language_code=("en-US") line to include the Spanish model according to the instructions above this line in the config.sh script.

    1.4. Rerun bash riva_init.sh to download and initialize the Spanish models and pipeline.

    1.5. Rerun bash riva_start.sh to restart the Riva Speech Skills server.

Make a gRPC request to the Riva Speech API server

Let’s make a gRPC request to the Riva Speech server for ASR with a sample Spanish .wav file in offline mode.

Like before, start by loading the audio.

# This example uses a .wav file with LINEAR_PCM encoding.
# read in an audio file from local disk
path = "audio_samples/es-US_sample.wav" #Link to the Spanish sample audio file
with io.open(path, 'rb') as fh:
    content = fh.read()
ipd.Audio(path)

As with English, create an audio RecognizeRequest object, setting the configuration parameters as required. Notice that we have updated language_code of the request configuration to the Spanish language code ("es-US").

# Set up an offline/batch recognition request
req = rasr.RecognizeRequest()
req.audio = content                                   # raw bytes
#req.config.encoding = ra.AudioEncoding.LINEAR_PCM    # Audio encoding can be detected from wav
#req.config.sample_rate_hertz = 0                     # Sample rate can be detected from wav and resampled if needed
req.config.language_code = "es-US"                    # Language code of the audio clip. Set to Spanish
req.config.max_alternatives = 1                       # How many top-N hypotheses to return
req.config.enable_automatic_punctuation = True        # Add punctuation when end of VAD detected
req.config.audio_channel_count = 1                    # Mono channel

Finally, submit the request to the server.

response = riva_asr.Recognize(req)
asr_best_transcript = response.results[0].alternatives[0].transcript
print("ASR Transcript:", asr_best_transcript)

print("\n\nFull Response Message:")
print(response)
ASR Transcript: Existen mutaciones que alteran los pigmentos de color basado en carotenoides, pero son raras. 


Full Response Message:
results {
  alternatives {
    transcript: "Existen mutaciones que alteran los pigmentos de color basado en carotenoides, pero son raras. "
    confidence: 1.0
  }
  channel_tag: 1
  audio_processed: 10.031999588012695
}

We can similarly run Riva ASR for German, Russian, and Mandarin by setting their corresponding language codes (de-DE, ru-RU, and zh-CN) in the request configuration. Ensure that these pipelines are deployed on the Riva Speech Skills server as instructed in the Requirements and Setup for Spanish ASR.
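The language codes mentioned in this tutorial can be collected in one place, which makes it easy to switch the request configuration between the deployed pipelines. A small sketch:

```python
# Language codes for the OOTB Riva ASR pipelines covered in this tutorial.
LANGUAGE_CODES = {
    "English (US)": "en-US",
    "Spanish (US)": "es-US",
    "German": "de-DE",
    "Russian": "ru-RU",
    "Mandarin": "zh-CN",
}

for name, code in LANGUAGE_CODES.items():
    print(f"{name}: set req.config.language_code = '{code}'")
```

Remember that each pipeline must be deployed on the server before a request with its language code will succeed.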

Go deeper into Riva capabilities

Now that you have a basic introduction to the Riva ASR APIs, you can try:

Additional Riva tutorials

Check out more Riva ASR (and TTS) tutorials here to learn how to use some of the advanced features of Riva ASR, including customizing ASR for your specific needs.

Sample applications

Riva comes with various sample applications. They demonstrate how to use the APIs to build applications such as a chatbot, a domain-specific speech recognition system, or a keyword (entity) recognition system, as well as how Riva scales out to handle massive numbers of simultaneous requests. Refer to SpeechSquad for more information.
Refer to the Sample Application section in the Riva developer documentation for more information.

Riva Text-To-Speech (TTS)

Riva’s TTS offering comes with two OOTB voices that can be used in streaming or batch inference modes. They can be easily deployed using the Riva Quick Start scripts. Follow this link to understand Riva’s TTS capabilities. Explore how to use Riva TTS APIs with the OOTB voices with this Riva TTS tutorial.

Additional resources

For more information about each of the APIs and their functionalities, refer to the documentation.