Performance#

Evaluation Process#

This section shows the latency and throughput numbers for streaming and offline configurations of the Riva ASR service on different GPUs. These numbers were captured after the preconfigured ASR pipelines from our Quick Start scripts were deployed. The Jasper, QuartzNet, Conformer, and Citrinet-1024 acoustic models were tested.

In streaming mode, the client and the server used audio chunks of the same duration (100 ms, 160 ms, or 800 ms, depending on the server configuration). Refer to the Results section for the chunk size used in each configuration.

The Riva streaming client riva_streaming_asr_client, provided in the Riva image, was used with the --simulate_realtime flag to simulate transcription from a microphone. Each stream performed three iterations over a sample audio file (1272-135031-0000.wav) from the LibriSpeech dev-clean dataset. The LibriSpeech datasets can be obtained from https://www.openslr.org/12.

The source code for the riva_streaming_asr_client can be obtained from https://github.com/nvidia-riva/cpp-clients.

The command used to measure performance was:

riva_streaming_asr_client \
   --chunk_duration_ms=<chunk_duration> \
   --simulate_realtime=true \
   --automatic_punctuation=true \
   --num_parallel_requests=<num_streams> \
   --word_time_offsets=true \
   --print_transcripts=false \
   --interim_results=false \
   --num_iterations=<3*num_streams> \
   --audio_file=1272-135031-0000.wav \
   --output_filename=/tmp/output.json
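To sweep the stream counts reported in the Results section, the invocation above can be wrapped in a small script. The sketch below builds the argument list for each stream count; build_streaming_cmd is a hypothetical helper written for this example, not part of the Riva client.

```python
# Hypothetical helper: build the riva_streaming_asr_client argument list
# for a given chunk duration and stream count.
def build_streaming_cmd(chunk_duration_ms: int, num_streams: int,
                        audio_file: str = "1272-135031-0000.wav") -> list:
    return [
        "riva_streaming_asr_client",
        "--chunk_duration_ms={}".format(chunk_duration_ms),
        "--simulate_realtime=true",
        "--automatic_punctuation=true",
        "--num_parallel_requests={}".format(num_streams),
        "--word_time_offsets=true",
        "--print_transcripts=false",
        "--interim_results=false",
        "--num_iterations={}".format(3 * num_streams),  # three iterations per stream
        "--audio_file={}".format(audio_file),
        "--output_filename=/tmp/output.json",
    ]

# Sweep the stream counts used in the Results section:
for n in (1, 8, 16, 32, 48, 64):
    cmd = build_streaming_cmd(160, n)
    # subprocess.run(cmd, check=True)  # uncomment inside the Riva container
```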

The riva_streaming_asr_client returns the following latency measurements:

  • intermediate latency: latency of responses returned with is_final == false

  • final latency: latency of responses returned with is_final == true

  • latency: the overall latency of all returned responses. This is the quantity tabulated in the following tables.
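As a rough illustration of how per-response latencies roll up into the avg/p50/p90/p95/p99 columns reported below, here is a minimal sketch. The nearest-rank percentile method used here is an assumption for illustration, not necessarily what the client implements.

```python
import math

def latency_stats(latencies_ms):
    """Aggregate per-response latencies (ms) into avg and percentiles."""
    xs = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile: smallest value with at least p% of
        # observations at or below it.
        return xs[min(len(xs) - 1, math.ceil(p / 100 * len(xs)) - 1)]

    return {
        "avg": sum(xs) / len(xs),
        "p50": pct(50), "p90": pct(90), "p95": pct(95), "p99": pct(99),
    }

stats = latency_stats([9.5, 9.7, 9.9, 10.1, 10.2, 16.7])
```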

Refer to the following diagram for a schematic representation of the different latencies measured by the Riva streaming ASR client.

Schematic Diagram of Latencies Measured by Riva Streaming ASR Client

In offline mode, the command used to measure maximum throughput was:

riva_asr_client \
   --automatic_punctuation=true \
   --num_parallel_requests=32 \
   --word_time_offsets=true \
   --print_transcripts=false \
   --num_iterations=96 \
   --audio_file=1272-135031-0000x5.wav \
   --output_filename=/tmp/output.json

where 1272-135031-0000x5.wav is simply the 1272-135031-0000.wav audio file concatenated five times. The source code for the riva_asr_client can be obtained from https://github.com/nvidia-riva/cpp-clients.
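If you need to produce the concatenated file yourself, a minimal sketch using Python's standard wave module (assuming the source clip is a plain PCM WAV file):

```python
import wave

def concat_wav(src_path: str, dst_path: str, copies: int = 5) -> None:
    """Write dst_path as src_path repeated `copies` times (PCM WAV only)."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        for _ in range(copies):
            dst.writeframes(frames)

# concat_wav("1272-135031-0000.wav", "1272-135031-0000x5.wav", copies=5)
```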

Results#

Latencies and throughput measurements for streaming and offline configurations are reported in the following tables. Throughput is measured in RTFX (duration of audio transcribed / computation time).
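The RTFX calculation itself is a one-line ratio. For example, 640 seconds of audio transcribed in 10 seconds of wall-clock time gives an RTFX of 64:

```python
def rtfx(audio_seconds: float, compute_seconds: float) -> float:
    """RTFX = total duration of audio transcribed / computation time."""
    return audio_seconds / compute_seconds

# 64 real-time streams processed concurrently approach an RTFX of 64.
throughput = rtfx(640.0, 10.0)
```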

Note

If the language model is none, inference was performed with a greedy decoder; if the language model is n-gram, a beam search decoder was used.
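For intuition, greedy (best-path) CTC decoding takes the highest-probability token at each frame, collapses consecutive repeats, and drops the blank symbol; a beam search decoder instead keeps multiple hypotheses per frame and rescores them with the n-gram language model. The sketch below illustrates the general greedy CTC technique, not Riva's internal implementation.

```python
BLANK = "_"  # CTC blank symbol (placeholder for illustration)

def greedy_ctc_decode(frame_tokens):
    """Collapse per-frame argmax tokens: merge repeats, drop blanks."""
    out, prev = [], None
    for tok in frame_tokens:
        if tok != prev and tok != BLANK:
            out.append(tok)
        prev = tok
    return "".join(out)

# Blanks separate genuine double letters ("l", "_", "l" -> "ll"):
decoded = greedy_ctc_decode(["h", "_", "e", "l", "_", "l", "o"])
```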

For specifications of the hardware on which these measurements were collected, refer to the Hardware Specifications section.

Chunk size (ms): 160
Language model: n-gram
Maximum effective # of streams with n-gram language model: 247

| # of streams | Latency avg (ms) | Latency p50 (ms) | Latency p90 (ms) | Latency p95 (ms) | Latency p99 (ms) | Throughput (RTFX) |
|--------------|------------------|------------------|------------------|------------------|------------------|-------------------|
| 1            | 9.91             | 9.685            | 9.96             | 10.15            | 16.74            | 0.999460          |
| 8            | 14.48            | 13.9             | 14.95            | 15.5             | 33.8             | 7.9864            |
| 16           | 23.0             | 22.97            | 25.1             | 25.81            | 59.5             | 15.9493           |
| 32           | 34.9             | 33.9             | 36.3             | 37.4             | 100              | 31.814            |
| 48           | 46.2             | 44.5             | 48.8             | 50.0             | 142              | 47.623            |
| 64           | 55.4             | 53.0             | 58.1             | 60.8             | 202              | 63.35             |

Hardware Specifications#

GPU

  • NVIDIA DGX A100 40 GB

CPU

  • Model: AMD EPYC 7742 64-Core Processor
  • Thread(s) per core: 2
  • Socket(s): 2
  • Core(s) per socket: 64
  • NUMA node(s): 8
  • Frequency boost: enabled
  • CPU max MHz: 2250
  • CPU min MHz: 1500

RAM

  • Model: Micron DDR4 36ASF8G72PZ-3G2B2 3200MHz
  • Configured memory speed: 2933 MT/s
  • RAM size: 32x64GB (2048GB total)