Performance#

Evaluation Process#

This section shows the latency and throughput numbers for streaming and offline configurations of the Riva ASR service on different GPUs. These numbers were captured after the pre-configured ASR pipelines from our Quick Start scripts were deployed. The Jasper, QuartzNet, Conformer, and Citrinet-1024 acoustic models were tested.

In streaming mode, the client and the server used audio chunks of the same duration (100 ms, 160 ms, or 800 ms, depending on the server configuration). Refer to the Results section for the chunk size used in each configuration.

The Riva streaming client riva_streaming_asr_client, provided in the Riva image, was used with the --simulate_realtime flag to simulate transcription from a microphone. Each stream performed three iterations over a sample audio file (1272-135031-0000.wav) from the LibriSpeech dev-clean dataset. The LibriSpeech datasets can be obtained from https://www.openslr.org/12.

The source code for the riva_streaming_asr_client can be obtained from https://github.com/nvidia-riva/cpp-clients.
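The pacing behavior behind --simulate_realtime can be sketched in a few lines: split the audio into fixed-duration chunks and wait one chunk duration between sends, so the server receives data no faster than a live microphone would produce it. This is only an illustration of the idea, not the client's C++ implementation; stream_in_realtime and its send callback are hypothetical names.

```python
import time
import wave

def stream_in_realtime(path, chunk_ms=160, send=print):
    """Emit fixed-duration chunks of a PCM WAV file, paced at real-time rate,
    mimicking what --simulate_realtime does for a live microphone feed."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        frames_per_chunk = rate * chunk_ms // 1000
        while True:
            chunk = wf.readframes(frames_per_chunk)
            if not chunk:
                break
            send(chunk)                  # the real client sends this over gRPC
            time.sleep(chunk_ms / 1000)  # pace chunks at real-time rate
```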

The command used to measure performance was:

riva_streaming_asr_client \
   --chunk_duration_ms=<chunk_duration> \
   --simulate_realtime=true \
   --automatic_punctuation=true \
   --num_parallel_requests=<num_streams> \
   --word_time_offsets=true \
   --print_transcripts=false \
   --interim_results=false \
   --num_iterations=<3*num_streams> \
   --audio_file=1272-135031-0000.wav \
   --output_filename=/tmp/output.json

The riva_streaming_asr_client returns the following latency measurements:

  • intermediate latency: latency of responses returned with is_final == false

  • final latency: latency of responses returned with is_final == true

  • latency: the overall latency of all returned responses. This is what is tabulated in the following tables.
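The split into these three buckets can be expressed as a short sketch. The (is_final, latency_ms) pair format below is a simplified stand-in for the client's actual response records, not its real output schema, and summarize_latencies is a hypothetical helper name.

```python
def summarize_latencies(responses):
    """Group per-response latencies the way riva_streaming_asr_client
    reports them. `responses` is a list of (is_final, latency_ms) pairs,
    a simplified stand-in for the client's real response records."""
    intermediate = [ms for is_final, ms in responses if not is_final]
    final = [ms for is_final, ms in responses if is_final]
    overall = [ms for _, ms in responses]
    avg = lambda xs: sum(xs) / len(xs) if xs else None
    return {
        "intermediate": avg(intermediate),  # responses with is_final == false
        "final": avg(final),                # responses with is_final == true
        "latency": avg(overall),            # all responses (the tabulated value)
    }
```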

Refer to the following diagram for a schematic representation of the different latencies measured by the Riva streaming ASR client.

Schematic Diagram of Latencies Measured by Riva Streaming ASR Client

In offline mode, the command used to measure maximum throughput was:

riva_asr_client \
   --automatic_punctuation=true \
   --num_parallel_requests=32 \
   --word_time_offsets=true \
   --print_transcripts=false \
   --num_iterations=96 \
   --audio_file=1272-135031-0000x5.wav \
   --output_filename=/tmp/output.json

where 1272-135031-0000x5.wav is the 1272-135031-0000.wav audio file concatenated five times. The source code for the riva_asr_client can be obtained from https://github.com/nvidia-riva/cpp-clients.
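A concatenated file like this can be produced with the Python standard library's wave module. This is a minimal sketch assuming an uncompressed PCM WAV; concat_wav is a hypothetical helper name, not part of the Riva tooling.

```python
import wave

def concat_wav(src, dst, times=5):
    """Write `dst` as `src` repeated `times` times, e.g. to build
    1272-135031-0000x5.wav from 1272-135031-0000.wav."""
    with wave.open(src, "rb") as r:
        params = r.getparams()
        frames = r.readframes(r.getnframes())
    with wave.open(dst, "wb") as w:
        w.setparams(params)  # the wave module fixes up nframes on close
        for _ in range(times):
            w.writeframes(frames)
```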

Results#

Latencies and throughput measurements for streaming and offline configurations are reported in the following tables. Throughput is measured in RTFX (duration of audio transcribed / computation time).
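Since RTFX is simply the duration of audio transcribed divided by the computation time, it can be computed directly. The rtfx helper below is illustrative, not part of the Riva clients.

```python
def rtfx(audio_seconds, compute_seconds):
    """Throughput in RTFX: seconds of audio transcribed per second of compute."""
    return audio_seconds / compute_seconds

# 32 streams, each transcribing 60 s of audio in 60 s of wall-clock time:
print(rtfx(32 * 60, 60))  # 32.0
```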

Note

Audio files were iterated once for Xavier AGX, Xavier NX, and Orin AGX, and three times for all other experiments.

Note

If the language model is none, inference was performed with a greedy decoder; if the language model is n-gram, a beam search decoder was used.

Note

The values in the tables are averages over 3 trials, rounded to the last significant digit according to the standard deviation over those trials. If the standard deviation is less than 0.001 of the average, it is treated as 0.001 of the average for rounding purposes.
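This rounding rule can be made concrete with a short sketch. round_by_std is an illustrative name; the floor-of-log10 step locates the decimal place of the standard deviation's leading significant digit.

```python
import math
import statistics

def round_by_std(trials):
    """Round the mean of repeated trials to the decimal place of the standard
    deviation's leading significant digit. A standard deviation below 0.001 of
    the mean is treated as exactly 0.001 of the mean."""
    mean = statistics.mean(trials)
    std = statistics.stdev(trials)
    if std < 0.001 * abs(mean):
        std = 0.001 * abs(mean)
    digits = -math.floor(math.log10(std))  # e.g. std 0.2 -> round to 1 decimal
    return round(mean, digits)
```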

For specifications of the hardware on which these measurements were collected, refer to the Hardware Specifications section. Note that:

  • results on AWS and GCP were computed with Riva 2.4.0

  • on-prem results on A40, T4, L40, and H100 GPUs were computed with Riva 2.9.0

  • all other on-prem results were computed with Riva 2.8.0

Cloud instance descriptions for AWS and GCP.

Chunk size (ms): 160
Maximum effective # of streams with n-gram language model: 180
Maximum effective # of streams without language model (greedy generation): 192

| Language model | # of streams | Latency avg (ms) | Latency p50 (ms) | Latency p90 (ms) | Latency p95 (ms) | Latency p99 (ms) | Throughput (RTFX) |
|---|---|---|---|---|---|---|---|
| n-gram | 1 | 13.9 | 13.57 | 13.9 | 14.2 | 28 | 1 |
| n-gram | 8 | 26.2 | 25.5 | 26 | 27 | 60 | 7.98 |
| n-gram | 16 | 37 | 35 | 45 | 45.4 | 90 | 15.93 |
| n-gram | 32 | 52 | 56 | 63 | 64 | 140 | 31.8 |
| n-gram | 48 | 62 | 65 | 70 | 72 | 150 | 47.6 |
| n-gram | 64 | 76 | 76 | 81 | 83 | 230 | 63.3 |
| none | 1 | 13.1 | 12.8 | 12.96 | 13.2 | 22 | 1 |
| none | 8 | 25 | 24.5 | 25 | 25 | 60 | 7.98 |
| none | 16 | 34 | 30 | 44 | 44 | 80 | 15.94 |
| none | 32 | 50 | 50 | 61 | 63 | 120 | 31.8 |
| none | 48 | 56 | 62 | 66 | 70 | 140 | 47.6 |
| none | 64 | 70 | 73 | 76 | 77 | 190 | 63.3 |

The following tables show the effect that the number of CPUs has on latency and throughput. Measurements were made on-prem with the English Conformer model.

Chunk size (ms): 160
Language model: n-gram
Maximum effective # of streams with n-gram language model: 176

| # of streams | Latency avg (ms) | Latency p50 (ms) | Latency p90 (ms) | Latency p95 (ms) | Latency p99 (ms) | Throughput (RTFX) |
|---|---|---|---|---|---|---|
| 1 | 14 | 13.6 | 14 | 14 | 28 | 1 |
| 8 | 26.2 | 25.5 | 26.3 | 26.7 | 61 | 7.98 |
| 16 | 37 | 33 | 45 | 45.7 | 90 | 15.93 |
| 32 | 53 | 54 | 63.5 | 64 | 150 | 31.8 |
| 48 | 63 | 66 | 71 | 74 | 190 | 47.6 |
| 64 | 75 | 77 | 84 | 87 | 260 | 63.3 |

On-Prem Hardware Specifications#

| Component | Attribute | Value |
|---|---|---|
| GPU | | NVIDIA DGX A100 40 GB |
| CPU | Model | AMD EPYC 7742 64-Core Processor |
| CPU | Thread(s) per core | 2 |
| CPU | Socket(s) | 2 |
| CPU | Core(s) per socket | 64 |
| CPU | NUMA node(s) | 8 |
| CPU | Frequency boost | enabled |
| CPU | Max MHz | 2250 |
| CPU | Min MHz | 1500 |
| RAM | Model | Micron DDR4 36ASF8G72PZ-3G2B2 3200 MHz |
| RAM | Configured Memory Speed | 2933 MT/s |
| RAM | Size | 32 x 64 GB (2048 GB total) |