Conformer CTC#

For client installation and sample audio instructions, refer to the Deploy and Run ASR Models page.

Deploy the NIM Container#

export CONTAINER_ID=riva-asr
export NIM_TAGS_SELECTOR="name=conformer-ctc-riva-es-us,mode=all"

docker run -it --rm --name=$CONTAINER_ID \
  --runtime=nvidia \
  --gpus '"device=0"' \
  --shm-size=8GB \
  -e NGC_API_KEY \
  -e NIM_HTTP_API_PORT=9000 \
  -e NIM_GRPC_API_PORT=50051 \
  -p 9000:9000 \
  -p 50051:50051 \
  -e NIM_TAGS_SELECTOR \
  nvcr.io/nim/nvidia/$CONTAINER_ID:latest
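Model download and initialization can take a while after the container starts. A minimal readiness probe, assuming the standard NIM health endpoint `/v1/health/ready` on the mapped HTTP port:

```python
# Poll the NIM readiness endpoint on the HTTP port (9000 as mapped above).
# /v1/health/ready is the standard NIM health path; adjust host/port if
# you mapped them differently.
from urllib.request import urlopen
from urllib.error import URLError

def is_ready(url: str, timeout: float = 2.0) -> bool:
    """Return True once the service answers the readiness endpoint with 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    print(is_ready("http://0.0.0.0:9000/v1/health/ready"))
```

Run this in a loop (or use `curl http://0.0.0.0:9000/v1/health/ready`) and wait for it to report ready before sending inference requests.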

For additional profile options, refer to the ASR support matrix.

Run Inference#

Copy a sample audio file from the NIM container or use your own.

docker cp $CONTAINER_ID:/opt/riva/wav/es-US_sample.wav .

Streaming#

Ensure that the NIM is deployed with a streaming-mode model. You can list the available models with:

python3 python-clients/scripts/asr/transcribe_file.py \
  --server 0.0.0.0:50051 \
  --list-models

The input speech file is streamed to the service chunk-by-chunk.

python3 python-clients/scripts/asr/transcribe_file.py \
  --server 0.0.0.0:50051 \
  --language-code es-US --automatic-punctuation \
  --input-file es-US_sample.wav
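Conceptually, the streaming client reads the file in fixed-size chunks and sends each one as a separate message on the open stream. A simplified sketch of that read loop (the chunk size here is illustrative, not the client's actual default):

```python
# Illustration of chunk-by-chunk reading, as a streaming ASR client does.
# The 3200-byte chunk size is only an example value.
from typing import Iterator

def audio_chunks(path: str, chunk_bytes: int = 3200) -> Iterator[bytes]:
    """Yield raw byte chunks of the file until it is exhausted."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:
                break
            yield chunk

# In the real client, each yielded chunk is wrapped in a streaming
# recognition request and sent on the open gRPC stream, and partial
# transcripts arrive while later chunks are still being sent.
```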

Offline#

Ensure that the NIM is deployed with an offline-mode model. You can list the available models with:

python3 python-clients/scripts/asr/transcribe_file_offline.py \
  --server 0.0.0.0:50051 \
  --list-models

The input speech file is sent to the service in one shot.

python3 python-clients/scripts/asr/transcribe_file_offline.py \
  --server 0.0.0.0:50051 \
  --language-code es-US --automatic-punctuation \
  --input-file es-US_sample.wav
HTTP API#

Alternatively, send the audio file to the HTTP endpoint in a single request:

curl -s http://0.0.0.0:9000/v1/audio/transcriptions -F language=es \
  -F file="@es-US_sample.wav"
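The curl invocation above posts a `multipart/form-data` body with a `language` field and the audio file. As a rough sketch of what that request body looks like, assembled by hand with the standard library (the helper name is hypothetical; any HTTP client builds the same shape):

```python
# Build the multipart/form-data body for POST /v1/audio/transcriptions,
# mirroring curl's -F language=... -F file=@... flags.
import uuid

def build_transcription_body(filename: str, audio: bytes, language: str = "es"):
    """Return (content_type, body) for the transcription request."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="language"\r\n\r\n'
        f"{language}\r\n"
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: audio/wav\r\n\r\n"
    ).encode()
    body = head + audio + f"\r\n--{boundary}--\r\n".encode()
    content_type = f"multipart/form-data; boundary={boundary}"
    return content_type, body
```

To actually send it, pass `body` and a `Content-Type: <content_type>` header to any HTTP client, e.g. `urllib.request.Request(url, data=body, headers=..., method="POST")`.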
Realtime API#

The realtime client streams audio to the service over the HTTP port:

python3 python-clients/scripts/asr/realtime_asr_client.py \
  --server 0.0.0.0:9000 \
  --language-code es-US --automatic-punctuation \
  --input-file es-US_sample.wav