Getting Started#

Prerequisites#

To ensure that you have the supported hardware and software stack, check the Support Matrix.

NGC Authentication#

Generate an API Key#

LipSync NIM is available only through the AI for Media Private Access Program. Joining the Private Access Program gives you an NGC API key with the permissions required for this NIM.

You can generate a key at https://org.ngc.nvidia.com/setup/api-keys after joining the Private Access Program.

When creating an NGC API Personal key, ensure that at least NGC Catalog is selected from the Services Included dropdown. You can include more services if this key is to be reused for other purposes.

Note

Personal keys allow you to configure an expiration date, revoke or delete the key using an action button, and rotate the key as needed. For more information about key types, refer to NGC API Keys in the NGC User Guide.

Export the NGC API Key#

Pass the API key to the docker run command in the next section as the NGC_API_KEY environment variable so that the NIM can download the appropriate models and resources at startup.

If you are not familiar with how to create the NGC_API_KEY environment variable, the simplest way is to export it in your terminal:

export NGC_API_KEY=<value>

Run one of the following commands to make the key available at startup:

# If using bash
echo "export NGC_API_KEY=<value>" >> ~/.bashrc

# If using zsh
echo "export NGC_API_KEY=<value>" >> ~/.zshrc

Note

Other, more secure options include saving the value in a file, so that you can retrieve it with cat $NGC_API_KEY_FILE, or using a password manager.
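As a hedged sketch of the file-based option mentioned above: store the key in a file readable only by your user, then load it into the environment on demand. The file path below is an assumption for illustration.

```shell
# Restrict new files to owner-only permissions, then save the key.
# The path ~/.ngc_api_key is an assumption; use any protected location.
umask 077
printf '%s' '<value>' > "$HOME/.ngc_api_key"

# Retrieve the key from the file when needed.
export NGC_API_KEY_FILE="$HOME/.ngc_api_key"
export NGC_API_KEY="$(cat "$NGC_API_KEY_FILE")"
```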

Docker Login to NGC#

To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry using the following command:

echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

Use $oauthtoken as the username and NGC_API_KEY as the password. The $oauthtoken username is a special name that indicates that you will authenticate with an API key rather than a username and password.

Launching the NIM Container#

The following commands launch the LipSync NIM container with the gRPC service. (For a list of parameters, see Runtime Parameters for the Container.) First, choose a manifest profile ID based on your target architecture (see Model Manifest Profiles):

# Choose manifest profile ID based on target architecture.
export MANIFEST_PROFILE_ID=<enter_valid_manifest_profile_id>

Then run the NIM launch command:

docker run -it --rm --name=lipsync-nim \
  --runtime=nvidia \
  --gpus all \
  --shm-size=8GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_MANIFEST_PROFILE=$MANIFEST_PROFILE_ID \
  -e NIM_MAX_CONCURRENCY_PER_GPU=1 \
  -e NIM_HTTP_API_PORT=8000 \
  -e NIM_GRPC_API_PORT=8001 \
  -p 8000:8000 \
  -p 8001:8001 \
  nvcr.io/nim/nvidia/lipsync:latest

Note

The flag --gpus all assigns all available GPUs to the NIM container. To assign specific GPUs to the NIM container (when multiple GPUs are available on your machine), use --gpus '"device=0,1,2..."'.

Model Manifest Profiles#

The following table lists manifest profile IDs that you can specify in MANIFEST_PROFILE_ID.

| GPU Architecture (compute capability) | Manifest Profile ID |
|---|---|
| Blackwell (cc 12.0) | 8243988b24d6b1f51848e66402ae499c4f2243f3c8f7b20a1b02d213ee322ee3 |
| Ada (cc 8.9) | a8c0a6beef4177db4d6820dac81a454030135b0803e81d4e71b7f332d2ba2010 |
| Ampere (cc 8.6) | f0297c0909751eeece2212aa90e0434fd843d35b0d86748ad37e10d1506b006a |
| Turing (cc 7.5) | 775bc1995a45ddee953a3aa6c5bb036605b0b05a216d8bb6f7572d0f945d4b88 |
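For example, on an Ada-generation GPU (compute capability 8.9), you would export the matching ID from the table above:

```shell
# Select the Ada (cc 8.9) manifest profile ID from the table above.
export MANIFEST_PROFILE_ID=a8c0a6beef4177db4d6820dac81a454030135b0803e81d4e71b7f332d2ba2010
```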

Note

MANIFEST_PROFILE_ID is an optional parameter. If the manifest profile ID is not supplied, the NIM automatically selects a matching profile ID based on the target hardware architecture.

However, if MANIFEST_PROFILE_ID is used, ensure that the associated GPU architecture is compatible with the target hardware. If an incorrect manifest profile ID is used, a deserialization error occurs on inference.

If the NIM launch is successful, you see output similar to the following.

I1027 22:31:44.952125 123 grpc_server.cc:2560] "Started GRPCInferenceService at 127.0.0.1:9001"
I1027 22:31:44.952247 123 http_server.cc:4755] "Started HTTPService at 127.0.0.1:9000"
I1027 22:31:44.993329 123 http_server.cc:358] "Started Metrics Service at 127.0.0.1:9002"
Triton server is ready
[INFO AI4M BASE LOGGER 2025-10-27 22:31:46.097 PID:207] Using threading mode for gRPC service
[INFO AI4M BASE LOGGER 2025-10-27 22:31:46.097 PID:207] Starting threading gRPC service with 1 threads
[INFO AI4M BASE LOGGER 2025-10-27 22:31:46.105 PID:207] Using Insecure Server Credentials
[INFO AI4M BASE LOGGER 2025-10-27 22:31:46.107 PID:207] Listening to 0.0.0.0:8001

Note

By default, the LipSync NIM gRPC service is hosted on port 8001. You must use this port for inferencing requests. The port is configurable via the NIM_GRPC_API_PORT environment variable.
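As a hedged readiness check (not part of the official workflow), you can wait for the gRPC port to start accepting TCP connections before sending inference requests. The host, port, and retry budget below are assumptions based on the launch command above.

```shell
# Poll the gRPC port (up to ~10 s) until it accepts a TCP connection.
HOST=localhost
PORT=8001
for attempt in $(seq 1 10); do
  # Bash's /dev/tcp pseudo-device attempts a TCP connection on open.
  if (exec 3<>"/dev/tcp/${HOST}/${PORT}") 2>/dev/null; then
    echo "Port ${PORT} is accepting connections"
    break
  fi
  sleep 1
done
```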

Environment Variables#

The following table describes the environment variables that can be passed into a NIM as a -e argument added to a docker run command:

| ENV | Required? | Default | Notes |
|---|---|---|---|
| NGC_API_KEY | Yes | None | Set this variable to the value of your personal NGC API key. |
| NIM_CACHE_PATH | Optional | /opt/nim/.cache | Location (in container) where the container caches model artifacts. |
| NIM_GRPC_API_PORT | No | 8001 | Publish the NIM gRPC service on the prescribed port inside the container. Adjust the port passed to the -p/--publish flag of docker run accordingly (for example, -p $NIM_GRPC_API_PORT:$NIM_GRPC_API_PORT). The left side of the colon is the host port and does not need to match $NIM_GRPC_API_PORT; the right side is the port inside the container, which must match NIM_GRPC_API_PORT (or 8001 if not set). Supported endpoints are /v1/license (returns the license information), /v1/metadata (returns metadata including asset, license, model, and version information), and /v1/metrics (exposes Prometheus metrics via an ASGI app endpoint). |
| NIM_MANIFEST_PROFILE | Optional | None | Set this to the manifest profile ID that matches your GPU so that the correct model type is downloaded. For more information, refer to Model Manifest Profiles. |
| NIM_SSL_MODE | No | disabled | Set SSL security on the endpoints to tls or mtls. Defaults to an unsecured endpoint. |
| NIM_SSL_CA_PATH | No | None | Path to the CA root certificate inside the NIM. Required only when NIM_SSL_MODE is mtls. For example, if the SSL certificates are mounted at /opt/nim/crt in the NIM, NIM_SSL_CA_PATH can be set to /opt/nim/crt/ssl_ca_cert.pem. |
| NIM_SSL_CERT_PATH | No | None | Path to the server's public SSL certificate inside the NIM. Required only when an SSL mode is enabled. For example, if the SSL certificates are mounted at /opt/nim/crt in the NIM, NIM_SSL_CERT_PATH can be set to /opt/nim/crt/ssl_cert_server.pem. |
| NIM_SSL_KEY_PATH | No | None | Path to the server's private key inside the NIM. Required only when an SSL mode is enabled. For example, if the SSL certificates are mounted at /opt/nim/crt in the NIM, NIM_SSL_KEY_PATH can be set to /opt/nim/crt/ssl_key_server.pem. |
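To illustrate the SSL variables above, the following launch sketch enables mutual TLS. The in-container certificate paths reuse the examples from the table; the host certificate directory (/path/to/certs) and the read-only mount are assumptions, not part of the documented workflow.

```shell
# Sketch only: the host path /path/to/certs is an assumption; mount your
# own certificate directory at /opt/nim/crt inside the container.
docker run -it --rm --name=lipsync-nim \
  --runtime=nvidia \
  --gpus all \
  --shm-size=8GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_SSL_MODE=mtls \
  -e NIM_SSL_CA_PATH=/opt/nim/crt/ssl_ca_cert.pem \
  -e NIM_SSL_CERT_PATH=/opt/nim/crt/ssl_cert_server.pem \
  -e NIM_SSL_KEY_PATH=/opt/nim/crt/ssl_key_server.pem \
  -v /path/to/certs:/opt/nim/crt:ro \
  -p 8000:8000 \
  -p 8001:8001 \
  nvcr.io/nim/nvidia/lipsync:latest
```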

Runtime Parameters for the Container#

| Flags | Description |
|---|---|
| -it | --interactive + --tty (see docker container run). |
| --rm | Delete the container after it stops (see docker container run). |
| --name=container-name | Give a name to the NIM container. Use any preferred value. |
| --runtime=nvidia | Ensure NVIDIA drivers are accessible in the container. |
| --gpus all | Expose all NVIDIA GPUs inside the container. On a host with multiple GPUs, you can instead specify one or more particular GPUs. For more information about mounting specific GPUs, see GPU Enumeration. |
| --shm-size=8GB | Allocate host memory for multi-process communication. |
| -e NIM_MAX_CONCURRENCY_PER_GPU | Number of concurrent inference requests supported by the NIM server per GPU (default: 1). Higher values consume more GPU memory and can cause out-of-memory errors. |
| -e NGC_API_KEY=$NGC_API_KEY | Provide the container with the token necessary to download the appropriate models and resources from NGC. See NGC Authentication. |
| -p <host_port>:<container_port> | Ports published by the container are directly accessible on the host port. |
| -e LIPSYNC_DEBUG_MODE | Enable debug mode, which overlays the frame number, lipsync effect status, and bounding boxes on each frame. Set to 1 to enable (default: 0). |

Stopping the Container#

Use the following commands to stop and remove the container, where $CONTAINER_NAME is the name assigned with the --name flag (lipsync-nim in the launch command above).

docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME