# Support Matrix
## Models
| Model Name | Model ID | Publisher |
|---|---|---|
| LipSync | lipsync | NVIDIA |
## Optimized Configurations
| Server GPU | Precision |
|---|---|
| T4 | FP16 |
| A2, A10, A16, A40 | FP16 |
| L4, L40, L40s | FP16 |
| B40 | FP16 |
### Other architectures (Consumer RTX GPUs)
| Consumer GPU | Precision |
|---|---|
| RTX 4090 | FP16 |
| RTX 5090, 5080 | FP16 |
The LipSync NIM is compatible with professional and consumer GPUs that have Tensor Cores and are based on the NVIDIA Blackwell, Ada, Ampere, or Turing architectures, including RTX-based consumer GPUs.
The NIM requires NVENC/NVDEC hardware; GPUs without NVENC/NVDEC support, including the A100, H100, and B100, are therefore not supported. For details about supported GPUs and H.264 YUV formats, refer to the Video Encode and Decode GPU Support Matrix.
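Before deploying, you can screen a host's GPUs against the unsupported families listed above. The sketch below is a hypothetical helper (not part of the NIM) that checks a GPU name string, as reported by `nvidia-smi --query-gpu=name --format=csv,noheader`, against the A100/H100/B100 families that lack NVENC/NVDEC:

```python
# Hypothetical pre-deployment check: the support matrix above states that
# A100, H100, and B100 products lack NVENC/NVDEC and are not supported.
GPUS_WITHOUT_NVENC_NVDEC = ("A100", "H100", "B100")

def has_nvenc_nvdec(gpu_name: str) -> bool:
    """Return False when the GPU name matches a family without NVENC/NVDEC.

    This is a name-based heuristic; the authoritative source is the
    NVIDIA Video Encode and Decode GPU Support Matrix.
    """
    name = gpu_name.upper()
    return not any(family in name for family in GPUS_WITHOUT_NVENC_NVDEC)
```

For example, `has_nvenc_nvdec("NVIDIA L40S")` is `True`, while `has_nvenc_nvdec("NVIDIA A100-SXM4-80GB")` is `False`.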
## Software
### NVIDIA Driver and Prerequisites
NVIDIA driver requirements and other prerequisites for the LipSync NIM:

| Prerequisite | Version | Download and install steps |
|---|---|---|
| NVIDIA Graphics Drivers for Linux | 571.21+ | |
| Docker | latest | Ubuntu, CentOS, and Debian: https://docs.docker.com/engine/install/; Rocky Linux: https://docs.rockylinux.org/gemstones/containers/docker/ |
| NVIDIA Container Toolkit | latest | Installation and configuration instructions: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html |
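As a quick sanity check against the driver requirement in the table, the following sketch compares the installed driver version (queried via `nvidia-smi`, which requires an NVIDIA GPU and driver to be present) against the 571.21 minimum. The helper names are illustrative, not part of any NVIDIA tooling:

```python
import subprocess

MIN_DRIVER = (571, 21)  # minimum driver version from the prerequisites table

def parse_version(text: str) -> tuple[int, ...]:
    """Parse a dotted driver version string such as '572.16' into a tuple."""
    return tuple(int(part) for part in text.strip().split("."))

def driver_ok(version: str, minimum: tuple[int, ...] = MIN_DRIVER) -> bool:
    """True when the given driver version meets the minimum requirement."""
    return parse_version(version) >= minimum

def installed_driver_version() -> str:
    """Query the local driver version via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()[0].strip()
```

To verify that Docker and the NVIDIA Container Toolkit are wired together, a GPU-enabled test container (for example, `docker run --rm --gpus all` with a CUDA base image running `nvidia-smi`) is a common follow-up check.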
The LipSync NIM uses the following NVIDIA software platforms:

| Component | Version |
|---|---|
| CUDA | 12.8.1 |
| cuDNN | 9.7.1.26 |
| TensorRT | 10.9.0.34 |
| Triton Inference Server | v2.56.0 |
| DeepStream | 8.0 |