Support Matrix#
Models#
| Model Name | Model ID | Publisher |
|---|---|---|
| Eye Contact | maxine-eye-contact | NVIDIA |
Optimized Configurations#
| GPU | Precision |
|---|---|
| T4 | FP16 |
| A2, A10, A16, A40 | FP16 |
| L4, L40 | FP16 |
| NVIDIA RTX PRO 6000 Blackwell Server Edition | FP16 |
The Eye Contact NIM is compatible with professional and consumer GPUs that have Tensor Cores and are based on the following NVIDIA architectures: Blackwell, Ada, Ampere, and Turing. RTX-based GPUs are also supported.
The Eye Contact NIM uses NVENC/NVDEC hardware acceleration for video encoding and decoding.
GPUs without NVENC/NVDEC hardware, including the A100, H100, and B100, are not supported.
Some GPUs allow only a limited number of concurrent NVENC sessions; on those GPUs, the NIM can process at most that many inputs concurrently.
Some GPUs support only certain YUV formats in H.264 reading and writing.
For details, refer to the Video Encode and Decode GPU Support Matrix.
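The per-GPU session cap can be respected at the application layer by gating inputs before they reach the NIM. The following is a minimal sketch of that pattern, not part of the NIM itself; the cap value and the worker function are illustrative assumptions (consult the Video Encode and Decode GPU Support Matrix for the real limit on a given GPU):

```python
import threading

# Assumed session cap for illustration only; look up the actual limit for
# your GPU in the Video Encode and Decode GPU Support Matrix.
MAX_NVENC_SESSIONS = 3

# Each concurrent input must hold one slot while it is being processed.
nvenc_slots = threading.Semaphore(MAX_NVENC_SESSIONS)

def process_stream(stream_id: int, results: list) -> None:
    """Hypothetical per-input worker: holds an NVENC slot for its duration."""
    with nvenc_slots:
        # Real work (decode -> inference -> encode) would happen here.
        results.append(stream_id)

def run_streams(n_streams: int) -> list:
    """Launch one worker per input; at most MAX_NVENC_SESSIONS run at once."""
    results: list = []
    threads = [threading.Thread(target=process_stream, args=(i, results))
               for i in range(n_streams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With this gate in place, submitting more inputs than the GPU has NVENC sessions simply queues the excess rather than failing at encode time.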
Software#
NVIDIA Driver and prerequisites#
The following are the NVIDIA driver requirements and other prerequisites for the Eye Contact NIM:
| Prerequisite | Version | Download and install steps |
|---|---|---|
| NVIDIA Graphic Drivers for Linux | 571.21+ | |
| Docker | latest | Ubuntu, CentOS, Debian: https://docs.docker.com/engine/install/ Rocky Linux: https://docs.rockylinux.org/gemstones/containers/docker/ |
| NVIDIA Container Toolkit | latest | Installation and configuration instructions: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html |
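Before launching the container, it can help to confirm that the installed driver meets the minimum from the table above. The sketch below assumes the dotted version string produced by `nvidia-smi --query-gpu=driver_version --format=csv,noheader`; the function names are illustrative, not part of any NVIDIA tooling:

```python
import subprocess

MIN_DRIVER = (571, 21)  # minimum driver version from the table above

def parse_version(text: str) -> tuple:
    """Turn a dotted driver version string such as '572.16' into an int tuple."""
    return tuple(int(part) for part in text.strip().split("."))

def driver_ok(version_text: str, minimum: tuple = MIN_DRIVER) -> bool:
    """Return True if the reported driver version meets the minimum."""
    return parse_version(version_text) >= minimum

def installed_driver_version() -> str:
    """Query the local driver via nvidia-smi (requires a working install)."""
    return subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        text=True,
    ).splitlines()[0]
```

For example, `driver_ok(installed_driver_version())` returns `False` on a host whose driver predates the required release, which is a cheap check to run before pulling the container image.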
The Eye Contact NIM uses the following NVIDIA software components:
| Component | Version |
|---|---|
| CUDA | 12.8.1 |
| cuDNN | 9.7.1.26 |
| TensorRT | 10.9.0.34 |
| Triton Inference Server | v2.50.0 |
| DeepStream | 7.1 |