Getting Started#
Prerequisites#
To ensure that you have the supported hardware and software stack, check the Support Matrix.

- NVIDIA GPU with Tensor Cores and supported NVDEC/NVENC (for hardware decoding).
- Recent NVIDIA driver and NVIDIA Container Toolkit correctly installed.
- Docker Engine installed and configured to access GPUs.
NGC Authentication#
Generate an API Key#
NVIDIA Synthetic Video Detector NIM is available only through the AI for Media Private Access Program. Joining the Private Access Program gives you an NGC API key with the permissions required for this NIM.
You can generate a key at NGC API Keys after joining the Private Access Program.
When creating an NGC API Personal key, ensure that at least NGC Catalog is selected from the Services Included dropdown. You can include more services if this key is to be reused for other purposes.
Note
Personal keys allow you to configure an expiration date, revoke or delete the key using an action button, and rotate the key as needed. For more information about key types, refer to NGC API Keys in the NGC User Guide.
Export the API Key#
Pass the value of the API key to the docker run command in the next section as the NGC_API_KEY environment variable to download the appropriate models and resources when starting the NIM.
If you are not familiar with how to create the NGC_API_KEY environment variable, the simplest way is to export it in your terminal:
export NGC_API_KEY=<value>
Run one of the following commands to make the key available at startup:
# If using bash
echo "export NGC_API_KEY=<value>" >> ~/.bashrc
# If using zsh
echo "export NGC_API_KEY=<value>" >> ~/.zshrc
Note
Other, more secure options include saving the value in a file so that you can retrieve it with cat $NGC_API_KEY_FILE, or using a password manager.
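As a sketch of the file-based approach (the path $HOME/.ngc/api_key is an arbitrary choice for illustration, not a location the NIM expects):

```shell
# Hypothetical location for the key file; any private path works.
NGC_API_KEY_FILE="$HOME/.ngc/api_key"

# Store the key once, readable only by the current user.
mkdir -p "$(dirname "$NGC_API_KEY_FILE")"
printf '%s' '<value>' > "$NGC_API_KEY_FILE"
chmod 600 "$NGC_API_KEY_FILE"

# Retrieve it in each shell that launches the NIM.
export NGC_API_KEY="$(cat "$NGC_API_KEY_FILE")"
```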
Docker Login to NGC#
To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry with the following command:
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
Use $oauthtoken as the username and the value of NGC_API_KEY as the password. The $oauthtoken username is a special name that indicates that you will authenticate with an API key rather than a username and password.
Launching the Container (Synthetic Video Detector NIM)#
The following command launches the Synthetic Video Detector NIM container with the gRPC service. (For a list of parameters, see Runtime Parameters for the Container.)
# Choose manifest profile id based on target architecture.
export MANIFEST_PROFILE_ID=<enter_valid_manifest_profile_id>
# Run the container
docker run -it --rm --name=synthetic-video-detector-nim \
--runtime=nvidia \
--gpus all \
--shm-size=8GB \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_MANIFEST_PROFILE=$MANIFEST_PROFILE_ID \
-p 8000:8000 \
-p 8001:8001 \
nvcr.io/nim/nvidia/synthetic-video-detector:latest
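Before running the launch command, a small pre-flight check can catch a missing key early. This helper is a convenience sketch of ours, not part of the NIM; it only mirrors the requirements stated above (NGC_API_KEY is required, MANIFEST_PROFILE_ID is optional):

```shell
# Sketch: verify launch inputs before invoking docker run.
# NGC_API_KEY is required; MANIFEST_PROFILE_ID is optional because the
# NIM auto-selects a profile when NIM_MANIFEST_PROFILE is not supplied.
check_launch_env() {
  if [ -z "${NGC_API_KEY:-}" ]; then
    echo "error: NGC_API_KEY is not set" >&2
    return 1
  fi
  if [ -z "${MANIFEST_PROFILE_ID:-}" ]; then
    echo "note: MANIFEST_PROFILE_ID unset; the NIM will auto-select a profile"
  fi
  return 0
}
```

A typical use is `check_launch_env && docker run ...`, so the container only starts when the key is present.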
You can also cache the model manifest locally. For more details, refer to Model Caching.
Note
The flag --gpus all assigns all available GPUs to the Docker container.
To assign specific GPUs to the Docker container (in case of multiple GPUs available in your machine), use --gpus '"device=0,1,2..."'.
Model Manifest Profiles#
The following table lists manifest profile IDs that you can specify in MANIFEST_PROFILE_ID.
| GPU Architecture (compute capability) | Manifest profile ID |
|---|---|
| Blackwell (cc 12.0) | 3ce493f31eb1718ca928ae45a6995fc585f7571065106db509e7fce4b6f6d3aa |
| Ada (cc 8.9) | 6abf19cf36a0d5498b77c466780ac80c8224e641457f4f33a7df694810e2d746 |
| Ampere (cc 8.6) | 15d466e43b11fa523e0662603f09bce6e5c7fc92fba33ea5c6122b98ec546bd8 |
| Turing (cc 7.5) | ae4879839cd92b9ca86791d2455b3ce72261f485f00a89e2056e11c3e69d4bc3 |
Note
MANIFEST_PROFILE_ID is an optional parameter. If the manifest profile ID is not supplied, the NIM automatically selects a matching profile ID based on the target hardware architecture.
However, if MANIFEST_PROFILE_ID is used, ensure that the associated GPU architecture is compatible with the target hardware. If an incorrect manifest profile ID is used, a deserialization error occurs on inference.
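The selection above can also be scripted. The sketch below maps the compute capability reported by nvidia-smi (the compute_cap query field is available in recent driver releases) to the profile IDs from the table; the helper function is ours for illustration, not part of the NIM:

```shell
# Sketch: pick a MANIFEST_PROFILE_ID from the table above based on the
# GPU compute capability. Unknown values return an empty string so the
# NIM falls back to automatic profile selection.
profile_for_compute_cap() {
  case "$1" in
    12.0) echo 3ce493f31eb1718ca928ae45a6995fc585f7571065106db509e7fce4b6f6d3aa ;;
    8.9)  echo 6abf19cf36a0d5498b77c466780ac80c8224e641457f4f33a7df694810e2d746 ;;
    8.6)  echo 15d466e43b11fa523e0662603f09bce6e5c7fc92fba33ea5c6122b98ec546bd8 ;;
    7.5)  echo ae4879839cd92b9ca86791d2455b3ce72261f485f00a89e2056e11c3e69d4bc3 ;;
    *)    echo "" ;;  # unknown: let the NIM auto-select
  esac
}

# Example usage (requires an NVIDIA driver on the host):
# cc="$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n1)"
# export MANIFEST_PROFILE_ID="$(profile_for_compute_cap "$cc")"
```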
If the docker run command runs successfully, the container logs output similar to the following.
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] SUCCESS: DINOv2+v3 TensorRT inference engine initialized!
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] Service initialization complete
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] Using threading mode for gRPC service
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] Starting threading gRPC service with 1 threads
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.787 PID:1] Using Insecure Server Credentials
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.789 PID:1] Listening to 0.0.0.0:8001
Note
By default, the Synthetic Video Detector gRPC service is hosted on port 8001. You must use this port for inferencing requests.
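To confirm the service is up before sending requests, a minimal TCP probe of port 8001 can help. This sketch uses bash's built-in /dev/tcp pseudo-device (no extra tools required); it only checks that the port accepts connections, not that the gRPC service responds, and the helper name is our own:

```shell
# Sketch: check whether a TCP port is accepting connections.
# Opening /dev/tcp/<host>/<port> in a subshell fails (nonzero status)
# if nothing is listening.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Example: probe the gRPC port published by the container.
if port_open localhost 8001; then
  echo "gRPC port 8001 is reachable"
else
  echo "gRPC port 8001 is not reachable"
fi
```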
Environment Variables#
The following table describes the environment variables that can be passed to the NIM as -e arguments added to the docker run command:

| ENV | Required? | Default | Notes |
|---|---|---|---|
| NGC_API_KEY | Yes | None | You must set this variable to the value of your personal NGC API key. |
| | Optional | | Location (in container) where the container caches model artifacts. |
| | No | | Publish the NIM service to the prescribed port inside the container. Be sure to adjust the port passed to the |
| NIM_MANIFEST_PROFILE | Optional | None | You must set this model profile to be able to download the specific model type supported on your GPU. For more about |
| | No | disabled | Set SSL security on the endpoints to |
| | No | None | Set the path to the CA root certificate inside the NIM. This is required only when |
| | No | None | Set the path to the server’s public SSL certificate inside the NIM. This is required only when an SSL mode is enabled. For example, if the SSL certificates are mounted at |
| | No | None | Set the path to the server’s private key inside the NIM. This is required only when an SSL mode is enabled. For example, if the SSL certificates are mounted at |
Runtime Parameters for the Container#
| Flags | Description |
|---|---|
| -it | Start the container with an interactive terminal (see docker container run in Docker Docs). |
| --rm | Delete the container after it stops (see docker container run in Docker Docs). |
| --name | Give a name to the NIM container. Use any preferred value. |
| --runtime=nvidia | Ensure NVIDIA drivers are accessible in the container. |
| --gpus all | Expose NVIDIA GPUs inside the container. If you are running on a host with multiple GPUs, you need to specify which GPU to use. You can also specify multiple GPUs. For more information about mounting specific GPUs, refer to GPU Enumeration. |
| --shm-size | Allocate host memory for multi-process communication. |
| -e NGC_API_KEY | Provide the container with the token necessary to download adequate models and resources from NGC. Refer to NGC Authentication. |
| | Environment variable to configure the maximum file size (in MB) supported by the server (default: 1024). |
| -p | Ports published by the container are directly accessible on the host port. |
Stopping the Container#
Use the following commands to stop and remove the container, where $CONTAINER_NAME is the name assigned with --name (synthetic-video-detector-nim in the launch command above).
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME