Advanced Usage#
This guide covers advanced configuration options for the NVIDIA Synthetic Video Detector NIM.
Model Behavior and Classification Threshold#
The Synthetic Video Detector is designed to identify synthetic (AI-generated) videos. The default classification threshold is set conservatively to minimize the chance of missing any synthetic video. You can adjust the classification threshold to suit your use case.
The following table describes the behavior at different thresholds:
| Threshold | Behavior | Recommended Use Case |
|---|---|---|
| 0.3 (default) | Conservative: prioritizes catching all synthetic content. Might produce more false positives on authentic videos. | High-stakes screening in which missing a synthetic video is unacceptable. |
| 0.5 (balanced) | Balanced: gives equal weight to correctly identifying both synthetic and authentic content. | General-purpose classification where both false positives and false negatives are minimized. |
Start with the default threshold for screening workflows, and raise it toward 0.5 when false positives on authentic content are costly.
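To illustrate what the threshold controls, the sketch below applies a chosen threshold to a detector score on the client side. The `classify` helper and the score values are illustrative only, not part of the NIM API; in practice the score would come from the detector's response payload.

```shell
# Hypothetical helper: label a detector score against a chosen threshold.
# Scores at or above the threshold are treated as synthetic.
classify() {
  # $1 = score returned by the detector, $2 = classification threshold
  awk -v s="$1" -v t="$2" 'BEGIN { print (s + 0 >= t + 0 ? "synthetic" : "authentic") }'
}

classify 0.72 0.5   # → synthetic (exceeds even the balanced threshold)
classify 0.40 0.5   # → authentic at the balanced threshold...
classify 0.40 0.3   # → synthetic at the conservative default
```

The same score of 0.40 flips between labels depending on the threshold, which is why the conservative default catches more synthetic content at the cost of more false positives.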
Reference Accuracy#
The following table summarizes detection accuracy at the default and balanced thresholds, measured on an internal evaluation dataset that includes content from COSMOS, Sora, Midjourney, Veo, OmniAvatar, OVI, NVIDIA LipSync, NVIDIA Video Live Portrait, and NVIDIA Speech Live Portrait generators.
| Threshold | Synthetic Content Accuracy | Real Content Accuracy |
|---|---|---|
| 0.3 (default) | 96% | 70% |
| 0.5 (balanced) | 85% | 82% |
At the default threshold of 0.3, the model achieves high accuracy on synthetic content (96%) at the cost of lower accuracy on real content (70%), reflecting the conservative design that prioritizes catching AI-generated videos. Raising the threshold to 0.5 yields a more balanced trade-off, with 85% accuracy on synthetic content and 82% on real content.
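One way to compare the two operating points is balanced accuracy, the mean of the per-class accuracies. The helper below is an illustrative sketch using the figures from the table above, not an official metric reported by the NIM:

```shell
# Balanced accuracy = mean of synthetic-content and real-content accuracy (in %).
balanced_accuracy() {
  awk -v syn="$1" -v real="$2" 'BEGIN { printf "%.1f\n", (syn + real) / 2 }'
}

balanced_accuracy 96 70   # default threshold (0.3) → 83.0
balanced_accuracy 85 82   # balanced threshold (0.5) → 83.5
```

Both thresholds land near the same balanced accuracy; the choice mostly shifts where the errors fall, not how many there are overall.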
Note
These numbers are based on internal benchmarks and can vary depending on video quality, compression, and the generative model used to produce the content.
Model Caching#
When the container launches for the first time, it downloads the required models from NGC. To avoid downloading the models on subsequent runs, you can cache them locally by using a cache directory:
```shell
# Create the cache directory on the host machine
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"

# Choose the manifest profile ID based on the target architecture.
export MANIFEST_PROFILE_ID=<enter_valid_manifest_profile_id>

# Run the container with the cache directory mounted
docker run -it --rm --name=synthetic-video-detector-nim \
  --runtime=nvidia \
  --gpus all \
  -e NIM_MANIFEST_PROFILE=$MANIFEST_PROFILE_ID \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_HTTP_API_PORT=8000 \
  -p 8000:8000 \
  -p 8001:8001 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  nvcr.io/nim/nvidia/synthetic-video-detector:latest
```
For more information about MANIFEST_PROFILE_ID, refer to Model Manifest Profiles.
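As a quick sanity check between runs, you can confirm that the host cache directory exists and is writable before relaunching. This sketch assumes the same `LOCAL_NIM_CACHE` path used above:

```shell
# Confirm the host cache directory exists and is writable; if it is not,
# the container re-downloads the models on every launch.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"

cache_status() {
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "cache ready"
  else
    echo "cache unavailable"
  fi
}

cache_status "$LOCAL_NIM_CACHE"   # → cache ready
```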
SSL Enablement#
The Synthetic Video Detector NIM supports SSL/TLS to ensure secure communication between clients and the server by encrypting data in transit.
To enable SSL, provide the path to the SSL certificate and key files in the container. The following example shows how to do this:
```shell
export NGC_API_KEY=<add-your-api-key>

# Host directory that contains the SSL certificate and key files
SSL_CERT=path/to/ssl_key

docker run -it --rm --name=synthetic-video-detector-nim \
  --runtime=nvidia \
  --gpus all \
  --shm-size=16GB \
  -v "$SSL_CERT:/opt/nim/crt/:ro" \
  -e NGC_API_KEY=$NGC_API_KEY \
  -p 8000:8000 \
  -p 8001:8001 \
  -e NIM_SSL_MODE="mtls" \
  -e NIM_SSL_CA_CERTS_PATH="/opt/nim/crt/ssl_ca.pem" \
  -e NIM_SSL_CERT_PATH="/opt/nim/crt/ssl_cert_server.pem" \
  -e NIM_SSL_KEY_PATH="/opt/nim/crt/ssl_key_server.pem" \
  nvcr.io/nim/nvidia/synthetic-video-detector:latest
```
`NIM_SSL_MODE` can be set to `mtls`, `tls`, or `disabled`. With `mtls`, the container uses mutual TLS authentication, requiring both the client and the server to present certificates. With `tls`, only the server presents a certificate.
Note
Verify the permissions of the SSL certificate and key files on the host machine. The container cannot access the files if they are not readable by the user running the container.
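The permission check described in the note can be scripted before launch. The sketch below uses empty placeholder files in a temporary directory to stand in for real certificates; substitute your actual certificate directory in practice:

```shell
# Illustrative permission check for the mounted certificate directory.
# Dummy files stand in for real certificates here.
CERT_DIR=$(mktemp -d)
touch "$CERT_DIR/ssl_ca.pem" "$CERT_DIR/ssl_cert_server.pem" "$CERT_DIR/ssl_key_server.pem"
chmod 644 "$CERT_DIR/ssl_ca.pem" "$CERT_DIR/ssl_cert_server.pem"
chmod 600 "$CERT_DIR/ssl_key_server.pem"   # keep the private key restricted

check_readable() {
  for f in "$@"; do
    if [ ! -r "$f" ]; then
      echo "unreadable: $f"
      return 1
    fi
  done
  echo "all certificate files readable"
}

check_readable "$CERT_DIR"/*.pem   # → all certificate files readable
```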
For more information, refer to Environment Variables.