
NVIDIA Synthetic Video Detector is an AI-powered microservice for detecting AI-generated (synthetic) videos.
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
NVIDIA Synthetic Video Detector NIM is available only through the AI for Media Private Access Program. Joining the Private Access Program gives you an NGC API key with the permissions required for this NIM.
NVIDIA Synthetic Video Detector NIM uses gRPC APIs for inference requests.
An NGC API key is required to download the appropriate models and resources when starting the NIM. Pass the value of the key to the docker run command in the next section via the NGC_API_KEY environment variable.
If you are not familiar with creating the NGC_API_KEY environment variable, the simplest way is to export it in your terminal:
export NGC_API_KEY=<PASTE_API_KEY_HERE>
Run one of the following commands to make the key available in future shell sessions:
# If using bash
echo "export NGC_API_KEY=<value>" >> ~/.bashrc
# If using zsh
echo "export NGC_API_KEY=<value>" >> ~/.zshrc
Other, more secure options include saving the value in a file, so that you can retrieve it with cat $NGC_API_KEY_FILE, or using a password manager.
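As a sketch of the file-based approach, the key can be read from a file at process startup. The helper below is illustrative, not part of the NIM; the file path is an assumption:

```python
import os
from pathlib import Path

def load_ngc_api_key(path: str) -> str:
    """Read the NGC API key from a file, stripping surrounding whitespace,
    and export it so child processes (e.g. docker) can see it."""
    key = Path(path).read_text().strip()
    os.environ["NGC_API_KEY"] = key
    return key
```

This keeps the key out of your shell history and rc files while still making it available to the docker run command below.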
To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry with the following command:
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
The following command launches a container with the gRPC service.
docker run -it --rm --name=synthetic-video-detector-nim \
--runtime=nvidia \
--gpus all \
--shm-size=8GB \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_HTTP_API_PORT=8000 \
-p 8000:8000 \
-p 8001:8001 \
nvcr.io/nim/nvidia/synthetic-video-detector:latest
Note that the --gpus all flag assigns all available GPUs to the Docker container.
To assign specific GPUs to the container (when multiple GPUs are available on your machine), use --gpus '"device=0,1,2..."' with the desired device indices.
If the command runs successfully, the service prints log output similar to the following.
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] SUCCESS: DINOv2+v3 TensorRT inference engine initialized!
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] Service initialization complete
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] Using threading mode for gRPC service
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.779 PID:1] Starting threading gRPC service with 1 threads
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.787 PID:1] Using Insecure Server Credentials
[INFO AI4M BASE LOGGER 2025-12-08 06:49:53.789 PID:1] Listening to 0.0.0.0:8001
By default, the Synthetic Video Detector gRPC service listens on port 8001. Use this port for inference requests.
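Before sending requests, you can confirm the published port is reachable from the host. This is a generic TCP reachability check, not part of the NIM client; the host and port are taken from the docker run command above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the gRPC port published by the container
# port_open("127.0.0.1", 8001)
```

A successful connection only shows the port is open; the "Listening to 0.0.0.0:8001" log line above is the definitive signal that the gRPC service itself is ready.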
A sample client script is provided in the NVIDIA Maxine NIM clients GitHub repository. Use the following instructions to send requests to the running container with this script.
Download the Synthetic Video Detector Python client code by cloning the NIM Client Repository:
git clone https://github.com/NVIDIA-Maxine/nim-clients.git
cd nim-clients/synthetic-video-detector
Install the dependencies for the Synthetic Video Detector gRPC client:
sudo apt-get install python3-pip
pip install -r requirements.txt
Go to the scripts directory:
cd scripts
Run the following command to send a gRPC request:
python synthetic-video-detector.py --target <target_ip:port> --video-input <input file path> --save-csv
Example command with sample input:
python synthetic-video-detector.py --target 127.0.0.1:8001 --video-input ../assets/fake_sample_video.mp4 --save-csv
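If you want to invoke the client from another Python program, one minimal sketch is to assemble the same command line and run it with subprocess. The helper name is hypothetical; the script name, flags, and sample values come from the commands above:

```python
import subprocess
from pathlib import Path

def build_client_cmd(target: str, video_input: str, save_csv: bool = True) -> list[str]:
    """Assemble the synthetic-video-detector.py invocation shown above."""
    cmd = ["python", "synthetic-video-detector.py",
           "--target", target,
           "--video-input", video_input]
    if save_csv:
        cmd.append("--save-csv")
    return cmd

cmd = build_client_cmd("127.0.0.1:8001", "../assets/fake_sample_video.mp4")
# Only run when executed from nim-clients/synthetic-video-detector/scripts,
# where the client script actually lives.
if Path("synthetic-video-detector.py").exists():
    subprocess.run(cmd, check=True)
```

check=True makes the wrapper raise if the client exits non-zero, which surfaces connection or input-file errors immediately.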
For more details on getting started with this NIM, including its configuration parameters, visit the NVIDIA Synthetic Video Detector NIM Docs.