Robust image classification model for detecting and managing AI-generated content.
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
Get your credentials for downloading the models from Hive and export them as environment variables:
```bash
export NIM_REPOSITORY_OVERRIDE="s3://..."
export AWS_REGION="..."
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
```
Pull and run the NVIDIA NIM with the commands below.
```bash
# Create the cache directory on the host machine.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"

# Run the container with the cache directory as a volume mount.
docker run -it --rm --name=nim-server \
  --runtime=nvidia \
  --gpus='"device=0"' \
  -e NIM_REPOSITORY_OVERRIDE \
  -e AWS_REGION \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e NIM_HTTP_API_PORT=8003 \
  -p 8003:8003 \
  -p 8002:8002 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache/" \
  nvcr.io/nim/hive/ai-generated-image-detection:1.0.0
```
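The container needs some time to download the model and start serving before it will accept requests. NIM microservices typically expose a readiness probe at `/v1/health/ready`; assuming this NIM follows that convention, a small poll like the sketch below waits until the server is ready (`wait_ready` is a hypothetical helper, not part of the NIM itself):

```shell
# Poll a readiness URL until it returns HTTP 200, or give up after N tries.
wait_ready() {
  local url=$1 tries=$2
  for _ in $(seq 1 "$tries"); do
    # --fail makes curl exit non-zero on HTTP error responses.
    if curl --silent --fail "$url" > /dev/null; then
      return 0
    fi
    sleep 2
  done
  return 1
}

# Example: wait_ready "http://localhost:8003/v1/health/ready" 30 || exit 1
```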
You can now make a local API call using this curl command:
```bash
invoke_url="http://localhost:8003/v1/infer"
input_image_path="input.jpg"

# Download an example image.
curl https://assets.ngc.nvidia.com/products/api-catalog/sdxl/sdxl1.jpg > "$input_image_path"

# Base64-encode the image, stripping the line wraps GNU base64 adds by default.
image_b64=$(base64 "$input_image_path" | tr -d '\n')

echo '{
  "input": ["data:image/jpeg;base64,'${image_b64}'"]
}' > payload.json

curl "$invoke_url" \
  -H "Content-Type: application/json" \
  -d @payload.json
```
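The base64 step deserves care: GNU `base64` wraps its output at 76 columns by default, and the stray newlines can corrupt the JSON string. A reusable helper (hypothetical, shown only as a sketch) packages the payload construction with the wrapping stripped, which also works with BSD/macOS `base64`:

```shell
# Build the {"input": ["data:<mime>;base64,<...>"]} payload for one image.
make_payload() {
  local img=$1 mime=$2
  # tr removes the line wraps GNU base64 inserts by default,
  # keeping the data URI on a single line.
  printf '{"input": ["data:%s;base64,%s"]}' \
    "$mime" "$(base64 < "$img" | tr -d '\n')"
}

# Usage: make_payload input.jpg image/jpeg > payload.json
```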
For more details on getting started with this NIM, visit the NVIDIA NIM Docs.