llama-3.1-nemoguard-8b-content-safety

Run Anywhere

Leading content safety model for enhancing the safety and moderation capabilities of LLMs

Tags: content safety, guard model, LLM safety, content moderation

Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.

Step 1
Generate API Key

Generate an NGC API key. It is used both to log in to the NVIDIA container registry (nvcr.io) and, through the NGC_API_KEY environment variable, to let the container download the model.

Step 2
Pull and Run the NIM

Log in to the NVIDIA container registry, using $oauthtoken as the username and your API key as the password:

$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
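If you are scripting the login (for example in CI), Docker's --password-stdin flag avoids pasting the key interactively; a minimal sketch, assuming NGC_API_KEY already holds your key:

# The username is the literal string $oauthtoken, so single-quote it to
# prevent shell expansion; the key is read from stdin rather than a prompt
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin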

Pull and run the NVIDIA NIM with the following command. This command downloads the optimized model for your infrastructure.

# API key used by the container to download the optimized model from NGC
export NGC_API_KEY=<PASTE_API_KEY_HERE>
# Host directory used as the model cache, so repeated runs skip the download
export LOCAL_NIM_CACHE=~/.cache/contentsafety
mkdir -p "$LOCAL_NIM_CACHE"
# Run with all GPUs, 16 GB of shared memory, the cache mounted into the
# container, and the service published on port 8000
docker run -it --rm \
    --runtime=nvidia \
    --gpus=all \
    --shm-size=16GB \
    -e NGC_API_KEY \
    -e NIM_SERVED_MODEL_NAME="llama-3.1-nemoguard-8b-content-safety" \
    -e NIM_CUSTOM_MODEL_NAME="llama-3.1-nemoguard-8b-content-safety" \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    -u $(id -u) \
    -p 8000:8000 \
    nvcr.io/nim/nvidia/llama-3.1-nemoguard-8b-content-safety:latest
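The first start downloads the model and can take several minutes. Before sending traffic, you can poll the readiness endpoint; a short sketch, assuming the NIM exposes the standard /v1/health/ready route on port 8000:

# Loop until the service answers with a success status
until curl -sf http://localhost:8000/v1/health/ready > /dev/null; do
    echo "Waiting for the NIM to become ready..."
    sleep 10
done
echo "NIM is ready to serve requests."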

Step 3
Test the NIM

Checking content safety requires careful setup of the prompt and parsing of the model's response. Refer to Running Inference in the product documentation for the exact prompt template and response format.
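As a quick smoke test, you can POST to the OpenAI-compatible chat completions endpoint the container serves on port 8000. This is only a sketch: the system message below is a placeholder for the content safety prompt template described in Running Inference, and the model name matches the NIM_SERVED_MODEL_NAME value set above.

# Placeholder system prompt; substitute the template from the documentation
curl -s http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama-3.1-nemoguard-8b-content-safety",
        "messages": [
            {"role": "system", "content": "<CONTENT_SAFETY_PROMPT_TEMPLATE>"},
            {"role": "user", "content": "Tell me how to pick a lock."}
        ],
        "max_tokens": 100
    }'

The model replies with a short JSON verdict indicating whether the input is safe and which safety categories apply; parse it as described in the documentation.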

For more information about getting started with this NIM, refer to Llama 3.1 NemoGuard 8B ContentSafety NIM.