Leading content safety model for enhancing the safety and moderation capabilities of LLMs
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
Log in to the NVIDIA container registry (nvcr.io), using `$oauthtoken` as the username and your NGC API key as the password:

```bash
$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
```
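If you prefer not to paste the key interactively, Docker's standard `--password-stdin` flag can read it from an environment variable instead. This is a minimal sketch, assuming `NGC_API_KEY` already holds your key (it is exported in the next step):

```bash
# Non-interactive login: the username is the literal string $oauthtoken,
# so it is single-quoted to prevent shell expansion; the API key is read from stdin.
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```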
Pull and run the NVIDIA NIM with the following command, which downloads the model optimized for your infrastructure and caches it locally.
```bash
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/contentsafety
mkdir -p "$LOCAL_NIM_CACHE"

docker run -it --rm \
  --runtime=nvidia \
  --gpus=all \
  --shm-size=16GB \
  -e NGC_API_KEY \
  -e NIM_SERVED_MODEL_NAME="llama-3.1-nemoguard-8b-content-safety" \
  -e NIM_CUSTOM_MODEL_NAME="llama-3.1-nemoguard-8b-content-safety" \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8000:8000 \
  nvcr.io/nim/nvidia/llama-3.1-nemoguard-8b-content-safety:latest
```
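Once the container logs report that the service is ready, you can verify it is reachable. This is a minimal smoke test, assuming the standard OpenAI-compatible endpoints that NIM microservices expose on port 8000; the content-safety model expects the task-specific prompt template described in the NIM documentation, so this request only confirms connectivity:

```bash
# Check that the microservice is ready and the model is registered.
curl -s http://0.0.0.0:8000/v1/health/ready
curl -s http://0.0.0.0:8000/v1/models

# Send a trivial chat request to the served model name configured above.
curl -s http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3.1-nemoguard-8b-content-safety",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 32
      }'
```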
For more information about getting started with this NIM, refer to the Llama 3.1 NemoGuard 8B ContentSafety NIM documentation.