
nvidia/llama-3.1-nemoguard-8b-topic-control
RUN ANYWHERE
A topic control model that keeps conversations focused on approved topics and steers them away from inappropriate content.
By running the commands below, you accept the NVIDIA AI Enterprise Terms of Use and the NVIDIA Community Models License.
Pull and run nvidia/llama-3.1-nemoguard-8b-topic-control using Docker (this downloads the full model and runs it in your local environment).
$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
Pull and run the NVIDIA NIM with the following commands, which download the model optimized for your infrastructure and start the service.
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/llama-nemoguard-topiccontrol
mkdir -p "$LOCAL_NIM_CACHE"
docker run -it --rm \
  --runtime=nvidia \
  --gpus=all \
  --shm-size=16GB \
  -e NGC_API_KEY \
  -e NIM_SERVED_MODEL_NAME="llama-3.1-nemoguard-8b-topic-control" \
  -e NIM_CUSTOM_MODEL_NAME="llama-3.1-nemoguard-8b-topic-control" \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8000:8000 \
  nvcr.io/nim/nvidia/llama-3.1-nemoguard-8b-topic-control:latest
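Once the container starts, the NIM serves an OpenAI-compatible HTTP API on port 8000. The first launch may take several minutes while the model downloads; before sending requests, you can poll the standard NIM readiness endpoint (a minimal check, assuming the default port mapping above):

$ curl http://localhost:8000/v1/health/ready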
For more information about getting started with this NIM, refer to Llama 3.1 NemoGuard 8B TopicControl NIM.
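As a quick smoke test once the service reports ready, you can send a request through the chat completions endpoint. The system prompt below (the topic rules) and the example messages are illustrative assumptions; refer to the NIM documentation linked above for the exact prompt format the model expects. The "model" field must match the NIM_SERVED_MODEL_NAME set when starting the container.

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-nemoguard-8b-topic-control",
    "messages": [
      {"role": "system", "content": "You are a customer support assistant for a software company. Answer only questions about the company and its software products; treat anything else as off-topic."},
      {"role": "user", "content": "Can you give me some stock tips?"}
    ],
    "max_tokens": 32
  }'

Rather than a conversational answer, the model is designed to return a short classification of the user message against the topic rules (e.g., "off-topic").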