
nvidia / llama-3.1-nemotron-nano-8b-v1
Leading reasoning and agentic AI accuracy model for PC and edge.

Run Anywhere
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
Generate API Key
An NGC API key is required to download the NIM container from the NVIDIA container registry; generate one and keep it at hand for the steps below.
Pull and Run the NIM
Log in to the NVIDIA container registry (nvcr.io), using $oauthtoken as the username and your API key as the password:

$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
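For scripted or CI setups, the same login can be done non-interactively with Docker's --password-stdin flag. A minimal sketch, assuming NGC_API_KEY is already exported as in the next step (note the single quotes, which keep the shell from expanding $oauthtoken):

echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin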
Pull and run the NVIDIA NIM with the command below. This will download the optimized model for your infrastructure.
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
docker run -it --rm \
    --gpus all \
    --shm-size=16GB \
    -e NGC_API_KEY \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    -u $(id -u) \
    -p 8000:8000 \
    nvcr.io/nim/nvidia/llama-3.1-nemotron-nano-8b-v1:latest
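The first start downloads the model, so the container may take several minutes before it accepts requests. NIM containers expose health endpoints you can poll; a minimal sketch that waits on the readiness route (assuming the default /v1/health/ready endpoint and port 8000 as mapped above):

# Poll the readiness endpoint until the NIM reports it can serve requests.
until curl -sf http://0.0.0.0:8000/v1/health/ready > /dev/null; do
  echo "Waiting for NIM to become ready..."
  sleep 10
done
echo "NIM is ready."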
Test the NIM
You can now make a local API call using this curl command:
curl -X 'POST' \
  'http://0.0.0.0:8000/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "nvidia/llama-3.1-nemotron-nano-8b-v1",
    "messages": [{"role":"user", "content":"Explain how a transformer neural network works."}],
    "max_tokens": 64
  }'
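Because the NIM exposes an OpenAI-compatible API, you can also include a system message in the request. Nemotron reasoning models are typically switched between reasoning and non-reasoning modes via a system prompt such as "detailed thinking on" or "detailed thinking off"; treat the exact wording as an assumption and confirm it against the model card. A sketch:

curl -X 'POST' \
  'http://0.0.0.0:8000/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "nvidia/llama-3.1-nemotron-nano-8b-v1",
    "messages": [
      {"role":"system", "content":"detailed thinking on"},
      {"role":"user", "content":"Explain how a transformer neural network works."}
    ],
    "max_tokens": 1024
  }'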
For more details on getting started with this NIM, visit the NVIDIA NIM Docs.