Llama 3.1 8B Instruct is a state-of-the-art model for language understanding, reasoning, and text generation.
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
In Windows, open the Task Manager, select the Performance tab, and click CPU. Check whether Virtualization is enabled. If it is disabled, enable it in your system's BIOS/UEFI settings before continuing.
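You can also check from the command line. On most Windows builds, systeminfo reports the same firmware flag (run from Command Prompt; the line may be absent if Hyper-V is already running, in which case systeminfo reports that a hypervisor has been detected):

systeminfo | findstr /C:"Virtualization Enabled In Firmware"

If the output ends in Yes, virtualization is already enabled.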
Install WSL2. For additional instructions, refer to the Microsoft WSL documentation.
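If WSL2 is not yet installed, a minimal setup from an elevated PowerShell or Command Prompt looks like this on recent Windows 10/11 builds:

wsl --install
wsl --set-default-version 2

A reboot may be required after the first command.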
Once installed, open the NVIDIA-Workbench WSL2 distro using the following command in the Windows terminal.
wsl -d NVIDIA-Workbench
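Once inside the distro, you can optionally confirm that the GPU is visible to WSL2 before pulling anything; this assumes a current NVIDIA driver is installed on the Windows host:

nvidia-smi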
Export your NGC API key as an environment variable:
export NGC_API_KEY=<PASTE_API_KEY_HERE>
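As a quick sanity check that the key landed in the current shell, without echoing the secret itself, you can print only its length:

echo "NGC_API_KEY is ${#NGC_API_KEY} characters long"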
Log in to NVIDIA NGC so that you can pull the NIM container. The literal username $oauthtoken tells NGC to authenticate with the API key:
echo "$NGC_API_KEY" | podman login nvcr.io --username '$oauthtoken' --password-stdin
Pull and run the NVIDIA NIM with the command below. This downloads the model optimized for your infrastructure.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod -R a+w "$LOCAL_NIM_CACHE"
podman run -it --rm \
    --device nvidia.com/gpu=all \
    --shm-size=8GB \
    -e NGC_API_KEY="$NGC_API_KEY" \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    -e NIM_RELAX_MEM_CONSTRAINTS=1 \
    -u "$(id -u)" \
    -p 8000:8000 \
    nvcr.io/nim/meta/llama-3.1-8b-instruct:1.8.0-RTX
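The first launch downloads several gigabytes of model weights, so the service takes a while to come up. Because the container above runs in the foreground, open a second WSL terminal and poll the readiness endpoint; this sketch assumes the standard NIM /v1/health/ready route:

until curl -sf http://localhost:8000/v1/health/ready > /dev/null; do
    echo "Waiting for NIM to become ready..."
    sleep 10
done
echo "NIM is ready."

Subsequent launches reuse the weights cached in $LOCAL_NIM_CACHE and start much faster.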
You can now make a local API call using this curl command:
curl -X 'POST' \
    'http://0.0.0.0:8000/v1/chat/completions' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role":"user", "content":"Hello! How are you?"}],
        "max_tokens": 64
    }'
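The endpoint follows the OpenAI chat completions schema, so streaming also works; setting "stream": true returns tokens as server-sent events, and curl's -N flag disables output buffering so tokens appear as they arrive:

curl -N -X 'POST' \
    'http://0.0.0.0:8000/v1/chat/completions' \
    -H 'accept: text/event-stream' \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role":"user", "content":"Write a haiku about GPUs."}],
        "max_tokens": 64,
        "stream": true
    }'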
For more details on getting started with this NIM, visit the NVIDIA NIM Docs.