
meta / llama-3.1-8b-instruct
RUN ANYWHERE
Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
Prerequisites
- NVIDIA GeForce RTX 4080 or above (see supported GPUs)
- Install the latest NVIDIA GPU Driver on Windows (Version 570+)
- Ensure virtualization is enabled in the system BIOS. In Windows, open Task Manager, select the Performance tab, and check the Virtualization field. If it shows Disabled, enable it in the BIOS before continuing.
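Before proceeding, it can help to confirm that the installed driver meets the 570+ requirement. A quick check, assuming the driver installer placed nvidia-smi on your PATH:

# Print GPU name and driver version; expect 570 or newer.
nvidia-smi --query-gpu=name,driver_version --format=csv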
Experience via App
- ChatRTX
- AnythingLLM
- AI Toolkit for VSCode
Open the Windows Subsystem for Linux 2 (WSL2) Distro
Install WSL2. For additional instructions refer to the documentation.
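If you are unsure whether WSL2 is already set up, the following command lists the registered distros and their WSL versions (a quick sanity check; output may vary by Windows build):

wsl -l -v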
Once installed, open the NVIDIA-Workbench WSL2 distro using the following command in the Windows terminal.
wsl -d NVIDIA-Workbench
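Inside the distro, you can verify that the GPU is visible to WSL2 before starting the container. This is a sanity check, assuming the Windows driver's WSL integration exposes nvidia-smi inside the distro:

# Run inside the NVIDIA-Workbench distro; this should list your RTX GPU.
nvidia-smi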
Run the Container
First, log in to the NVIDIA container registry (nvcr.io) with your NGC API key. Use the literal string $oauthtoken as the username and paste your API key as the password:

$ podman login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
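If you prefer to script the login rather than type at the prompts, podman login can also read the password from stdin (a minimal sketch, assuming NGC_API_KEY has been exported as in the next step; the single quotes keep $oauthtoken literal):

echo "$NGC_API_KEY" | podman login nvcr.io -u '$oauthtoken' --password-stdin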
Pull and run the NVIDIA NIM with the command below. This will download the optimized model for your infrastructure.
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod -R a+w "$LOCAL_NIM_CACHE"
podman run -it --rm \
  --device nvidia.com/gpu=all \
  --shm-size=8GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -e NIM_RELAX_MEM_CONSTRAINTS=1 \
  -u $(id -u) \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.1-8b-instruct:1.8.0-RTX
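The first launch downloads the model, so the service can take several minutes to become ready. One way to poll for readiness from another terminal, assuming the NIM exposes the standard /v1/health/ready endpoint on port 8000:

# Returns HTTP 200 once the model is loaded and ready to serve.
curl -s -o /dev/null -w "%{http_code}\n" http://0.0.0.0:8000/v1/health/ready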
Test the NIM
You can now make a local API call using this curl command:
curl -X 'POST' \
  'http://0.0.0.0:8000/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "meta/llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello! How are you?"}],
    "max_tokens": 64
  }'
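Because the endpoint is OpenAI-compatible, you can also request a streamed response; a sketch, assuming the server honors the standard stream flag on chat completions:

# Tokens arrive incrementally as server-sent events.
curl -X 'POST' \
  'http://0.0.0.0:8000/v1/chat/completions' \
  -H 'accept: text/event-stream' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "meta/llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Write a haiku about GPUs."}],
    "max_tokens": 64,
    "stream": true
  }'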
For more details on getting started with this NIM, visit the NVIDIA NIM Docs.