Advanced language model for reasoning, code, and multilingual tasks; runs on a single GPU.
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
This model can also be used through ChatRTX, AnythingLLM, and the AI Toolkit for VS Code.
Install WSL2. For additional instructions, refer to the documentation.
Once installed, open the NVIDIA-Workbench WSL2 distro using the following command in the Windows terminal:
wsl -d NVIDIA-Workbench -u root
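Before continuing, it is worth confirming that the GPU is visible inside the distro. A quick check (assuming the NVIDIA driver is already installed on the Windows host, which WSL2 passes through) is:

nvidia-smi

If the command prints a table listing your GPU, the distro is ready for the NIM container.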
Log in to the NVIDIA container registry (nvcr.io) with your NGC API key, using $oauthtoken as the username:

$ podman login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
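To verify that the credentials were stored, podman can echo back the logged-in user for a registry (the expected output here is $oauthtoken):

$ podman login --get-login nvcr.io
$oauthtoken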
Pull and run the NVIDIA NIM with the command below. This will download the optimized model for your infrastructure.
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod -R a+w "$LOCAL_NIM_CACHE"
podman run -it --rm \
  --device nvidia.com/gpu=all \
  --shm-size=16GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -e NIM_RELAX_MEM_CONSTRAINTS=1 \
  -u $(id -u) \
  -p 8000:8000 \
  nvcr.io/nim/nv-mistralai/mistral-nemo-12b-instruct:1.8.0-rtx
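The first launch downloads the model, which can take several minutes. One way to wait for startup to finish is to poll the NIM's standard readiness endpoint, which returns HTTP 200 once the service is up; a minimal sketch:

# Poll the readiness endpoint until the NIM is up (Ctrl+C to abort).
until curl -sf http://0.0.0.0:8000/v1/health/ready > /dev/null; do
  echo "Waiting for NIM to become ready..."
  sleep 10
done
echo "NIM is ready to serve requests."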
You can now make a local API call using this curl command:
curl -X 'POST' \
  'http://0.0.0.0:8000/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "nv-mistralai/mistral-nemo-12b-instruct",
    "messages": [{"role": "user", "content": "Write a limerick about the wonders of GPU computing."}],
    "max_tokens": 64
  }'
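The "model" field must match the name the NIM registers at startup. If the request is rejected with a model-not-found error, the exact served name can be listed through the OpenAI-compatible models endpoint:

curl -s http://0.0.0.0:8000/v1/models

Since the API is OpenAI-compatible, the same chat completions request also accepts "stream": true for token-by-token streaming.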
For more details on getting started with this NIM, visit the NVIDIA NIM Docs.