mistralai / mistral-7b-instruct-v0.3

Run Anywhere

This LLM follows instructions, completes requests, and generates creative text.

Deploying your application in production? Get started with a 90-day evaluation of NVIDIA AI Enterprise.

Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.

Generate API Key

An NVIDIA API key is required to pull the NIM container and download the model. Generate one from the NVIDIA API catalog (build.nvidia.com) and keep it handy for the steps below.

Pull and Run the NIM

Log in to the NVIDIA container registry, nvcr.io. The username is the literal string $oauthtoken; the password is your API key:

$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>

Pull and run the NVIDIA NIM with the commands below. On first run, this downloads the model optimized for your infrastructure into the mounted cache directory, so subsequent starts can skip the download.

export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
docker run -it --rm \
    --gpus all \
    --shm-size=16GB \
    -e NGC_API_KEY \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    -u $(id -u) \
    -p 8000:8000 \
    nvcr.io/nim/mistralai/mistral-7b-instruct-v0.3:latest
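The first start can take several minutes while the model downloads. Before sending requests, you can poll the container's health endpoint to confirm the server is up; a minimal sketch, assuming the NIM exposes the standard /v1/health/ready route on the mapped port:

# Poll until the server reports ready (returns HTTP 200 once the
# model is loaded and serving; route assumed from NIM conventions).
until curl -sf http://0.0.0.0:8000/v1/health/ready > /dev/null; do
    echo "Waiting for NIM to become ready..."
    sleep 10
done
echo "NIM is ready to serve requests."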

Test the NIM

You can now make a local API call using this curl command:

curl -X 'POST' \
    'http://0.0.0.0:8000/v1/chat/completions' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "mistralai/mistral-7b-instruct-v0.3",
        "messages": [{"role":"user", "content":"Write a limerick about the wonders of GPU computing."}],
        "max_tokens": 64
    }'
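The endpoint is OpenAI-compatible, so the usual chat-completions parameters apply. For example, token-by-token streaming can be requested with the stream flag; a minimal sketch, assuming the standard OpenAI-style server-sent-events behavior:

# Same request with streaming enabled; -N disables curl's output
# buffering, and tokens arrive as "data: {...}" server-sent events
# instead of a single JSON body.
curl -N -X 'POST' \
    'http://0.0.0.0:8000/v1/chat/completions' \
    -H 'accept: text/event-stream' \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "mistralai/mistral-7b-instruct-v0.3",
        "messages": [{"role":"user", "content":"Write a limerick about the wonders of GPU computing."}],
        "max_tokens": 64,
        "stream": true
    }'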

For more details on getting started with this NIM, visit the NVIDIA NIM Docs.