institute-of-science-tokyo/llama-3.1-swallow-70b-instruct-v0.1
RUN ANYWHERE
A sovereign AI model trained on Japanese-language data that understands regional nuances.
By running the commands below, you accept the NVIDIA AI Enterprise Terms of Use and the NVIDIA Community Models License.
Pull and run institute-of-science-tokyo/llama-3-1-swallow-70b-instruct-v01 using Docker (this will download the full model and run it in your local environment).
$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
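Note that $oauthtoken is the literal username expected by nvcr.io, not a shell variable to expand; the password is your NGC API key.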
Pull and run the NVIDIA NIM with the commands below. This will download a model optimized for your infrastructure.
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
docker run -it --rm \
  --gpus all \
  --shm-size=16GB \
  -e NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8000:8000 \
  nvcr.io/nim/tokyotech-llm/llama-3.1-swallow-70b-instruct-v0.1:latest
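Downloading and loading a 70B model can take a while. Before sending requests, you can poll the NIM readiness endpoint to confirm the server is up; a minimal sketch, assuming the /v1/health/ready route NIMs typically expose and the port mapping above:

# Returns HTTP 200 once the model is loaded and the server can accept requests
curl http://0.0.0.0:8000/v1/health/ready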
You can now make a local API call using this curl command (the sample prompt asks, in Japanese, for a heartwarming story about a swallow and a llama facing each other beneath fireworks in the Tokyo night sky):
curl -X 'POST' \
  'http://0.0.0.0:8000/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "tokyotech-llm/llama-3.1-swallow-70b-instruct-v0.1",
    "messages": [{"role": "user", "content": "東京の夜空に打ち上がっている花火の下、向かい合っている燕とラマの温かい物語を書いてください。"}],
    "max_tokens": 64
  }'
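Because the NIM serves an OpenAI-compatible API, you can also query the standard model-listing endpoint to confirm the exact model name the server expects in the "model" field; a minimal sketch, assuming the same port mapping:

# Lists the model identifiers served by this NIM
curl http://0.0.0.0:8000/v1/models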
For more details on getting started with this NIM, visit the NVIDIA NIM Docs.