nvidia/nvclip
NV-CLIP is a multimodal embeddings model for image and text.
By running the commands below, you accept the NVIDIA AI Enterprise Terms of Use and the NVIDIA Community Models License.
Pull and run nvidia/nvclip using Docker (this will download the full model and run it in your local environment).
$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>
Optionally, choose the profile ID optimized for your hardware and export it with the command below:
export NIM_MANIFEST_PROFILE=<PROFILE_ID>
Pull and run the NVIDIA NIM with the command below.
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
docker run -it --rm \
  --gpus all \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8000:8000 \
  -e NIM_MANIFEST_PROFILE=$NIM_MANIFEST_PROFILE \
  nvcr.io/nim/nvidia/nvclip:1.0.0
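The container can take several minutes to download the model and start serving. As a convenience, the sketch below polls the service until it responds; it assumes the conventional NIM readiness route `/v1/health/ready` on port 8000, which is not stated above, so adjust if your deployment differs.

```python
import time
import urllib.error
import urllib.request


def wait_ready(url: str, timeout_s: float = 300.0, interval_s: float = 5.0) -> bool:
    """Poll `url` until it returns HTTP 200 or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval_s) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting up; retry after the interval
        time.sleep(interval_s)
    return False


if __name__ == "__main__":
    # /v1/health/ready is assumed here, not documented on this page
    if wait_ready("http://0.0.0.0:8000/v1/health/ready"):
        print("NIM is ready")
    else:
        print("timed out waiting for the NIM to become ready")
```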
You can now make a local API call using this curl command:
curl -X POST 'http://0.0.0.0:8000/v1/embeddings' \
  -H "Content-Type: application/json" \
  -d '{
    "input": [
      "The quick brown fox jumped over the lazy dog",
      "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAIAAACQd1PeAAAAEElEQVR4nGK6HcwNCAAA//8DTgE8HuxwEQAAAABJRU5ErkJggg=="
    ],
    "model": "nvidia/nvclip-vit-h-14",
    "encoding_format": "float"
  }'
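The same request can be made from Python with only the standard library. The sketch below mirrors the curl call above (same endpoint, model name, and request fields): it wraps raw PNG bytes in a base64 data URI, posts a text and an image input together, and compares the returned vectors with cosine similarity. The `image_data_uri`, `embed`, and `cosine` helpers are illustrative names, not part of any official client, and the response is assumed to follow the OpenAI-style `{"data": [{"embedding": [...]}, ...]}` shape.

```python
import base64
import json
import math
import urllib.request


def image_data_uri(png_bytes: bytes) -> str:
    """Wrap raw PNG bytes in the data-URI form shown in the curl example."""
    return "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")


def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def embed(inputs, url="http://0.0.0.0:8000/v1/embeddings"):
    """POST text and/or image inputs; return one embedding per input."""
    payload = json.dumps({
        "input": inputs,
        "model": "nvidia/nvclip-vit-h-14",
        "encoding_format": "float",
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return [item["embedding"] for item in body["data"]]


if __name__ == "__main__":
    with open("example.png", "rb") as f:  # any local PNG file
        uri = image_data_uri(f.read())
    text_vec, image_vec = embed(
        ["The quick brown fox jumped over the lazy dog", uri]
    )
    print("text/image cosine similarity:", cosine(text_vec, image_vec))
```

Because NV-CLIP places text and image embeddings in a shared space, the cosine similarity between a caption and an image is a direct measure of how well they match.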
For more details on getting started with this NIM, visit the NVIDIA NIM Docs.