Sorted by most recent.
meta / esm2-650m

Generates embeddings of proteins from their amino acid sequences.

microsoft / phi-3.5-vision-instruct

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

nvidia / nv-dinov2

NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.

microsoft / florence-2

Vision foundation model capable of performing diverse computer vision and vision language tasks.

nvidia / nv-embedqa-e5-v5

GPU-accelerated generation of text embeddings used for question-answering retrieval.

nvidia / nv-embedqa-mistral-7b-v2

GPU-accelerated generation of text embeddings used for question-answering retrieval.

nvidia / nvclip

NV-CLIP is a multimodal embeddings model for image and text.

nvidia / nv-embed-v1

Generates high-quality numerical embeddings from text inputs.

baai / bge-m3

Embedding model for text retrieval tasks, excelling in dense, multi-vector, and sparse retrieval.

microsoft / phi-3-vision-128k-instruct

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

snowflake / arctic-embed-l

GPU-accelerated generation of text embeddings.

nvidia / embed-qa-4

GPU-accelerated generation of text embeddings used for question-answering retrieval.

microsoft / kosmos-2

Groundbreaking multimodal model designed to understand and reason about visual elements in images.

google / deplot

One-shot visual language understanding model that translates images of plots into tables.

adept / fuyu-8b

Multi-modal model for a wide range of tasks, including image understanding and language generation.
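Several of the models above (nv-embedqa-e5-v5, nv-embedqa-mistral-7b-v2, embed-qa-4, arctic-embed-l, bge-m3) generate text embeddings for question-answering retrieval. A minimal sketch of assembling a request for such a model is shown below, assuming an OpenAI-compatible `/v1/embeddings` endpoint; the base URL, the `input_type` field, and the exact model identifier format are assumptions for illustration, not taken from this listing.

```python
import json

# Assumed base URL for the hosted endpoint; verify against the model's page.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_embedding_request(model: str, texts: list[str]) -> dict:
    """Assemble a JSON body for an embedding request.

    Retrieval embedding models commonly distinguish "query" inputs
    (the question) from "passage" inputs (the documents being indexed);
    the "input_type" field here encodes that assumption.
    """
    return {
        "model": model,          # e.g. "nvidia/nv-embedqa-e5-v5" (assumed format)
        "input": texts,          # one or more strings to embed
        "input_type": "query",   # use "passage" when indexing documents
    }

body = build_embedding_request(
    "nvidia/nv-embedqa-e5-v5",
    ["What is a visual foundation model?"],
)
print(json.dumps(body, indent=2))
```

The payload would then be POSTed to `BASE_URL + "/embeddings"` with an API key in the `Authorization` header; the response's `data[0].embedding` field would hold the vector.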