A general-purpose multimodal, multilingual mixture-of-experts (MoE) model with 128 experts and 17B parameters.
A multimodal, multilingual mixture-of-experts (MoE) model with 16 experts and 17B parameters.
The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.
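Embedding-based code retrieval generally works by embedding the query and each candidate snippet into dense vectors and ranking candidates by cosine similarity. The sketch below illustrates that flow; the `embed` function is a hypothetical stand-in (a toy character-frequency vector) for a real call to an embedding model such as NV-EmbedCode, used only so the example is self-contained and runnable.

```python
import math

def embed(text):
    # Hypothetical stand-in for an embedding-model API call: a real system
    # would send `text` to the model and receive a dense vector back.
    # Here: a toy 26-dimensional character-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, snippets, k=1):
    # Rank code snippets by similarity of their embeddings to the query's.
    q = embed(query)
    return sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]
```

In a production setup the same `retrieve` shape applies unchanged; only `embed` is swapped for the hosted model endpoint, and similarity search is typically delegated to a vector index rather than a linear scan.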
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.
Connect AI applications to multimodal enterprise data with a scalable retrieval-augmented generation (RAG) pipeline built on high-performance, industry-leading NIM microservices for faster PDF data extraction and more accurate information retrieval.
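The core flow of such a RAG pipeline is: extract text from documents, split it into chunks, retrieve the chunks most relevant to a query, and assemble an augmented prompt for the generation model. The sketch below shows that skeleton under simplifying assumptions: the word-overlap `score` is a toy stand-in for embedding similarity computed by a retrieval microservice, and all function names are illustrative, not part of any NIM API.

```python
def chunk(document, size=200):
    # Split extracted document text into fixed-size chunks for indexing.
    return [document[i:i + size] for i in range(0, len(document), size)]

def score(query, passage):
    # Toy relevance score: shared-word count. A real pipeline would compare
    # dense embedding vectors produced by a retrieval model instead.
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p)

def build_prompt(query, chunks, k=2):
    # Retrieve the top-k chunks and assemble the augmented prompt that is
    # sent to the generation model.
    top = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
    context = "\n".join(top)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Keeping retrieval and generation as separate stages is what lets each stage scale independently, e.g. swapping in a faster PDF extractor or a more accurate embedding model without touching the prompt-assembly logic.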
Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Vision foundation model capable of performing diverse computer vision and vision-language tasks.
English text embedding model for question-answering retrieval.
Multilingual text question-answering retrieval, transforming textual information into dense vector representations.
Generates high-quality numerical embeddings from text inputs.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Optimized community model for text embedding.
GPU-accelerated generation of text embeddings used for question-answering retrieval.