Connect AI applications to multimodal enterprise data with a scalable retrieval-augmented generation (RAG) pipeline built on highly performant, industry-leading NIM microservices for faster PDF data extraction and more accurate information retrieval.
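For a sense of how an application plugs into such a pipeline, the minimal sketch below calls the generation step of a hosted NIM microservice through its OpenAI-compatible API and grounds the answer in retrieved context. The endpoint URL, the `meta/llama-3.1-8b-instruct` model id, the `NVIDIA_API_KEY` environment variable, and the stubbed passages are illustrative assumptions, not the blueprint's exact implementation; in the full pipeline the context would come from the PDF-extraction and retrieval services.

```python
# Minimal sketch: query a hosted NIM LLM via its OpenAI-compatible API,
# passing retrieved passages as grounding context (assumed setup, not the
# blueprint's exact pipeline).
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # hosted NIM endpoint (assumed)
    api_key=os.environ.get("NVIDIA_API_KEY"),
)

# Hard-coded stand-in for the output of the retrieval step, to keep the
# example self-contained.
retrieved_passages = [
    "Q3 revenue grew 12% year over year, driven by data center sales.",
]

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example model id; substitute your NIM
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {
            "role": "user",
            "content": "Context:\n"
            + "\n".join(retrieved_passages)
            + "\n\nQuestion: How did revenue change in Q3?",
        },
    ],
    temperature=0.2,
)
print(completion.choices[0].message.content)
```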
Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI
Highly efficient mixture-of-experts (MoE) model for RAG, summarization, entity extraction, and classification
Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling
A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG.