A general-purpose multimodal, multilingual mixture-of-experts (MoE) model with 128 experts and 17B active parameters.
A multimodal, multilingual mixture-of-experts (MoE) model with 16 experts and 17B active parameters.
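The two entries above quote the same 17B figure with very different expert counts because, in an MoE model, "active" parameters count only the experts the router actually runs per token, not the full weight set. A minimal sketch of that accounting, with entirely hypothetical per-expert and shared-weight sizes chosen only to illustrate the arithmetic:

```python
# Illustrative MoE parameter accounting. All sizes below are
# hypothetical; only the relationship between total and active
# parameters is the point.

def moe_params(shared_b: float, per_expert_b: float,
               num_experts: int, top_k: int) -> tuple[float, float]:
    """Return (total, active) parameter counts in billions."""
    total = shared_b + num_experts * per_expert_b   # everything on disk
    active = shared_b + top_k * per_expert_b        # what one token touches
    return total, active

# Hypothetical split: 11B shared weights, 6B per expert, router
# activates 1 expert per token.
for experts in (16, 128):
    total, active = moe_params(shared_b=11, per_expert_b=6,
                               num_experts=experts, top_k=1)
    print(f"{experts} experts: ~{total:.0f}B total, ~{active:.0f}B active")
```

Under these made-up sizes, both the 16-expert and 128-expert configurations activate ~17B parameters per token while their total footprints differ by hundreds of billions, which is why expert count and active parameter count move independently.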
Connect AI applications to multimodal enterprise data with a scalable retrieval-augmented generation (RAG) pipeline built on high-performance, industry-leading NIM microservices for faster PDF data extraction and more accurate information retrieval.
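NIM microservices expose OpenAI-compatible HTTP endpoints, so the generation step of a RAG pipeline like the one described above can be driven with the standard OpenAI client. A minimal sketch of one round trip, in which the base URL, model id, and retrieved passages are all placeholders; a real pipeline would pull context from a vector store fed by the PDF-extraction service:

```python
# Minimal RAG generation call against an OpenAI-compatible NIM
# endpoint. base_url, model id, and passages are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1",  # hypothetical NIM host
                api_key="not-used-locally")

retrieved = [
    "Q3 revenue grew 12% year over year.",   # stand-ins for passages
    "Operating margin improved to 31%.",     # returned by retrieval
]

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",      # placeholder model id
    messages=[
        {"role": "system",
         "content": "Answer using only the provided context:\n"
                    + "\n".join(retrieved)},
        {"role": "user", "content": "How did revenue change in Q3?"},
    ],
)
print(response.choices[0].message.content)
```

Grounding the system prompt in the retrieved passages, rather than asking the model to answer from its weights, is what makes the retrieval step improve answer accuracy.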
A highly efficient mixture-of-experts model for RAG, summarization, entity extraction, and classification.
A cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
A cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
An advanced LLM based on a mixture-of-experts architecture that delivers compute-efficient content generation.
An MoE LLM that follows instructions, completes requests, and generates creative text.
An MoE LLM that follows instructions, completes requests, and generates creative text.
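All of the MoE entries above rely on the same mechanism for their compute efficiency: a learned router scores every expert for each token, and only the top-k experts actually run. A toy sketch of that routing step, with randomly initialized weights standing in for a trained gate and trained experts:

```python
# Minimal sketch of top-k MoE routing (illustrative only; real models
# route per token inside every MoE layer, with learned gate weights).
import numpy as np

rng = np.random.default_rng(0)
d_model, num_experts, top_k = 8, 16, 2

gate = rng.standard_normal((d_model, num_experts))              # router weights
experts = rng.standard_normal((num_experts, d_model, d_model))  # toy expert FFNs

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Run one token through only its top-k experts."""
    scores = x @ gate                       # router logits, one per expert
    chosen = np.argsort(scores)[-top_k:]    # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                # softmax over the chosen experts
    # Only top_k of num_experts weight matrices are touched; that
    # sparsity is where the compute savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (8,) -> output stays model-dimensional
```

Because the unchosen experts are never multiplied, per-token FLOPs scale with top_k rather than with the total expert count, which is the trade these descriptions are advertising.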