Enable smooth global interactions in 36 languages.
An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments
An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments
Built for agentic workflows, this model excels in coding, instruction following, and function calling
Multimodal vision-language model that understands text and images and generates informative responses
State-of-the-art model for Polish language processing tasks such as text generation, Q&A, and chatbots.
Small language model fine-tuned for improved reasoning, coding, and instruction-following
State-of-the-art, multilingual model tailored to all 24 official European Union languages.
SOTA LLM pre-trained for instruction following and proficiency in the Indonesian language and its dialects.
Efficient multimodal model excelling at multilingual tasks, image understanding, and fast responses
Powerful, multimodal language model designed for enterprise applications, including software development, data analysis, and reasoning.
High accuracy and optimized performance for transcription in 25 languages
A general-purpose multimodal, multilingual 128-expert MoE model with 17B parameters.
A multimodal, multilingual 16-expert MoE model with 17B parameters.
Natural and expressive voices in multiple languages. For voice agents and brand ambassadors.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
A lightweight, multilingual, advanced SLM for edge computing and resource-constrained applications
Cutting-edge vision-language model excelling in retrieving text and metadata from images.
Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments
Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.
Latency-optimized language model excelling in code, math, general knowledge, and instruction-following.
Instruction-tuned LLM achieving SoTA performance on reasoning, math, and general knowledge
Multilingual LLM with an emphasis on European languages, supporting regulated use cases including financial services, government, and heavy industry
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
Multimodal vision-language model that understands text, images, and video and generates informative responses
Advanced LLM for code generation, reasoning, and code fixing across popular programming languages.
Powerful mid-size code model with a 32K context length, excelling in coding in multiple languages.
A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language.
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses.
Efficient hybrid state-space model designed for conversational and reasoning tasks.
Sovereign AI model trained on Japanese-language data that understands regional nuances.
Sovereign AI model trained on Japanese-language data that understands regional nuances.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Unique language model that delivers an unmatched balance of accuracy and efficiency.
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.
Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.
Sovereign AI model trained on Japanese-language data that understands regional nuances.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Advanced LLM based on a Mixture-of-Experts architecture to deliver compute-efficient content generation
Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Enable smooth global interactions in 36 languages.
Vision foundation model capable of performing diverse computer vision and vision language tasks.
Advanced small generative language model for edge applications
Verify compatibility of OpenUSD assets with instant RTX rendering and rule-based validation.
Supports Chinese and English for tasks including chatbots, content generation, coding, and translation.
Model for writing and interacting with code across a wide range of programming languages and tasks.
Supports Chinese and English chat, coding, math, instruction following, and quiz solving
Powers complex conversations with superior contextual understanding, reasoning, and text generation.
Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.
Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Cutting-edge text generation model for text understanding, transformation, and code generation.
Cutting-edge text generation model for text understanding, transformation, and code generation.
This LLM follows instructions, completes requests, and generates creative text.
Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.
LLM for improved language comprehension and chatbot-oriented capabilities in Traditional Chinese.
Visual ChangeNet compares two images to detect pixel-level changes and outputs a semantic change segmentation mask
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Long-context, cutting-edge, lightweight open language model excelling in high-quality reasoning.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
LLM to represent and serve the linguistic and cultural diversity of Southeast Asia
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG.
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
An MoE LLM that follows instructions, completes requests, and generates creative text.
Powers complex conversations with superior contextual understanding, reasoning, and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Language model built on a novel recurrent architecture for faster inference when generating long sequences.
Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion.
Generate images and stunning visuals with realistic aesthetics.
This LLM follows instructions, completes requests, and generates creative text.
An MoE LLM that follows instructions, completes requests, and generates creative text.