
A context‑aware safety model that applies reasoning to enforce domain‑specific policies.

Vision-language model that excels at understanding the physical world using structured reasoning over videos or images.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more.

Translation model supporting 12 languages, with few-shot example prompting capability.

State-of-the-art open code model with deep reasoning, 256k context, and unmatched efficiency.

Open reasoning model with a 256K context window, native INT4 quantization, and enhanced tool use.

Distill and deploy domain-specific AI models from unstructured financial data to generate market signals efficiently—scaling your workflow with the NVIDIA Data Flywheel Blueprint for high-performance, cost-efficient experimentation.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Leading multilingual content safety model for enhancing the safety and moderation capabilities of LLMs.

DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Japanese-specialized large language model that helps enterprises read and understand complex business documents.

State-of-the-art model for Polish language processing tasks such as text generation, Q&A, and chatbots.

80B-parameter AI model with hybrid reasoning, an MoE architecture, and support for 119 languages.

DeepSeek V3.1 Instruct is a hybrid AI model with fast reasoning, 128K context, and strong tool use.

Stable Diffusion 3.5 is a popular text-to-image generation model.

FLUX.1 Kontext is a multimodal model that enables in-context image generation and editing.

Reasoning vision language model (VLM) for physical AI and robotics.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

High-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.

Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.

English text embedding model for question-answering retrieval.

Advanced MoE model excelling at reasoning, multilingual tasks, and instruction following.

High-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Lightweight reasoning model for applications in latency-bound, memory- and compute-constrained environments.

State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.

An MoE LLM that follows instructions, completes requests, and generates creative text.

An MoE LLM that follows instructions, completes requests, and generates creative text.

Improve safety, security, and privacy of AI systems at build, deploy, and run stages.

A general-purpose multimodal, multilingual 128-expert MoE model with 17B parameters.

Build a custom enterprise research assistant powered by state-of-the-art models that process and synthesize multimodal data, enabling reasoning, planning, and refinement to generate comprehensive reports.

A multimodal, multilingual 16-expert MoE model with 17B parameters.

An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.

An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.

Powerful, multimodal language model designed for enterprise applications, including software development, data analysis, and reasoning.

High-performance reasoning model optimized for efficiency and edge deployment.

Small language model fine-tuned for improved reasoning, coding, and instruction following.

Power fast, accurate semantic search across multimodal enterprise data with NVIDIA’s RAG Blueprint—built on NeMo Retriever and Nemotron models—to connect your agents to trusted, authoritative sources of knowledge.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.

Industry-leading jailbreak classification model for protection against adversarial attempts.

Multi-modal vision-language model that understands text and image inputs and creates informative responses.

Build a data flywheel, with NVIDIA NeMo microservices, that continuously optimizes AI agents for latency and cost — while maintaining accuracy targets.

Multi-modal model that classifies safety for input prompts as well as output responses.

State-of-the-art open model for reasoning, code, math, and tool calling, suitable for edge agents.

Latency-optimized language model excelling in code, math, general knowledge, and instruction-following.

Leading model for reasoning and agentic AI accuracy on PC and edge.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.

Automate and optimize the configuration of radio access network (RAN) parameters using agentic AI and a large language model (LLM)-driven framework.

State-of-the-art, multilingual model tailored to all 24 official European Union languages.

FLUX.1 is a state-of-the-art suite of image generation models.

FLUX.1-schnell is a distilled image generation model, producing high-quality images at high speed.

Built for agentic workflows, this model excels in coding, instruction following, and function calling.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.

Route LLM requests to the best model for the task at hand.

A generative model of protein backbones for protein binder design.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Cutting-edge text generation model for text understanding, transformation, and code generation.

Cutting-edge text generation model for text understanding, transformation, and code generation.

A lightweight, advanced multilingual SLM for edge computing and resource-constrained applications.

Advanced small language generative AI model for edge applications.

Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Long-context, cutting-edge lightweight open language model excelling in high-quality reasoning.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Powerful mid-size code model with a 32K context length, excelling in coding in multiple languages.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

State-of-the-art open model trained on open datasets, excelling in reasoning, math, and science.

Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Model for writing and interacting with code across a wide range of programming languages and tasks.

Powers complex conversations with superior contextual understanding, reasoning, and text generation.

Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.

Sovereign AI model trained on Japanese language that understands regional nuances.

Sovereign AI model trained on Japanese language that understands regional nuances.

Sovereign AI model trained on Japanese language that understands regional nuances.

Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

Efficient multimodal model excelling at multilingual tasks, image understanding, and fast responses.

Transform PDFs into AI podcasts for engaging on-the-go audio content.

Multilingual model supporting speech-to-text recognition and translation.

Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.

Grounding DINO is an open-vocabulary zero-shot object detection model.

Leading content safety model for enhancing the safety and moderation capabilities of LLMs.

Topic control model to keep conversations focused on approved topics, avoiding inappropriate content.

Advanced AI model that detects faces and identifies deepfake images.

Robust image classification model for detecting and managing AI-generated content.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Multi-modal vision-language model that understands text, image, and video inputs and creates informative responses.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.

Detects jailbreaking, bias, violence, profanity, sexual content, and unethical behavior.

Guardrail model to ensure that responses from LLMs are appropriate and safe.

LLM to represent and serve the linguistic and cultural diversity of Southeast Asia.

Advanced programming model for code completion, summarization, and generation.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.

Leaderboard-topping reward model supporting RLHF for better alignment with human preferences.

Advanced text-to-image model for generating high-quality images.

State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.