
Nemotron Nano 12B v2 VL enables multi-image and video understanding, along with visual Q&A and summarization capabilities.
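
A minimal visual Q&A sketch for a model like this, assuming it is served behind an OpenAI-compatible chat-completions endpoint; the base URL, model identifier, environment variable, and image filename below are illustrative assumptions, not confirmed values.

```python
# Minimal visual Q&A sketch against an assumed OpenAI-compatible endpoint.
# The base_url, model id, API-key variable, and image path are placeholders.
import base64
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var
)

# Encode a local image as a data URL so it can be sent inline with the prompt.
with open("frame.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="nvidia/nemotron-nano-12b-v2-vl",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize what is happening in this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```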

High‑efficiency LLM with hybrid Transformer‑Mamba design, excelling in reasoning and agentic tasks.

Mixture of Experts (MoE) reasoning LLM (text-only) designed to fit on a single 80 GB GPU.

An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.

An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.

Powerful, multimodal language model designed for enterprise applications, including software development, data analysis, and reasoning.

A general-purpose multimodal, multilingual MoE model with 128 experts and 17B active parameters.

A multimodal, multilingual MoE model with 16 experts and 17B active parameters.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.

NVIDIA DGX Cloud-trained multilingual LLM designed for mission-critical use cases in regulated industries, including financial services, government, and heavy industry.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.

Verify compatibility of OpenUSD assets with instant RTX rendering and rule-based validation.

Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.

A generative model of protein backbones for protein binder design.