
Vision-language model that excels at understanding the physical world through structured reasoning over videos and images.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Translation model covering 12 languages, with support for few-shot example prompts.

Open reasoning model with a 256K context window, native INT4 quantization, and enhanced tool use.

A state-of-the-art general-purpose MoE VLM ideal for chat, agentic, and instruction-based use cases.

A general-purpose VLM ideal for chat and instruction-based use cases.

Open Mixture of Experts LLM (230B, 10B active) for reasoning, coding, and tool-use/agent workflows.

Nemotron Nano 12B v2 VL enables multi-image and video understanding, along with visual Q&A and summarization capabilities.

Record-setting accuracy and performance for Taiwanese Mandarin and English transcriptions.

DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, strict function calling.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Record-setting accuracy and performance for Mandarin and English transcriptions.

Accurate and optimized Spanish-English transcriptions with punctuation and word timestamps.

Accurate and optimized Vietnamese-English transcriptions with punctuation and word timestamps.

Excels at agentic coding and browser use, supports a 256K context window, and delivers top results.

DeepSeek V3.1 Instruct is a hybrid AI model with fast reasoning, 128K context, and strong tool use.

Stable Diffusion 3.5 is a popular text-to-image generation model.

FLUX.1 Kontext is a multimodal model that enables in-context image generation and editing.

Reasoning vision language model (VLM) for physical AI and robotics.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Smaller Mixture of Experts (MoE) text-only LLM for efficient AI reasoning and math.

Mixture of Experts (MoE) reasoning LLM (text-only) designed to fit on a single 80 GB GPU.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Generates high-quality numerical embeddings from text inputs.

ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.

Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.

English text embedding model for question-answering retrieval.


Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.


An MoE LLM that follows instructions, completes requests, and generates creative text.

An MoE LLM that follows instructions, completes requests, and generates creative text.

A general-purpose multimodal, multilingual MoE model with 128 experts and 17B active parameters.

A multimodal, multilingual MoE model with 16 experts and 17B active parameters.

Supports Chinese and English for tasks including chat, content generation, coding, and translation.

Powerful, multimodal language model designed for enterprise applications, including software development, data analysis, and reasoning.

Expressive and engaging text-to-speech, generated from a short audio sample.

Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.


Generates physics-aware video world states for physical AI development using text prompts and multiple spatial control inputs derived from real-world data or simulation.

Record-setting accuracy and performance for English transcription.

Natural and expressive voices in multiple languages. For voice agents and brand ambassadors.

Enable smooth global interactions in 36 languages.

GPU-accelerated model optimized to provide a probability score that a given passage contains the information needed to answer a question.

Multimodal question-answering retrieval that represents user queries as text and documents as images.

Generates a multiple sequence alignment from a query sequence and a protein sequence database search.

Converts streamed audio to facial blendshapes for real-time lip syncing and facial performances.

State-of-the-art accuracy and speed for English transcriptions.

FLUX.1 is a state-of-the-art suite of image generation models.

FLUX.1-schnell is a distilled image generation model that produces high-quality images at high speed.

Advanced LLM for reasoning, math, general knowledge, and function calling.

Powers complex conversations with superior contextual understanding, reasoning and text generation.

Expressive and engaging text-to-speech, generated from a short audio sample.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.

A generative model of protein backbones for protein binder design.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.

Instruction-tuned LLM achieving state-of-the-art performance on reasoning, math, and general knowledge.

Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.

Cutting-edge text generation model for text understanding, transformation, and code generation.

Multilingual LLM trained on NVIDIA DGX Cloud, designed for mission-critical use cases in regulated industries including financial services, government, and heavy industry.

Cutting-edge text generation model for text understanding, transformation, and code generation.

Multilingual LLM with an emphasis on European languages, supporting regulated use cases including financial services, government, and heavy industry.

Advanced small generative language model for edge applications.

Supports Chinese and English chat, coding, math, instruction following, and quiz solving.

Long-context, cutting-edge, lightweight open language model excelling in high-quality reasoning.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Model for writing and interacting with code across a wide range of programming languages and tasks.

Powers complex conversations with superior contextual understanding, reasoning and text generation.

Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.


Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

High accuracy and optimized performance for transcription in 25 languages.

Enable smooth global interactions in 36 languages.

Robust Speech Recognition via Large-Scale Weak Supervision.

Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.

Grounding DINO is an open-vocabulary, zero-shot object detection model.

FourCastNet predicts global atmospheric dynamics for various weather and climate variables.

Predicts the 3D structure of a protein from its amino acid sequence.

Predicts the 3D structure of a protein from its amino acid sequence.


Generates future frames of a physics-aware world state from just an image or short video prompt, for physical AI development.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks.


Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.

Advanced programming model for code completion, summarization, and generation.

GPU-accelerated model optimized to provide a probability score that a given passage contains the information needed to answer a question.

Verify the compatibility of OpenUSD assets with instant RTX rendering and rule-based validation.

Advanced text-to-image model for generating high-quality images.

Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.

EfficientDet-based object detection network to detect 100 specific retail objects from an input video.