
A context-aware safety model that applies reasoning to enforce domain-specific policies.

Vision language model that excels in understanding the physical world using structured reasoning on videos or images.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more.

Translation model supporting 12 languages, with few-shot example prompting capability.

StreamPETR offers efficient 3D object detection for autonomous driving by propagating sparse object queries temporally.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Nemotron Nano 12B v2 VL enables multi-image and video understanding, along with visual Q&A and summarization capabilities.

Leading multilingual content safety model for enhancing the safety and moderation capabilities of LLMs.

Record-setting accuracy and performance for Taiwanese Mandarin and English transcriptions.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Record-setting accuracy and performance for Mandarin-English transcriptions.

Accurate and optimized Spanish-English transcriptions with punctuation and word timestamps.

Accurate and optimized Vietnamese-English transcriptions with punctuation and word timestamps.

High-efficiency LLM with hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks.

Reasoning vision language model (VLM) for physical AI and robotics.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Accurate and optimized English transcriptions with punctuation and word timestamps.

High efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Generates high-quality numerical embeddings from text inputs.

Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.

English text embedding model for question-answering retrieval.

Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.

High efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

End-to-end autonomous driving stack integrating perception, prediction, and planning with sparse scene representations for efficiency and safety.

Superior inference efficiency with highest accuracy for scientific and complex math reasoning, coding, tool calling, and instruction following.

Expressive and engaging text-to-speech, generated from a short audio sample.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Industry-leading jailbreak classification model for protection against adversarial attempts.

Multi-modal vision-language model that understands text and images and generates informative responses.

State-of-the-art open model for reasoning, code, math, and tool calling, suitable for edge agents.

Generates physics-aware video world states for physical AI development using text prompts and multiple spatial control inputs derived from real-world data or simulation.

Leading reasoning and agentic AI accuracy model for PC and edge.

Record-setting accuracy and performance for English transcription.

Natural and expressive voices in multiple languages, for voice agents and brand ambassadors.

Enable smooth global interactions in 36 languages.

GPU-accelerated model optimized to score the probability that a given passage contains the information needed to answer a question.

Multimodal question-answer retrieval representing user queries as text and documents as images.

Converts streamed audio to facial blendshapes for real-time lip syncing and facial performances.

Removes unwanted noise from audio, improving speech intelligibility.

State-of-the-art accuracy and speed for English transcriptions.

Enhances speech by correcting common audio degradations to create studio-quality speech output.

Expressive and engaging text-to-speech, generated from a short audio sample.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.

Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.

NVIDIA DGX Cloud-trained multilingual LLM designed for mission-critical use cases in regulated industries, including financial services, government, and heavy industry.

A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language.

This LLM follows instructions, completes requests, and generates creative text.

High accuracy and optimized performance for transcription in 25 languages.

Enable smooth global interactions in 36 languages.

Robust Speech Recognition via Large-Scale Weak Supervision.

Multilingual model supporting speech-to-text recognition and translation.

Grounding DINO is an open-vocabulary, zero-shot object detection model.

Leading content safety model for enhancing the safety and moderation capabilities of LLMs.

Topic control model to keep conversations focused on approved topics, avoiding inappropriate content.

FourCastNet predicts global atmospheric dynamics of various weather and climate variables.

Estimates the gaze angles of a person in a video and redirects them to be frontal.

Generates future frames of a physics-aware world state based on simply an image or short video prompt for physical AI development.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Multi-modal vision-language model that understands text, images, and video and generates informative responses.

GPU-accelerated model optimized to score the probability that a given passage contains the information needed to answer a question.

Verifies compatibility of OpenUSD assets using instant RTX rendering and rule-based validation.

Leaderboard-topping reward model supporting RLHF for better alignment with human preferences.

Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.

EfficientDet-based object detection network to detect 100 specific retail objects from an input video.

Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling.

State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.