
StreamPETR offers efficient 3D object detection for autonomous driving by propagating sparse object queries temporally.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Nemotron Nano 12B v2 VL enables multi-image and video understanding, along with visual Q&A and summarization capabilities.

Leading multilingual content safety model for enhancing the safety and moderation capabilities of LLMs.

Record-setting accuracy and performance for Taiwanese Mandarin and English transcriptions.

DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, strict function calling.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Japanese-specialized large language model for enterprises to read and understand complex business documents.

Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long context AI.

State-of-the-art model for Polish language processing tasks such as text generation, Q&A, and chatbots.

80B parameter AI model with hybrid reasoning, MoE architecture, support for 119 languages.

Record-setting accuracy and performance for Mandarin-English transcriptions.

Accurate and optimized Spanish-English transcriptions with punctuation and word timestamps.

Accurate and optimized Vietnamese-English transcriptions with punctuation and word timestamps.

DeepSeek V3.1 Instruct is a hybrid AI model with fast reasoning, 128K context, and strong tool use.

High-efficiency LLM with hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks.

Reasoning vision language model (VLM) for physical AI and robotics.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Smaller Mixture of Experts (MoE) text-only LLM for efficient AI reasoning and math.

Accurate and optimized English transcriptions with punctuation and word timestamps.

High-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Multilingual 7B LLM, instruction-tuned on all 24 EU languages for stable, culturally aligned output.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Expressive and engaging text-to-speech, generated from a short audio sample.

Translation model supporting 12 languages, with few-shot example prompting capability.

Multi-modal model that classifies the safety of input prompts as well as output responses.

Enables smooth global interactions in 36 languages.

An edge-computing AI model that accepts text, audio, and image input; ideal for resource-constrained environments.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
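As an illustration of how such a passage-scoring model typically fits into a retrieval pipeline, the sketch below reranks candidate passages by mapping a raw relevance score to a probability with a sigmoid. The `toy_score` function is an invented stand-in (shared-word overlap), not this model's API; in practice the score would come from the model itself.

```python
import math

def rerank(question, passages, score_fn):
    """Sort passages by estimated probability of containing the answer.

    score_fn returns a raw relevance logit for (question, passage);
    a sigmoid maps it to a probability in [0, 1].
    """
    scored = [(p, 1.0 / (1.0 + math.exp(-score_fn(question, p))))
              for p in passages]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy stand-in scorer: counts shared lowercase words (NOT the real model).
def toy_score(question, passage):
    return float(len(set(question.lower().split())
                     & set(passage.lower().split())))

ranked = rerank(
    "Where is the Eiffel Tower?",
    ["The Eiffel Tower is in Paris.", "Bananas are yellow."],
    toy_score,
)
```

The top-ranked passage and its probability score can then be passed to a downstream LLM or shown directly to the user.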

Generates physics-aware video world states for physical AI development using text prompts and multiple spatial control inputs derived from real-world data or simulation.

Removes unwanted noise from audio, improving speech intelligibility.

Multimodal question-answer retrieval representing user queries as text and documents as images.

Updated version of DeepSeek-R1 with enhanced reasoning, coding, math, and reduced hallucination.

Multi-modal vision-language model that understands text and images and creates informative responses.

State-of-the-art open model for reasoning, code, math, and tool calling, suitable for edge agents.

Expressive and engaging text-to-speech, generated from a short audio sample.

Advanced reasoning MoE model excelling at reasoning, multilingual tasks, and instruction following.

FLUX.1-schnell is a distilled image generation model, producing high-quality images at fast speeds.

State-of-the-art, multilingual model tailored to all 24 official European Union languages.

SOTA LLM pre-trained for instruction following and proficiency in the Indonesian language and its dialects.

Efficient multimodal model excelling at multilingual tasks, image understanding, and fast responses.

High accuracy and optimized performance for transcription in 25 languages.

Superior inference efficiency with highest accuracy for scientific and complex math reasoning, coding, tool calling, and instruction following.

Runs computational fluid dynamics (CFD) simulations.

Generates future frames of a physics-aware world state from a single image or short video prompt for physical AI development.

End-to-end autonomous driving stack integrating perception, prediction, and planning with sparse scene representations for efficiency and safety.

High-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Model with leading reasoning and agentic AI accuracy for PC and edge.

Natural and expressive voices in multiple languages. For voice agents and brand ambassadors.

The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.

Distilled version of Llama 3.1 8B using reasoning data generated by DeepSeek R1 for enhanced performance.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Distilled version of Qwen 2.5 32B using reasoning data generated by DeepSeek R1 for enhanced performance.

Distilled version of Qwen 2.5 14B using reasoning data generated by DeepSeek R1 for enhanced performance.

Distilled version of Qwen 2.5 7B using reasoning data generated by DeepSeek R1 for enhanced performance.

Lightweight multilingual LLM powering AI applications in latency-bound, memory/compute-constrained environments.

Robust Speech Recognition via Large-Scale Weak Supervision.

Multilingual model supporting speech-to-text recognition and translation.

State-of-the-art, high-efficiency LLM excelling in reasoning, math, and coding.

Topic control model to keep conversations focused on approved topics, avoiding inappropriate content.

Industry-leading jailbreak classification model for protection from adversarial attempts.

Leading content safety model for enhancing the safety and moderation capabilities of LLMs.

NVIDIA DGX Cloud-trained multilingual LLM designed for mission-critical use cases in regulated industries, including financial services, government, and heavy industry.

Multi-modal vision-language model that understands text, images, and video and creates informative responses.

Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.

Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.

Context-aware chart extraction that can detect 18 classes of basic chart elements, excluding plot elements.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Converts streamed audio to facial blendshapes for real-time lip syncing and facial performances.

FourCastNet predicts global atmospheric dynamics for various weather and climate variables.

Advanced AI model that detects faces and identifies deepfake images.

A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language.

Sovereign AI model trained on the Japanese language that understands regional nuances.

Enhance speech by correcting common audio degradations to create studio quality speech output.

Leaderboard-topping reward model supporting RLHF for better alignment with human preferences.

Robust image classification model for detecting and managing AI-generated content.

Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.

Sovereign AI model trained on the Japanese language that understands regional nuances.

Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.

Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling.

State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.

Lightweight multilingual LLM powering AI applications in latency-bound, memory/compute-constrained environments.

Grounding DINO is an open-vocabulary, zero-shot object detection model.

Enables smooth global interactions in 36 languages.

Record-setting accuracy and performance for English transcription.

State-of-the-art accuracy and speed for English transcriptions.

Advanced generative small language model for edge applications.

Estimates a person's gaze angles in a video and redirects the gaze to be frontal.

Verifies compatibility of OpenUSD assets with instant RTX rendering and rule-based validation.

Multilingual text reranking model.

English text embedding model for question-answering retrieval.

Multilingual text question-answering retrieval, transforming textual information into dense vector representations.

Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.

Generates high-quality numerical embeddings from text inputs.
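To illustrate how such text embeddings are typically consumed downstream, the sketch below retrieves the closest document to a query by cosine similarity. The vectors here are invented placeholders; in practice each would be produced by the embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings (in practice, outputs of the embedding model).
query_vec = [0.1, 0.9, 0.2]
doc_vecs = {
    "doc_a": [0.1, 0.8, 0.3],
    "doc_b": [0.9, 0.1, 0.0],
}

# Retrieve the document whose embedding is most similar to the query.
best = max(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]))
```

At scale, the per-document loop is replaced by a vector index (e.g., an approximate nearest-neighbor search), but the similarity computation is the same.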

Visual ChangeNet detects pixel-level change maps between two images and outputs a semantic change segmentation mask.

EfficientDet-based object detection network to detect 100 specific retail objects from an input video.

LLM to represent and serve the linguistic and cultural diversity of Southeast Asia.

An MoE LLM that follows instructions, completes requests, and generates creative text.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.

This LLM follows instructions, completes requests, and generates creative text.

An MoE LLM that follows instructions, completes requests, and generates creative text.
