
A context-aware safety model that applies reasoning to enforce domain-specific policies.

Vision language model that excels in understanding the physical world using structured reasoning on videos or images.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

State-of-the-art 685B reasoning LLM with sparse attention, long context, and integrated agentic tools.
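
Hosted LLMs like this one are typically consumed through an OpenAI-compatible chat API. A minimal sketch with the `openai` Python client; the base URL, API key, and model name below are placeholders, not confirmed values:

```python
# Minimal sketch, assuming the model is served behind an OpenAI-compatible
# chat endpoint; base URL, key, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")
resp = client.chat.completions.create(
    model="reasoning-llm-placeholder",
    messages=[{"role": "user", "content": "Summarize sparse attention in one sentence."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```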

Open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more.

Translation model supporting 12 languages, with few-shot example prompting capability.

StreamPETR offers efficient 3D object detection for autonomous driving by propagating sparse object queries temporally.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Nemotron Nano 12B v2 VL enables multi-image and video understanding, along with visual Q&A and summarization capabilities.

Leading multilingual content safety model for enhancing the safety and moderation capabilities of LLMs.

Record-setting accuracy and performance for Taiwanese Mandarin and English transcriptions.

DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Japanese-specialized large language model that helps enterprises read and understand complex business documents.

Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long context AI.

State-of-the-art model for Polish language processing tasks such as text generation, Q&A, and chatbots.

80B-parameter AI model with hybrid reasoning, an MoE architecture, and support for 119 languages.

Record-setting accuracy and performance for Mandarin and English transcriptions.

Accurate and optimized Spanish-English transcriptions with punctuation and word timestamps.

Accurate and optimized Vietnamese-English transcriptions with punctuation and word timestamps.

DeepSeek V3.1 Instruct is a hybrid AI model with fast reasoning, 128K context, and strong tool use.

High-efficiency LLM with hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks.

Reasoning vision language model (VLM) for physical AI and robotics.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Smaller Mixture of Experts (MoE) text-only LLM for efficient AI reasoning and math.

Accurate and optimized English transcriptions with punctuation and word timestamps.

High efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Multilingual 7B LLM, instruction-tuned on all 24 EU languages for stable, culturally aligned output.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

Powerful OCR model for fast, accurate real-world image text extraction, layout, and structure analysis.

Generates high-quality numerical embeddings from text inputs.
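
To make the retrieval use of such embeddings concrete, here is a minimal sketch; the `embed` function below is a toy stand-in (hashed bag-of-words) for whichever embedding model is actually deployed, and the passages are invented examples.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve(question: str, passages: list[str], top_k: int = 2) -> list[str]:
    """Rank candidate passages by cosine similarity to the question embedding."""
    q = embed(question)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:top_k]

print(retrieve("Who wrote the quarterly report?",
               ["The quarterly report was written by Kim.",
                "Lunch is served at noon."]))
```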

Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.

English text embedding model for question-answering retrieval.

Advanced reasoning MoE model excelling at reasoning, multilingual tasks, and instruction following.


Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.

High efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

End-to-end autonomous driving stack integrating perception, prediction, and planning with sparse scene representations for efficiency and safety.


An MoE LLM that follows instructions, completes requests, and generates creative text.

An MoE LLM that follows instructions, completes requests, and generates creative text.

State-of-the-art, high-efficiency LLM excelling in reasoning, math, and coding.

An edge-computing AI model that accepts text, audio, and image input; ideal for resource-constrained environments.

An edge-computing AI model that accepts text, audio, and image input; ideal for resource-constrained environments.

Superior inference efficiency with highest accuracy for scientific and complex math reasoning, coding, tool calling, and instruction following.

Expressive and engaging text-to-speech, generated from a short audio sample.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Distilled version of Llama 3.1 8B using reasoning data generated by DeepSeek R1 for enhanced performance.

Industry-leading jailbreak classification model for protection against adversarial attempts.

Multi-modal vision-language model that understands text and images and creates informative responses.

Multi-modal model to classify safety for input prompts as well as output responses.

State-of-the-art open model for reasoning, code, math, and tool calling; suitable for edge agents.

Generates physics-aware video world states for physical AI development using text prompts and multiple spatial control inputs derived from real-world data or simulation.

Leading reasoning and agentic AI accuracy for PC and edge.

Record-setting accuracy and performance for English transcription.

Natural and expressive voices in multiple languages, for voice agents and brand ambassadors.

Enables smooth global interactions in 36 languages.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
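
As an illustration of how such a probability score is typically consumed in a retrieval pipeline, here is a minimal reranking sketch; `relevance_logit` is a toy stand-in (word overlap) for the real model's joint question-passage score.

```python
import math

def relevance_logit(question: str, passage: str) -> float:
    """Toy stand-in for the model's question-passage score: word overlap."""
    q, p = set(question.lower().split()), set(passage.lower().split())
    return float(len(q & p)) - 2.0

def rerank(question: str, passages: list[str]) -> list[tuple[float, str]]:
    """Convert each logit to a probability and sort passages by it."""
    scored = [(1.0 / (1.0 + math.exp(-relevance_logit(question, p))), p)
              for p in passages]
    return sorted(scored, reverse=True)

for prob, passage in rerank("Where is the invoice stored?",
                            ["The invoice is stored in the archive.",
                             "The weather is sunny."]):
    print(f"{prob:.2f}  {passage}")
```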

Multimodal question-answer retrieval representing user queries as text and documents as images.

SOTA LLM pre-trained for instruction following and proficiency in the Indonesian language and its dialects.

State-of-the-art, multilingual model tailored to all 24 official European Union languages.

Converts streamed audio to facial blendshapes for real-time lip-syncing and facial performances.

Removes unwanted noise from audio, improving speech intelligibility.


State-of-the-art accuracy and speed for English transcriptions.

Enhances speech by correcting common audio degradations to create studio-quality speech output.

FLUX.1-schnell is a distilled image generation model that produces high-quality images at high speed.
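
A minimal generation sketch, assuming the open-weights checkpoint served through the diffusers `FluxPipeline`; the prompt and file names are illustrative, and a CUDA GPU is assumed.

```python
import torch
from diffusers import FluxPipeline

# Assumes the black-forest-labs/FLUX.1-schnell checkpoint and a CUDA GPU.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell",
                                    torch_dtype=torch.bfloat16).to("cuda")
image = pipe(
    "a photo of a red fox in fresh snow",
    guidance_scale=0.0,        # schnell is guidance-distilled, so guidance is off
    num_inference_steps=4,     # few-step sampling is the point of the distillation
    max_sequence_length=256,
).images[0]
image.save("fox.png")
```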

Expressive and engaging text-to-speech, generated from a short audio sample.

Updated version of DeepSeek-R1 with enhanced reasoning, coding, math, and reduced hallucination.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.

Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.

Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.

NVIDIA DGX Cloud-trained multilingual LLM designed for mission-critical use cases in regulated industries, including financial services, government, and heavy industry.

Distilled version of Qwen 2.5 14B using reasoning data generated by DeepSeek R1 for enhanced performance.

Distilled version of Qwen 2.5 32B using reasoning data generated by DeepSeek R1 for enhanced performance.

Advanced small generative language model for edge applications.

Distilled version of Qwen 2.5 7B using reasoning data generated by DeepSeek R1 for enhanced performance.

A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language.

This LLM follows instructions, completes requests, and generates creative text.

Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.

Cutting-edge MoE-based LLM designed to excel at a wide array of generative AI tasks.


Sovereign AI model trained on Japanese language that understands regional nuances.

Sovereign AI model trained on Japanese language that understands regional nuances.

Sovereign AI model trained on Japanese language that understands regional nuances.

Efficient multimodal model excelling at multilingual tasks, image understanding, and fast responses.

High accuracy and optimized performance for transcription in 25 languages.

Enables smooth global interactions in 36 languages.

Robust Speech Recognition via Large-Scale Weak Supervision.
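
That line is the title of the Whisper paper; a minimal transcription sketch with the open-source openai-whisper package (the model size and file name are examples):

```python
import whisper  # pip install openai-whisper

model = whisper.load_model("base")        # weights download on first use
result = model.transcribe("meeting.wav")  # language is auto-detected by default
print(result["text"])
```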

Multilingual model supporting speech-to-text recognition and translation.

Grounding DINO is an open-vocabulary, zero-shot object detection model.
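
A minimal zero-shot detection sketch using the Hugging Face transformers port of Grounding DINO; the checkpoint name, image path, text queries, and thresholds are illustrative.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open("street.jpg")
# Open-vocabulary queries are lower-case phrases separated by periods.
inputs = processor(images=image, text="a car. a traffic light.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Positional args: box threshold, text threshold, target sizes
# (keyword names vary across transformers versions).
results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, 0.4, 0.3, [image.size[::-1]],
)
print(results[0])
```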


Runs computational fluid dynamics (CFD) simulations.


Leading content safety model for enhancing the safety and moderation capabilities of LLMs.

Topic control model to keep conversations focused on approved topics, avoiding inappropriate content.

FourCastNet predicts global atmospheric dynamics of various weather and climate variables.

Advanced AI model that detects faces and identifies deepfake images.

Robust image classification model for detecting and managing AI-generated content.

Estimates the gaze angles of a person in a video and redirects them to appear frontal.


Generates future frames of a physics-aware world state from just an image or short video prompt, for physical AI development.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Multi-modal vision-language model that understands text, images, and video and creates informative responses.

Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.

LLM to represent and serve the linguistic and cultural diversity of Southeast Asia.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.

Context-aware chart extraction that can detect 18 classes of basic chart elements, excluding plot elements.

Verifies compatibility of OpenUSD assets with instant RTX rendering and rule-based validation.

Leaderboard-topping reward model supporting RLHF for better alignment with human preferences.

Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.

EfficientDet-based object detection network to detect 100 specific retail objects from an input video.

Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling.

State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.