Expressive and engaging text-to-speech, generated from a short audio sample.
Translation model supporting 12 languages, with few-shot example prompting.
Enable smooth global interactions in 36 languages.
GPU-accelerated model that scores the probability that a given passage contains the information needed to answer a question.
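As a hedged illustration of how a reranker like this is typically called, the sketch below POSTs a query and candidate passages to a NIM-style reranking endpoint and prints the returned relevance scores; the endpoint URL, model identifier, and payload shape are assumptions drawn from NVIDIA's published retrieval API conventions, so verify them against the model's API reference.

```python
import os
import requests

# Assumed NIM-style reranking endpoint and model id; check the model's API
# reference for the exact URL and payload contract before relying on them.
INVOKE_URL = "https://ai.api.nvidia.com/v1/retrieval/nvidia/reranking"
payload = {
    "model": "nv-rerank-qa-mistral-4b:1",  # assumed model identifier
    "query": {"text": "What causes auroras?"},
    "passages": [
        {"text": "Auroras occur when charged solar particles hit the atmosphere."},
        {"text": "The stock market closed higher on Friday."},
    ],
}
headers = {
    "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
    "Accept": "application/json",
}

response = requests.post(INVOKE_URL, headers=headers, json=payload, timeout=30)
response.raise_for_status()
# Each ranking pairs a passage index with a relevance score; higher means the
# passage is more likely to contain the answer.
for ranking in response.json()["rankings"]:
    print(ranking["index"], ranking["logit"])
```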
Generates physics-aware video world states for physical AI development using text prompts and multiple spatial control inputs derived from real-world data or simulation.
Multimodal question-answering retrieval that represents user queries as text and documents as images.
Multimodal vision-language model that understands text and images and generates informative responses.
State-of-the-art open model for reasoning, code, math, and tool calling, suitable for edge agents.
Expressive and engaging text-to-speech, generated from a short audio sample.
High accuracy and optimized performance for transcription in 25 languages.
Superior inference efficiency with the highest accuracy for scientific and complex math reasoning, coding, tool calling, and instruction following.
Generalist model that generates future world states as videos from text and image prompts, creating synthetic training data for robots and autonomous vehicles.
Generates future frames of a physics-aware world state from just an image or short video prompt for physical AI development.
End-to-end autonomous driving stack integrating perception, prediction, and planning with sparse scene representations for efficiency and safety.
High-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.
Model with leading reasoning and agentic AI accuracy for PC and edge deployments.
Natural and expressive voices in multiple languages for voice agents and brand ambassadors.
The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.
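A minimal sketch of code retrieval with this kind of embedding model follows, assuming an OpenAI-compatible embeddings endpoint; the base URL, the nvidia/nv-embedcode-7b-v1 identifier, and the input_type field are assumptions to verify against the model card.

```python
import os
import numpy as np
from openai import OpenAI

# Assumed OpenAI-compatible endpoint and model id; verify on the model card.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

def embed(texts, input_type):
    # input_type ("query" vs. "passage") is an assumed asymmetric-embedding flag.
    resp = client.embeddings.create(
        model="nvidia/nv-embedcode-7b-v1",
        input=texts,
        extra_body={"input_type": input_type},
    )
    return np.array([d.embedding for d in resp.data])

snippets = [
    "def binary_search(arr, target): ...",
    "def parse_csv(path): ...",
]
query_vec = embed(["locate a value in a sorted array"], "query")[0]
doc_vecs = embed(snippets, "passage")

# Rank snippets by cosine similarity to the query and return the best match.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(snippets[int(scores.argmax())])
```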
Model for object detection, fine-tuned to detect charts, tables, and titles in documents.
Cutting-edge vision-language model excelling at retrieving text and metadata from images.
Robust Speech Recognition via Large-Scale Weak Supervision.
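Since the open Whisper weights are published on Hugging Face, a minimal local transcription sketch looks like the following; the openai/whisper-large-v3 checkpoint is the public release, while the file name and chunking value are illustrative assumptions.

```python
# Minimal local transcription sketch using the public Whisper checkpoint
# via the Hugging Face transformers pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    chunk_length_s=30,  # assumed chunk size for audio longer than 30 s
)
result = asr("meeting.wav")  # hypothetical local audio file
print(result["text"])
```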
Multilingual model supporting speech-to-text recognition and translation.
Topic control model to keep conversations focused on approved topics, avoiding inappropriate content.
Industry-leading jailbreak classification model for protection against adversarial attempts.
Leading content safety model for enhancing the safety and moderation capabilities of LLMs.
Multilingual LLM trained on NVIDIA DGX Cloud, designed for mission-critical use cases in regulated industries including financial services, government, and heavy industry.
Multimodal vision-language model that understands text, images, and video and generates informative responses.
Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.
Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.
Model for object detection, fine-tuned to detect charts, tables, and titles in documents.
Converts streamed audio to facial blendshapes for real-time lip syncing and facial performances.
Automatic speech recognition model that transcribes speech into lowercase English with record-setting accuracy and performance.
FourCastNet predicts the global atmospheric dynamics of various weather and climate variables.
A bilingual Hindi-English SLM for on-device inference, tailored specifically for Hindi.
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses.
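As a hedged sketch, the model can be called through NVIDIA's OpenAI-compatible API; the base URL follows the published integrate.api.nvidia.com convention and the model id matches the model's name, but verify both on the model page before use.

```python
import os
from openai import OpenAI

# Assumed OpenAI-compatible NIM endpoint; verify the URL and model id.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)
completion = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    messages=[{"role": "user", "content": "Explain RLHF in two sentences."}],
    temperature=0.5,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```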
Enhance speech by correcting common audio degradations to create studio-quality speech output.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Leaderboard-topping reward model supporting RLHF for better alignment with human preferences.
Unique language model that delivers an unmatched balance of accuracy and efficiency.
Generates consistent characters across a series of images without requiring additional training.
Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Grounding DINO is an open-vocabulary, zero-shot object detection model.
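Because detection classes are supplied as free-form text, a minimal sketch with the Hugging Face transformers integration looks like this; the grounding-dino-tiny checkpoint, image file, and thresholds are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"  # assumed public checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open("street.jpg")  # hypothetical local image
# Open-vocabulary: classes are lowercase free-form phrases, period-separated.
text = "a traffic light. a bicycle."

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Thresholds are illustrative; tune them per use case.
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
print(results[0]["labels"], results[0]["boxes"])
```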
Natural, high-fidelity English voices for personalizing text-to-speech services and voiceovers.
Enable smooth global interactions in 36 languages.
Expressive and engaging English voices for Q&A assistants, brand ambassadors, and service robots.
Record-setting accuracy and performance for English transcription.
State-of-the-art accuracy and speed for English transcriptions.
Estimate the gaze angles of a person in a video and redirect their gaze to be frontal.
Verify compatibility of OpenUSD assets with instant RTX rendering and rule-based validation.
Multilingual text reranking model.
English text embedding model for question-answering retrieval.
Multilingual text question-answering retrieval, transforming textual information into dense vector representations.
Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.
Generates high-quality numerical embeddings from text inputs.
Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.
EfficientDet-based object detection network to detect 100 specific retail objects from an input video.
GPU-accelerated generation of text embeddings used for question-answering retrieval.
GPU-accelerated model that scores the probability that a given passage contains the information needed to answer a question.
This LLM follows instructions, completes requests, and generates creative text.
Run Google's DeepVariant optimized for GPUs. Switch models for high accuracy across all major sequencers.