Topic control model to keep conversations focused on approved topics, avoiding inappropriate content.
Industry-leading jailbreak classification model for protection against adversarial attempts.
Leading content safety model for enhancing the safety and moderation capabilities of LLMs.
NVIDIA DGX Cloud-trained multilingual LLM designed for mission-critical use cases in regulated industries, including financial services, government, and heavy industry.
Instruction-tuned LLM achieving state-of-the-art performance on reasoning, math, and general knowledge capabilities.
Multilingual LLM with an emphasis on European languages, supporting regulated use cases including financial services, government, and heavy industry.
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
This blueprint shows how generative AI and accelerated NIM microservices can design protein binders smarter and faster.
Transform PDFs into AI podcasts for engaging on-the-go audio content.
Automate voice AI agents with NVIDIA NIM microservices and Pipecat.
Automate research and generate blogs with AI agents using LlamaIndex and the Llama3.3-70B NIM LLM.
Generate detailed, structured reports on any topic using LangGraph and the Llama3.3 70B NIM.
Document your GitHub repositories with AI agents using CrewAI and the Llama3.3 70B NIM.
Multimodal vision-language model that understands text, images, and video and creates informative responses.
Generates physics-aware video world states from text and image prompts for physical AI development.
Generates future frames of a physics-aware world state from just an image or short video prompt for physical AI development.
Advanced LLM for code generation, reasoning, and fixing across popular programming languages.
Powerful mid-size code model with a 32K context length, excelling in coding in multiple languages.
SAM 2 is a segmentation model that enables fast, precise selection of any object in any video or image.
Powerful LLM designed for creative thinking and writing.
Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.
Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.
State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code.
Advanced LLM for reasoning, math, general knowledge, and function calling.
Context-aware chart extraction that can detect 18 classes of basic chart elements, excluding plot elements.
Model for object detection, fine-tuned to detect charts, tables, and titles in documents.
Model for table extraction that receives an image as input, runs OCR on the image, and returns the text within the image and its bounding boxes.
Converts streamed audio to facial blendshapes for real-time lip syncing and facial performances.
Create real-time digital twins by combining accelerated solvers, simulation AI, and virtual environments.
Generative downscaling model for generating high-resolution, regional-scale weather fields.
FourCastNet predicts global atmospheric dynamics for various weather and climate variables.
Advanced AI model that detects faces and identifies deepfake images.
Ingest massive volumes of live or archived videos and extract insights for summarization and interactive Q&A.
Enhance and modify high-quality compositions using real-time rendering and generative AI output without affecting a hero product asset.
Efficiently refine retrieval results over multiple sources and languages.
World-class multilingual and cross-lingual question-answering retrieval.
Create intelligent virtual assistants for customer service across every industry.
A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language.
Detects jailbreaking, bias, violence, profanity, sexual content, and unethical behavior.
Advanced small language model supporting RAG, summarization, classification, code, and agentic AI.
Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification.
Shutterstock Generative 3D service for 360 HDRi generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses.
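A minimal sketch of how a hosted model like this is typically queried, assuming an OpenAI-API-compatible endpoint at integrate.api.nvidia.com and the catalog model ID nvidia/llama-3.1-nemotron-70b-instruct; both values are assumptions, so check the model's page for the exact endpoint and ID.

```python
# Minimal sketch (not an official example): query a hosted NIM LLM through an
# OpenAI-compatible endpoint. Base URL, model ID, and the NVIDIA_API_KEY env var
# are assumptions; substitute the values shown on the model's catalog page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",   # assumed hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],             # assumed env var holding your key
)

completion = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",   # assumed catalog model ID
    messages=[{"role": "user", "content": "Explain what a helpfulness-tuned LLM is."}],
    temperature=0.2,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```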
Efficient hybrid state-space model designed for conversational and reasoning tasks.
Rapidly identify and mitigate container security vulnerabilities with generative AI.
Sovereign AI model trained on Japanese language that understands regional nuances.
Sovereign AI model trained on Japanese language that understands regional nuances.
Enhance speech by correcting common audio degradations to create studio quality speech output.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Leaderboard-topping reward model supporting RLHF for better alignment with human preferences.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Unique language model that delivers an unmatched balance of accuracy and efficiency.
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.
Predicts the 3D structure of a protein from its amino acid sequence.
Generates consistent characters across a series of images without requiring additional training.
Multimodal vision-language model that understands text, images, and video and creates informative responses.
Robust image classification model for detecting and managing AI-generated content.
Ingest and extract highly accurate insights contained in text, graphs, charts, and tables within massive volumes of PDF documents.
Predicts the 3D structure of a protein from its amino acid sequence.
Create intelligent, interactive avatars for customer service across industries.
Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.
Sovereign AI model trained on Japanese language that understands regional nuances.
This blueprint shows how generative AI and accelerated NIM microservices can design optimized small molecules smarter and faster.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Advanced LLM based on a Mixture of Experts architecture to deliver compute-efficient content generation.
Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.
NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Grounding DINO is an open-vocabulary, zero-shot object detection model.
An enterprise-grade text-to-image model trained on a compliant dataset to produce high-quality images.
Natural, high-fidelity English voices for personalizing text-to-speech services and voiceovers.
Enable smooth global interactions in 32 languages.
Expressive and engaging English voices for Q&A assistants, brand ambassadors, and service robots.
Record-setting accuracy and performance for English transcription.
State-of-the-art accuracy and speed for English transcriptions.
ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.
Vision foundation model capable of performing diverse computer vision and vision language tasks.
Specialized LLM for financial analysis, reporting, and data processing.
Guardrail model to ensure that responses from LLMs are appropriate and safe.
Advanced small language generative AI model for edge applications.
AI-powered search for OpenUSD data, 3D models, images, and assets using text or image-based inputs.
Shutterstock Generative 3D service for 3D asset generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.
Getty Images’ API service for 4K image generation. Trained on NVIDIA Edify using Getty Images' commercially safe creative libraries.
Estimate the gaze angles of a person in a video and redirect the gaze to make it frontal.
Create facial animations using a portrait photo and synchronize mouth movement with audio.
Verify compatibility of OpenUSD assets with instant RTX render and rule-based validation.
Supports Chinese and English languages to handle tasks including chatbot, content generation, coding, and translation.
Model for writing and interacting with code across a wide range of programming languages and tasks.
Supports Chinese and English chat, coding, math, instruction following, and quiz solving.
Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks.
Powers complex conversations with superior contextual understanding, reasoning and text generation.
Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.
Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU.
Multilingual text reranking model.
English text embedding model for question-answering retrieval.
Multilingual text question-answering retrieval, transforming textual information into dense vector representations.
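To illustrate the dense-vector retrieval this kind of embedding model enables, here is a minimal, self-contained sketch under the assumption that each query and passage has already been embedded; the vectors below are illustrative placeholders, not real model output. Passages are ranked by cosine similarity to the query embedding, and the closest passage is returned.

```python
# Minimal sketch of dense-vector retrieval: rank passages by cosine similarity
# to a query embedding. The vectors are illustrative placeholders, not output
# from the embedding model described above.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = np.array([0.12, 0.85, 0.03, 0.41])        # embedding of the question
passage_vecs = {
    "passage_a": np.array([0.10, 0.80, 0.05, 0.44]),  # semantically close passage
    "passage_b": np.array([0.90, 0.02, 0.60, 0.01]),  # unrelated passage
}

scores = {pid: cosine_similarity(query_vec, vec) for pid, vec in passage_vecs.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)  # passage_a scores highest
```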
Powerful coding model offering advanced capabilities in code generation, completion, and infilling.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Advanced programming model for code completion, summarization, and generation.
Advanced programming model for code completion, summarization, and generation.
Cutting-edge text generation model for text understanding, transformation, and code generation.
Cutting-edge text generation model for text understanding, transformation, and code generation.
Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.
Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.
Grades responses on five attributes: helpfulness, correctness, coherence, complexity, and verbosity.
Powerful model trained on English and Chinese for diverse tasks including chatbots and creative writing.
Creates diverse synthetic data that mimics the characteristics of real-world data.
This LLM follows instructions, completes requests, and generates creative text.
Advanced text-to-image model for generating high-quality images.
OCDNet and OCRNet are pre-trained models designed for optical character detection and recognition respectively.
Leading LLM for accurate, contextually relevant responses in the medical domain.
Leading LLM for accurate, contextually relevant responses in the medical domain.
Generates high-quality numerical embeddings from text inputs.
Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.
Embedding model for text retrieval tasks, excelling in dense, multi-vector, and sparse retrieval.
LLM for improved language comprehension and chatbot-oriented capabilities in Traditional Chinese.
Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.
Advanced programming model for code generation, completion, reasoning, and instruction following.
Software programming LLM for code generation, completion, explanation, and multi-turn conversation.
Software programming LLM for code generation, completion, explanation, and multi-turn conversation.
EfficientDet-based object detection network to detect 100 specific retail objects from an input video.
A generative model of protein backbones for protein binder design.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Long-context, cutting-edge lightweight open language model excelling in high-quality reasoning.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
LLM to represent and serve the linguistic and cultural diversity of Southeast Asia.
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG.
Optimized community model for text embedding.
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
An MoE LLM that follows instructions, completes requests, and generates creative text.
Powers complex conversations with superior contextual understanding, reasoning and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Language model with a novel recurrent architecture for faster inference when generating long sequences.
Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion.
Lightweight language model deployable on a laptop or desktop, or in the cloud, for summarization and reasoning.
GPU-accelerated generation of text embeddings used for question-answering retrieval.
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
Generate images and stunning visuals with realistic aesthetics.
VISTA-3D is a specialized interactive foundation model for segmenting and annotating human anatomies.
Cutting-edge text generation model for text understanding, transformation, and code generation.
LLM capable of generating code from natural language and vice versa.
This LLM follows instructions, completes requests, and generates creative text.
Generate BAM output given one or more pairs of FASTQ files by running BWA-MEM & GATK best practices.
Run Google's DeepVariant optimized for GPU. Switch models for high accuracy on all major sequencers.
Stable Video Diffusion (SVD) is a generative diffusion model that leverages a single image as a conditioning frame to synthesize video sequences.
A fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.
An MoE LLM that follows instructions, completes requests, and generates creative text.