FLUX.1 is a state-of-the-art suite of image generation models
A general-purpose multimodal, multilingual MoE model with 128 experts and 17B active parameters.
A multimodal, multilingual MoE model with 16 experts and 17B active parameters.
Create AI agents that reason, plan, reflect and refine to produce high-quality reports based on source materials of your choice.
Generate large volumes of synthetic motion trajectories for robot manipulation from just a few human demonstrations.
Generates physics-aware video world states from text and image prompts for physical AI development.
Generates future frames of a physics-aware world state based on just an image or short video prompt for physical AI development.
The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
A lightweight, advanced multilingual SLM for edge computing and resource-constrained applications
Cutting-edge vision-language model excelling in retrieving text and metadata from images.
Lightweight multilingual LLM powering AI applications in latency-bound, memory/compute-constrained environments
Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.
This workflow shows how generative AI can generate DNA sequences that can be translated into proteins for bioengineering.
Connect AI applications to multimodal enterprise data with a scalable retrieval-augmented generation (RAG) pipeline built on highly performant, industry-leading NIM microservices for faster PDF data extraction and more accurate information retrieval.
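As a rough illustration of how an application might talk to such a pipeline, the Python sketch below embeds a few passages and a query through OpenAI-compatible NIM endpoints, ranks them by cosine similarity, and passes the best match to an instruct model. The base URL, model identifiers, and the `input_type`/`truncate` fields are assumptions drawn from typical hosted NIM deployments and may differ in a given environment.

```python
# Minimal RAG sketch against OpenAI-compatible NIM endpoints.
# Model ids and the input_type/truncate fields are assumptions; check the
# catalog for the exact identifiers exposed by your deployment.
import os
import numpy as np
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ.get("NVIDIA_API_KEY"),
)

passages = [
    "NIM microservices expose OpenAI-compatible REST endpoints.",
    "The retrieval pipeline extracts text and tables from PDFs before indexing.",
]

def embed(texts, input_type):
    # input_type ("query" vs. "passage") is used by NVIDIA retrieval embedding
    # NIMs; omit it for models that do not accept it.
    resp = client.embeddings.create(
        model="nvidia/nv-embedqa-e5-v5",  # assumed model id
        input=texts,
        extra_body={"input_type": input_type, "truncate": "END"},
    )
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(passages, "passage")
query = "How do applications talk to NIM microservices?"
q_vec = embed([query], "query")[0]

# Cosine-similarity ranking, then feed the best passage to an instruct LLM.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = passages[int(scores.argmax())]

answer = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",  # assumed model id
    messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)
```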
Instruction-tuned LLM achieving SoTA performance on reasoning, math, and general knowledge
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
This blueprint shows how generative AI and accelerated NIM microservices can design protein binders smarter and faster.
Generate detailed, structured reports on any topic using LangGraph and Llama3.3 70B NIM
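A minimal LangGraph sketch of this plan-then-write flow is shown below; the two-node graph, the prompts, and the `meta/llama-3.3-70b-instruct` model id on the hosted NIM endpoint are illustrative assumptions, not the blueprint's exact implementation.

```python
# Plan-then-write report sketch with LangGraph and a hosted NIM endpoint.
import os
from typing import TypedDict
from langgraph.graph import StateGraph, END
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ.get("NVIDIA_API_KEY"),
)
MODEL = "meta/llama-3.3-70b-instruct"  # assumed model id

class ReportState(TypedDict):
    topic: str
    outline: str
    report: str

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def plan(state: ReportState) -> dict:
    # Planning step: draft a short section outline for the requested topic.
    return {"outline": ask(f"Draft a short section outline for a report on: {state['topic']}")}

def write(state: ReportState) -> dict:
    # Writing step: expand the outline into a structured report.
    return {"report": ask(f"Write a structured report following this outline:\n{state['outline']}")}

graph = StateGraph(ReportState)
graph.add_node("plan", plan)
graph.add_node("write", write)
graph.set_entry_point("plan")
graph.add_edge("plan", "write")
graph.add_edge("write", END)

app = graph.compile()
result = app.invoke({"topic": "GPU-accelerated RAG pipelines", "outline": "", "report": ""})
print(result["report"])
```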
Multimodal vision-language model that understands text, images, and video and creates informative responses
Advanced LLM for code generation, reasoning, and fixing across popular programming languages.
Powerful mid-size code model with a 32K context length, excelling in coding in multiple languages.
Powerful LLM designed for creative thinking and writing.
Context-aware chart extraction that can detect 18 classes of basic chart elements, excluding plot elements.
Advanced AI model that detects faces and identifies deepfake images.
Create intelligent virtual assistants for customer service across every industry
A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language.
Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification
Shutterstock Generative 3D service for 360 HDRi generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses.
Efficient hybrid state-space model designed for conversational and reasoning tasks.
Sovereign AI model trained on the Japanese language that understands regional nuances.
Sovereign AI model trained on the Japanese language that understands regional nuances.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Unique language model that delivers an unmatched balance of accuracy and efficiency.
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.
Generates consistent characters across a series of images without requiring additional training.
Robust image classification model for detecting and managing AI-generated content.
Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.
Sovereign AI model trained on the Japanese language that understands regional nuances.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Optimized SLM for on-device inference and fine-tuned for roleplay, RAG, and function calling
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Advanced LLM based on a Mixture of Experts architecture to deliver compute-efficient content generation
Lightweight multilingual LLM powering AI applications in latency-bound, memory/compute-constrained environments
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.
Vision foundation model capable of performing diverse computer vision and vision language tasks.
Advanced small language model for generative AI applications at the edge
Shutterstock Generative 3D service for 3D asset generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries
Getty Images’ API service for 4K image generation. Trained on NVIDIA Edify using Getty Images' commercially safe creative libraries.
Supports Chinese and English to handle tasks including chat, content generation, coding, and translation.
Model for writing and interacting with code across a wide range of programming languages and tasks.
Supports Chinese and English chat, coding, math, instruction following, and quiz solving
Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks.
Powers complex conversations with superior contextual understanding, reasoning and text generation.
Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.
Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU.
Multilingual text question-answering retrieval, transforming textual information into dense vector representations.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Advanced programming model for code completion, summarization, and generation
Advanced programming model for code completion, summarization, and generation
Cutting-edge text generation model for text understanding, transformation, and code generation.
Cutting-edge text generation model for text understanding, transformation, and code generation.
Grades responses on five attributes: helpfulness, correctness, coherence, complexity, and verbosity.
Creates diverse synthetic data that mimics the characteristics of real-world data.
This LLM follows instructions, completes requests, and generates creative text.
Advanced text-to-image model for generating high-quality images
Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.
LLM for improved language comprehension and chatbot-oriented capabilities in Traditional Chinese.
Visual ChangeNet detects pixel-level change maps between two images and outputs a semantic change segmentation mask
Advanced programming model for code generation, completion, reasoning, and instruction following.
Software programming LLM for code generation, completion, explanation, and multi-turn conversation.
Software programming LLM for code generation, completion, explanation, and multi-turn conversation.
EfficientDet-based object detection network to detect 100 specific retail objects from an input video.
A generative model of protein backbones for protein binder design.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Long-context, cutting-edge, lightweight open language model excelling in high-quality reasoning.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
LLM to represent and serve the linguistic and cultural diversity of Southeast Asia
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG.
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
Powers complex conversations with superior contextual understanding, reasoning and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Novel recurrent architecture based language model for faster inference when generating long sequences.
Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion.
GPU-accelerated generation of text embeddings used for question-answering retrieval.
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
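To illustrate the reranking idea generically, the sketch below scores query-passage pairs with a public cross-encoder from sentence-transformers as a stand-in; the checkpoint and API shown are not the NIM model or endpoint described above.

```python
# Generic cross-encoder reranking sketch; the public checkpoint below is a
# stand-in for the hosted reranking model, not the NIM itself.
from sentence_transformers import CrossEncoder

query = "How do I rotate an API key?"
passages = [
    "Keys can be rotated from the account settings page.",
    "The service supports both REST and gRPC transports.",
]

# Scores each (query, passage) pair; a higher score means the passage is more
# likely to contain the information needed to answer the question.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = model.predict([(query, p) for p in passages])

for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```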
Generate images and stunning visuals with realistic aesthetics.
This LLM follows instructions, completes requests, and generates creative text.
Stable Video Diffusion (SVD) is a generative diffusion model that leverages a single image as a conditioning frame to synthesize video sequences.
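A minimal image-to-video sketch with the publicly released SVD weights in diffusers might look like the following; the checkpoint id, output resolution, and fps are assumptions, and a hosted service may expose a different interface.

```python
# Image-to-video sketch with diffusers and the released SVD-XT checkpoint.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
).to("cuda")

# The single conditioning frame that the model animates into a short clip.
image = load_image("conditioning_frame.png").resize((1024, 576))

frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated_clip.mp4", fps=7)
```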
A fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation
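Assuming an SDXL-Turbo-style distilled checkpoint as a representative single-evaluation model, a minimal diffusers sketch could look like this; the model id and prompt are illustrative only.

```python
# Single-step text-to-image sketch with a distilled (turbo-style) model.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16  # assumed checkpoint
).to("cuda")

# Distilled models of this kind skip classifier-free guidance and need only a
# single denoising step, i.e. one network evaluation per image.
image = pipe(
    "a photorealistic mountain lake at sunrise",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("lake.png")
```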