Shutterstock Generative 3D service for 360 HDRi generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Generates consistent characters across a series of images without requiring additional training.
Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.
Advanced dense LLM with state-of-the-art reasoning, knowledge, and coding capabilities.
Grounding DINO is an open-vocabulary, zero-shot object detection model.
An enterprise-grade text-to-image model, trained on a compliant dataset, that produces high-quality images.
Natural, high-fidelity, English voices for personalizing text-to-speech services and voiceovers.
Enable smooth global interactions in 32 languages.
Expressive and engaging English voices for Q&A assistants, brand ambassadors, and service robots.
Record-setting accuracy and performance for English transcription.
State-of-the-art accuracy and speed for English transcriptions.
ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.
Vision foundation model capable of performing diverse computer vision and vision language tasks.
Advanced small language generative AI model for edge applications.
State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code.
AI-powered search for OpenUSD data, 3D models, images, and assets using text or image-based inputs.
Shutterstock Early Access preview of Generative 3D service for 360 HDRi generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.
Create facial animations from a portrait photo and synchronize mouth movement with audio.
Verify compatibility of OpenUSD assets with instant RTX render and rule-based validation.
Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks.
Powers complex conversations with superior contextual understanding, reasoning, and text generation.
Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.
Most advanced language model for reasoning, code, and multilingual tasks; runs on a single GPU.
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
GPU-accelerated generation of text embeddings used for question-answering retrieval.
GPU-accelerated generation of text embeddings used for question-answering retrieval.
MAISI is a pre-trained volumetric (3D) CT Latent Diffusion Generative Model.
Cutting-edge text generation model for text understanding, transformation, and code generation.
Cutting-edge text generation model for text understanding, transformation, and code generation.
Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.
Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.
Model for writing and interacting with code across a wide range of programming languages and tasks.
Creates diverse synthetic data that mimics the characteristics of real-world data.
Advanced text-to-image model for generating high-quality images.
OCDNet and OCRNet are pre-trained models designed for optical character detection and recognition, respectively.
Generates high-quality numerical embeddings from text inputs.
Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.
Embedding model for text retrieval tasks, excelling in dense, multi-vector, and sparse retrieval.
Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.
EfficientDet-based object detection network to detect 100 specific retail objects from an input video.
A generative model of protein backbones for protein binder design.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Vision language model adept at comprehending text and visual inputs to produce informative responses.
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
GPU-accelerated generation of text embeddings.
An MoE LLM that follows instructions, completes requests, and generates creative text.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Excels in complex multilingual reasoning tasks, including text understanding and code generation.
GPU-accelerated generation of text embeddings used for question-answering retrieval.
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
Generate images and stunning visuals with realistic aesthetics.
Groundbreaking multimodal model designed to understand and reason about visual elements in images.
One-shot visual language understanding model that translates images of plots into tables.
Multi-modal vision-language model that understands text and images and generates informative responses.
Multi-modal model for a wide range of tasks, including image understanding and language generation.
VISTA-3D is a specialized interactive foundation model for segmenting and annotating human anatomies.
LLM capable of generating code from natural language and vice versa.
Generate BAM output from one or more pairs of FASTQ files by running BWA-MEM and GATK best practices.
Run Google's DeepVariant optimized for GPU. Switch models for high accuracy on all major sequencers.
Stable Video Diffusion (SVD) is a generative diffusion model that leverages a single image as a conditioning frame to synthesize video sequences.
A fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.
AI-based weather prediction pipeline with global models and downscaling models.
MolMIM performs controlled generation, finding molecules with the right properties.
Predicts the 3D structure of a protein from its amino acid sequence.
Predicts the 3D structure of how a molecule interacts with a protein.
Converts streamed audio to facial blendshapes for real-time lip syncing and facial performances.
An MoE LLM that follows instructions, completes requests, and generates creative text.
World-record accuracy and performance for complex route optimization.