Advanced AI model detects faces and identifies deep fake images.
nvidia/llama-3.2-nv-rerankqa-1b-v1
Efficiently refine retrieval results over multiple sources and languages.
nvidia/llama-3.2-nv-embedqa-1b-v1
World-class multilingual and cross-lingual question-answering retrieval.
shutterstock/edify-360-hdri
Shutterstock Generative 3D service for 360 HDRi generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
nvidia/vila
Multi-modal vision-language model that understands text/images and generates informative responses.
Robust image classification model for detecting and managing AI-generated content.
nvidia/nv-dinov2
NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.
briaai/BRIA-2.3
An enterprise-grade text-to-image model, trained on a compliant dataset, that produces high-quality images.
microsoft/florence-2
Vision foundation model capable of performing diverse computer vision and vision language tasks.
nvidia/usdsearch
AI-powered search for OpenUSD data, 3D models, images, and assets using text or image-based inputs.
Shutterstock/edify-3d
Shutterstock Generative 3D service for 3D asset generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.
GettyImages/edify-image
Getty Images’ API service for 4K image generation. Trained on NVIDIA Edify using Getty Images' commercially safe creative libraries.
nvidia/nv-rerankqa-mistral-4b-v3
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
nvidia/nv-embedqa-e5-v5
GPU-accelerated generation of text embeddings used for question-answering retrieval.
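QA-retrieval embedding models like this one typically distinguish query embeddings from passage embeddings. The sketch below builds a request body for an OpenAI-style embeddings endpoint; the endpoint URL and the `input_type`/`truncate` fields are assumptions modeled on NVIDIA's hosted API conventions, not a verified schema.

```python
# Hypothetical sketch: build the JSON body for an embeddings request to a
# hosted NIM endpoint. Endpoint URL and the "input_type"/"truncate" fields
# are assumptions -- confirm against the current API reference before use.
import json

ENDPOINT = "https://integrate.api.nvidia.com/v1/embeddings"  # assumed URL

def build_embedding_request(texts, input_type="query"):
    """Build an embeddings request body.

    input_type separates 'query' from 'passage' embeddings, which
    asymmetric question-answering retrieval models require.
    """
    return {
        "model": "nvidia/nv-embedqa-e5-v5",
        "input": texts,
        "input_type": input_type,  # 'query' or 'passage' (assumed field)
        "truncate": "NONE",        # reject over-long input (assumed field)
    }

body = build_embedding_request(["What is the warranty period?"])
print(json.dumps(body, indent=2))
```

At retrieval time you would embed passages once with `input_type="passage"` and embed each incoming question with `input_type="query"`.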
nvidia/nv-embedqa-mistral-7b-v2
GPU-accelerated generation of text embeddings used for question-answering retrieval.
nvidia/maisi
MAISI is a pre-trained volumetric (3D) CT Latent Diffusion Generative Model.
nvidia/nvclip
NV-CLIP is a multimodal embeddings model for image and text.
stabilityai/stable-diffusion-3-medium
Advanced text-to-image model for generating high-quality images.
nvidia/ocdrnet
OCDNet and OCRNet are pre-trained models designed for optical character detection and recognition, respectively.
baai/bge-m3
Embedding model for text retrieval tasks, excelling in dense, multi-vector, and sparse retrieval.
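Dense retrieval, the first of the three modes mentioned above, ranks passages by vector similarity between a query embedding and passage embeddings. A minimal sketch with toy three-dimensional vectors standing in for real model output:

```python
# Toy dense-retrieval scoring: rank passages by cosine similarity to the
# query embedding. Real embeddings have hundreds of dimensions; these
# 3-d vectors are illustrative stand-ins.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [0.9, 0.1, 0.0]
passages = {
    "p1": [0.8, 0.2, 0.1],   # points roughly the same way as the query
    "p2": [0.0, 0.1, 0.95],  # nearly orthogonal: unrelated content
}
ranked = sorted(passages, key=lambda p: cosine(query, passages[p]), reverse=True)
print(ranked)  # p1 outranks p2
```

Sparse and multi-vector retrieval score differently (term overlap and per-token interactions), but the ranking step has the same shape: score every candidate against the query, then sort.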
nvidia/visual-changenet
Visual Changenet detects pixel-level change maps between two images and outputs a semantic change segmentation mask.
nvidia/retail-object-detection
EfficientDet-based object detection network to detect 100 specific retail objects from an input video.
microsoft/phi-3-vision-128k-instruct
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
snowflake/arctic-embed-l
GPU-accelerated generation of text embeddings.
nvidia/embed-qa-4
GPU-accelerated generation of text embeddings used for question-answering retrieval.
nvidia/rerank-qa-mistral-4b
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
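A reranker takes one query plus a shortlist of candidate passages and returns a relevance score per passage. The sketch below only builds the request body; the endpoint path and the `{"query": {"text": ...}, "passages": [...]}` shape are assumptions modeled on NVIDIA's retrieval API conventions, so verify the schema for your deployment.

```python
# Hypothetical sketch of a reranking request body. Endpoint URL and
# payload shape are assumptions -- check the current API reference.
ENDPOINT = "https://ai.api.nvidia.com/v1/retrieval/nvidia/reranking"  # assumed URL

def build_rerank_request(query, passages):
    """Pair one query with candidate passages; the model scores each
    passage on how likely it is to contain the answer."""
    return {
        "model": "nvidia/rerank-qa-mistral-4b",
        "query": {"text": query},
        "passages": [{"text": p} for p in passages],
    }

body = build_rerank_request(
    "When does the warranty expire?",
    ["The warranty lasts two years.", "Shipping takes five days."],
)
print(len(body["passages"]))  # 2
```

In a retrieval pipeline this step usually follows embedding search: fetch the top-k passages cheaply, then rerank that shortlist with the cross-encoder for better final ordering.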
google/deplot
One-shot visual language understanding model that translates images of plots into tables.
nvidia/vista-3d
VISTA-3D is a specialized interactive foundation model for segmenting and annotating human anatomies.
stabilityai/stable-video-diffusion
Stable Video Diffusion (SVD) is a generative diffusion model that leverages a single image as a conditioning frame to synthesize video sequences.
stabilityai/sdxl-turbo
A fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.