Multi-modal vision-language model that understands text, images, and video and generates informative responses.
SAM 2 is a segmentation model that enables fast, precise selection of any object in any video or image.
Create real-time digital twins by combining accelerated solvers, simulation AI, and virtual environments.
Advanced AI model that detects faces and identifies deepfake images.
Ingest massive volumes of live or archived video and extract insights for summarization and interactive Q&A.
Cutting-edge vision-language model excelling at high-quality reasoning over images.
Robust image classification model for detecting and managing AI-generated content.
Cutting-edge open multimodal model excelling at high-quality reasoning over images.
NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.
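A typical use of such image embeddings is similarity search. The sketch below assumes the embedding vectors have already been produced (the example vectors are hypothetical and far shorter than real NV-DINOv2 outputs) and only illustrates comparing them with cosine similarity:

```python
import math

def cosine_similarity(a, b):
    # Compare two embedding vectors by the cosine of the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for illustration only.
emb_cat_1 = [0.9, 0.1, 0.3]
emb_cat_2 = [0.8, 0.2, 0.35]
emb_car = [-0.2, 0.9, -0.4]

# Similar images score closer to 1.0 than dissimilar ones.
print(cosine_similarity(emb_cat_1, emb_cat_2))
print(cosine_similarity(emb_cat_1, emb_car))
```

In practice the vectors would come from the model's inference endpoint, and a vector database would replace the pairwise comparison at scale.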
Grounding DINO is an open-vocabulary, zero-shot object detection model.
Vision foundation model capable of performing diverse computer vision and vision-language tasks.
OCDNet and OCRNet are pre-trained models designed for optical character detection and recognition, respectively.
Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.
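To make the input/output shape of change detection concrete, here is a minimal sketch that builds a binary change mask by naive per-pixel differencing of two grayscale images. Visual ChangeNet learns this mapping semantically rather than thresholding raw intensities; the images, function name, and threshold below are illustrative assumptions:

```python
def change_mask(img_a, img_b, threshold=30):
    # Naive per-pixel change map: 1 where grayscale intensities differ by
    # more than `threshold`, 0 elsewhere. A learned model like Visual
    # ChangeNet replaces this rule; the output format is the same idea.
    return [
        [1 if abs(pa - pb) > threshold else 0 for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

before = [[10, 10, 200], [10, 10, 200]]
after_ = [[10, 90, 200], [10, 95, 200]]
print(change_mask(before, after_))  # [[0, 1, 0], [0, 1, 0]]
```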
EfficientDet-based object detection network that detects 100 specific retail objects in an input video.