Multi-modal vision-language model that understands text and images and generates informative responses.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.
Grounding DINO is an open-vocabulary, zero-shot object detection model.
Vision foundation model capable of performing diverse computer vision and vision-language tasks.
NV-CLIP generates vector embeddings for a given image or text.
OCDNet and OCRNet are pre-trained models designed for optical character detection and recognition, respectively.
Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.
EfficientDet-based object detection network that detects 100 specific retail objects in an input video.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Vision-language model adept at comprehending text and visual inputs to produce informative responses.
Groundbreaking multimodal model designed to understand and reason about visual elements in images.
Multi-modal vision-language model that understands text and images and generates informative responses.
Multi-modal model for a wide range of tasks, including image understanding and language generation.
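Embedding models in the list above (NV-DINOv2, NV-CLIP) return fixed-length vectors that are typically compared with cosine similarity, e.g. to match an image against candidate text descriptions. A minimal sketch of that comparison step, assuming the embeddings have already been fetched; the vectors here are hypothetical stand-ins, not real model output:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; real embeddings would come from the model's API.
image_emb = np.array([0.12, 0.34, 0.56, 0.78])
text_emb = np.array([0.11, 0.35, 0.55, 0.80])

score = cosine_similarity(image_emb, text_emb)
print(f"similarity: {score:.4f}")  # near 1.0 for closely aligned vectors
```

In practice one would rank several candidate text embeddings by this score and take the highest as the best match for the image.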