
Nemotron Nano 12B v2 VL enables multi-image and video understanding, along with visual Q&A and summarization capabilities.
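
As a minimal sketch of how a hosted VLM like this might be queried for multi-image Q&A, assuming an OpenAI-compatible chat endpoint (the base URL, API-key variable, and model ID below are illustrative assumptions, not documented values):

```python
# Minimal sketch: multi-image visual Q&A against an OpenAI-compatible endpoint.
# base_url, the env var name, and the model ID are assumptions for illustration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var holding a valid key
)

response = client.chat.completions.create(
    model="nvidia/nemotron-nano-12b-v2-vl",  # hypothetical model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize what happens across these frames."},
            {"type": "image_url", "image_url": {"url": "https://example.com/frame1.png"}},
            {"type": "image_url", "image_url": {"url": "https://example.com/frame2.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```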

Reasoning vision language model (VLM) for physical AI and robotics.

Generates physics-aware video world states for physical AI development using text prompts and multiple spatial control inputs derived from real-world data or simulation.

Generates future frames of a physics-aware world state from just an image or a short video prompt for physical AI development.
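
A hypothetical sketch of invoking such a world model over HTTP; the endpoint URL, payload fields, and response schema are all assumptions for illustration:

```python
# Hypothetical sketch: request future frames from a hosted world-model service.
# The URL, payload fields, and response format are illustrative assumptions.
import base64
import os
import requests

with open("scene.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "https://example.com/v1/world-model/generate",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "image": image_b64,                            # image prompt
        "prompt": "robot arm picks up the red cube",   # optional text control
        "num_frames": 16,                              # hypothetical parameter: frames to roll out
    },
    timeout=300,
)
resp.raise_for_status()
with open("future_frames.mp4", "wb") as f:
    f.write(base64.b64decode(resp.json()["video"]))    # hypothetical response field
```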

Multimodal vision-language model that understands text, image, and video inputs and generates informative responses.

Ingests massive volumes of live or archived video and extracts insights for summarization and interactive Q&A.
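
One plausible ingestion step is sampling frames at a fixed interval before passing them to a VLM for captioning and Q&A; a minimal sketch using OpenCV (the sampling interval is an arbitrary choice):

```python
# Sketch of the sampling stage of a chunked video-summarization pipeline:
# grab one frame every N seconds so downstream VLM calls stay tractable.
import cv2

def sample_frames(path: str, every_n_seconds: float = 5.0) -> list:
    """Return one frame every `every_n_seconds` from the video at `path`."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```

Each sampled frame can then be encoded and sent to a VLM endpoint such as the one sketched above, with a final text-only call to summarize the per-chunk captions.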

Estimates a person's gaze angles in a video and redirects the gaze to appear frontal.
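
A rough shape for a per-frame gaze loop; `estimate_gaze` below is a hypothetical placeholder for the actual estimator, which this listing does not document:

```python
# Sketch of a per-frame gaze-estimation loop. estimate_gaze is a placeholder;
# swap in the real model to get actual (pitch, yaw) angles.
import cv2

def estimate_gaze(frame):
    """Hypothetical stand-in returning (pitch, yaw) gaze angles in degrees."""
    return 0.0, 0.0  # placeholder output; replace with the real estimator

cap = cv2.VideoCapture("person.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pitch, yaw = estimate_gaze(frame)
    # A redirection step would warp the eye regions so (pitch, yaw) -> (0, 0),
    # i.e. a frontal gaze; that model is not sketched here.
    print(f"pitch={pitch:+.1f} deg, yaw={yaw:+.1f} deg")
cap.release()
```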

Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.
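
For intuition about the task's inputs and outputs only, here is a naive pixel-difference baseline between two aligned images; the actual model performs learned semantic change segmentation, not raw differencing:

```python
# Naive illustrative baseline: a pixel-wise difference mask between two
# aligned images. It only shows the input/output shape of the change-detection
# task; a learned model is far more robust to lighting and viewpoint shifts.
import cv2

before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(before, after)                           # per-pixel intensity change
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # binarize: changed vs. unchanged
cv2.imwrite("change_mask.png", mask)
```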

EfficientDet-based object detection network that detects 100 specific retail objects in an input video.
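
A sketch of running a hosted detector over sampled video frames; the endpoint URL and response schema below are hypothetical:

```python
# Sketch: run a detector over sampled video frames via a hosted endpoint.
# The endpoint URL and the response schema are illustrative assumptions.
import cv2
import requests

cap = cv2.VideoCapture("shelf.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
        ok_enc, jpeg = cv2.imencode(".jpg", frame)
        resp = requests.post(
            "https://example.com/v1/retail-detection",  # hypothetical endpoint
            files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
            timeout=60,
        )
        for det in resp.json().get("detections", []):   # hypothetical schema
            print(frame_idx, det.get("label"), det.get("bbox"))
    frame_idx += 1
cap.release()
```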