Relighting

Re-illuminate people in video to match target lighting from a 360 HDRI environment map.

Tags: HDRI, lighting, NVIDIA AI for Media, remote contribution

Model Overview

Description:

The Video Relighting model re-illuminates a person in a video to match the target lighting provided by a 360 High Dynamic Range Image (HDRI).
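
As a concrete illustration of this contract, the sketch below iterates over video frames and applies a per-frame relighting step against a 360 HDRI. The relight() function is a pass-through placeholder for the actual model invocation, and the file names are hypothetical; only the surrounding I/O handling (RGB frames in, an .hdr environment map as the lighting target, output at the input resolution) reflects what this card describes.

```python
import cv2
import numpy as np

def relight(frame_rgb: np.ndarray, hdri: np.ndarray) -> np.ndarray:
    """Placeholder for the Video Relighting model call (assumption);
    passes the frame through unchanged so the sketch runs end to end."""
    return frame_rgb

# OpenCV reads Radiance .hdr files as float32 BGR with IMREAD_UNCHANGED.
hdri = cv2.imread("studio_env.hdr", cv2.IMREAD_UNCHANGED)

cap = cv2.VideoCapture("input.mp4")
writer = None
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    out_rgb = relight(frame_rgb, hdri)  # output keeps the input resolution
    out_bgr = cv2.cvtColor(out_rgb, cv2.COLOR_RGB2BGR)
    if writer is None:
        h, w = out_bgr.shape[:2]
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        writer = cv2.VideoWriter("relit.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    writer.write(out_bgr)

cap.release()
if writer is not None:
    writer.release()
```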

Video Relighting is available under NVIDIA AI for Media, a developer platform for deploying AI features that enhance audio and video and create new experiences in real-time audio-video communication. NVIDIA AI for Media's state-of-the-art models deliver high-quality AI effects using standard microphones and cameras, with no additional specialized equipment.

NVIDIA AI for Media is exclusively part of NVIDIA AI Enterprise for production workflows — an extensive library of full-stack software, including AI solution workflows, frameworks, pre-trained models, and infrastructure optimization.

This model is ready for commercial/non-commercial use.

License/Terms of Use

Refer to NVIDIA SOFTWARE LICENSE AGREEMENT and Product-Specific Terms for NVIDIA AI Products. Use of the models is governed by the NVIDIA Open Model License.

Deployment Geography:

Global

Use Case:

  1. Virtual Production: Match lighting between real subjects and virtual environments for seamless compositing.

  2. Video Conferencing: Adjust participant lighting to match virtual backgrounds or improve appearance.

  3. Content Creation: Enhance lighting consistency when combining footage from different sources.

Release Date:

NGC - 01/06/2026 via https://catalog.ngc.nvidia.com/orgs/nvidia/teams/maxine/models/nvvfxrelighting

Reference(s):

  • NVIDIA AI for Media

Model Architecture:

Architecture Type: Convolutional Neural Network (CNN), Generative Adversarial Network (GAN)
Network Architecture: ResNet, U-Net

  • This model was developed based on LUMOS.
  • Number of model parameters: 2.06 × 10⁷ (about 20.6 million)

Input:

Input Type(s): Image
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Two-Dimensional (2D)
Other Properties Related to Input: Input image resolution from 360p to 4K; input HDR image with extension .hdr or .exr; FP16/FP32 data type.
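
As a minimal sketch, the checks below encode these constraints, assuming "360p to 4K" refers to frame height (360 to 2160 pixels) and that the FP16/FP32 requirement applies to the HDR data; the exact limits the SDK enforces may differ.

```python
from pathlib import Path
import numpy as np

# Assumed bounds: "360p to 4K" read as frame height between 360 and 2160 px.
MIN_HEIGHT, MAX_HEIGHT = 360, 2160

def check_frame(frame: np.ndarray) -> None:
    """Validate a video frame against the documented input contract."""
    if frame.ndim != 3 or frame.shape[2] != 3:
        raise ValueError("expected a 2D RGB image of shape (H, W, 3)")
    if not (MIN_HEIGHT <= frame.shape[0] <= MAX_HEIGHT):
        raise ValueError(f"frame height {frame.shape[0]} is outside the 360p-4K range")

def check_hdri(path: str, hdri: np.ndarray) -> None:
    """Validate the environment map's extension and floating-point type."""
    if Path(path).suffix.lower() not in {".hdr", ".exr"}:
        raise ValueError("environment map must have an .hdr or .exr extension")
    if hdri.dtype not in (np.float16, np.float32):
        raise ValueError("HDR data should be FP16 or FP32")
```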

Output:

Output Type(s): Image
Output Format(s): Red, Green, Blue (RGB)
Output Parameters: Two-Dimensional (2D)
Other Properties Related to Output: Output resolution is the same as the input resolution.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • AI for Media VFX SDK 1.1.0.0

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Blackwell
  • NVIDIA Hopper
  • NVIDIA Ada Lovelace
  • NVIDIA Turing

Supported Operating System(s):

  • Ubuntu 20.04
  • Ubuntu 22.04
  • Ubuntu 24.04
  • Debian 12
  • Rocky/RHEL 8.*
  • Rocky/RHEL 9.*
  • Windows 10
  • Windows 11

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Training, Testing, and Evaluation Datasets:

  • The total size (in number of data points): around 700k
  • Total number of datasets: 24
  • Dataset partition: Training [98%], testing [1%], evaluation [1%]

Training Dataset:

Link:

  • Turbosquid assets
  • FFHQ CCBY
  • HDRi Haven HDRI asset
  • hdri-skies.com HDR Data Set
  • Universite Laval HDR Data Set
  • noemotionhdrs.net HDR Data Set
  • HDRMaps.com HDR Data Set
  • PolyHaven HDR
  • TalkingHeads Dataset
  • NV_UpperBody_Dataset
  • RenderPeopleDataset
  • AI Greenscreen Dataset Broadcast feedback website
  • AI Greenscreen Dataset Youtube CCBY
  • InternalCapture-GazeRedirection dataset
  • Youtube-ccby dataset
  • TripleGanger dataset

Data Collection Method by dataset: Hybrid (Human and Synthetic)

Labeling Method by dataset: Hybrid (Automated and Synthetic)

Properties (Quantity, Dataset Descriptions, Sensor(s)): The synthetic dataset contains 600k training samples of images, HDRIs, and intermediate ground truth under paired lighting conditions. The real dataset contains around 40k images and 50k videos of people in different lighting environments. The HDR dataset contains over 8,000 HDR images.

Data Modality

  • Image
  • Environment map

Image Training Data Size

  • Less than one million images

Testing Dataset:

Link:

  • PolyHaven HDR
  • NV_UpperBody_Dataset
  • AI Greenscreen Dataset Youtube CCBY
  • InternalCapture-GazeRedirection dataset
  • Youtube-ccby dataset

Data Collection Method by dataset: Human

Labeling Method by dataset: Automated

Properties (Quantity, Dataset Descriptions, Sensor(s)): Around 200 videos of varying length, each showing a single person in front of the camera conducting a video conference or broadcast. The dataset varies in quality, lighting, head pose, and gaze angle, as well as diversity factors such as race, eye color, and gender.

Evaluation Dataset:

Link:

  • PolyHaven HDR
  • TripleGanger dataset

Data Collection Method by dataset: Synthetic

Labeling Method by dataset: Synthetic

Properties (Quantity, Dataset Descriptions, Sensor(s)): 50 sets of image-albedo pairs rendered in Omniverse Kit using TripleGanger assets and 600 PolyHaven HDRI images.

Inference:

Engine: TensorRT
Test Hardware:

  • T4, A10, A40, A100, L4, L40, B40
  • RTX 2060, 3070, 4090, 5080
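
For context on the inference path, below is a hedged sketch of running a serialized TensorRT engine from Python (assuming TensorRT 8.6+, the cuda-python bindings, and static tensor shapes). The relighting model ships inside the AI for Media VFX SDK rather than as a standalone engine file, so the file name and the "input"/"output" tensor names are placeholders; the TensorRT and CUDA runtime calls themselves are the standard APIs.

```python
import numpy as np
import tensorrt as trt
from cuda import cudart  # cuda-python runtime bindings

logger = trt.Logger(trt.Logger.WARNING)
with open("relighting.engine", "rb") as f:  # hypothetical engine file
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate a device buffer for every I/O tensor and register its address.
buffers = {}
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    dtype = np.dtype(trt.nptype(engine.get_tensor_dtype(name)))
    nbytes = trt.volume(context.get_tensor_shape(name)) * dtype.itemsize
    _, ptr = cudart.cudaMalloc(nbytes)
    buffers[name] = ptr
    context.set_tensor_address(name, ptr)

# Copy a prepared frame in, run, and copy the relit frame back out.
# "input"/"output" are placeholder tensor names.
frame = np.zeros(tuple(context.get_tensor_shape("input")), dtype=np.float32)
cudart.cudaMemcpy(buffers["input"], frame.ctypes.data, frame.nbytes,
                  cudart.cudaMemcpyKind.cudaMemcpyHostToDevice)
_, stream = cudart.cudaStreamCreate()
context.execute_async_v3(stream)
cudart.cudaStreamSynchronize(stream)
out = np.empty(tuple(context.get_tensor_shape("output")), dtype=np.float32)
cudart.cudaMemcpy(out.ctypes.data, buffers["output"], out.nbytes,
                  cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost)
```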

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image and video content. If an input image or video includes people, personal health information, or intellectual property, be aware that the generated image or video will not blur those subjects or preserve their proportions.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.