Cosmos-Reason1-7B


Reasoning vision language model (VLM) for physical AI and robotics.

Tags: Physical AI, autonomous vehicles, industrial, reasoning, robotics, smart cities, Synthetic Data Generation, video understanding, vision language model

Cosmos-Reason1-7B Overview

Description

NVIDIA Cosmos Reason, an open, customizable, 7B-parameter reasoning vision language model (VLM) for physical AI and robotics, enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. The model understands space, time, and fundamental physics, and can serve as a planning model to reason about the steps an embodied agent might take next.

Cosmos Reason excels at navigating the long tail of diverse physical-world scenarios with spatial-temporal understanding. It is post-trained on physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning, and uses chain-of-thought reasoning to understand world dynamics without human annotations.

Given a video and a text prompt, the model first converts the video into tokens using a vision encoder and a special translator called a projector. These video tokens are combined with the text prompt and fed into the core model, which uses a mix of LLM modules and techniques. This enables the model to think step-by-step and provide detailed, logical responses.
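As a rough conceptual sketch of this flow, the toy Python snippet below stands in for the real components; every function is an illustrative placeholder (not the actual ViT, projector, or LLM), and the token strings are made up for demonstration.

from typing import List

def vision_encoder(frames: List[str]) -> List[List[float]]:
    # Placeholder for the ViT: each frame becomes an embedding vector.
    return [[float(len(f))] for f in frames]

def projector(embeddings: List[List[float]]) -> List[str]:
    # Placeholder for the projector: embeddings are mapped into the LLM's token space.
    return [f"<video_token_{i}>" for i in range(len(embeddings))]

def llm_generate(tokens: List[str]) -> str:
    # Placeholder for the dense Transformer LLM: it reasons step-by-step over the
    # combined video and text tokens and emits a structured response.
    return f"<think>\nreasoning over {len(tokens)} tokens\n</think>\n\n<answer>\n...\n</answer>"

frames = ["frame_0", "frame_1", "frame_2"]                    # stand-in for decoded video frames
prompt_tokens = ["Is", "it", "safe", "to", "turn", "right?"]  # stand-in for the tokenized text prompt

video_tokens = projector(vision_encoder(frames))
print(llm_generate(video_tokens + prompt_tokens))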

Cosmos Reason can be used for robotics and physical AI applications including:

  • Data curation and annotation: Automate high-quality curation and annotation of massive, diverse training datasets.
  • Robot planning and reasoning: Act as the brain for deliberate, methodical decision-making in a robot vision language action (VLA) model. Robots such as humanoids and autonomous vehicles can interpret their environments, break complex commands down into tasks, and execute them using common sense, even in unfamiliar environments.
  • Video analytics AI agents: Extract valuable insights and perform root-cause analysis on massive volumes of video data. These agents can analyze and understand recorded or live video streams across city and industrial operations.

The model is ready for commercial use.

Model Developer: NVIDIA

License and Terms of Use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of the model is governed by the NVIDIA Open Model License Agreement. Additional Information: Apache License 2.0.

Models are commercially usable.

You are free to create and distribute Derivative Models. NVIDIA does not claim ownership of any outputs generated using the Models or Derivative Models.

Important Note: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively “Guardrail”) contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under the NVIDIA Open Model License Agreement will automatically terminate.

Deployment Geography:

Global

Use Case:

Robotics engineers and AI researchers developing embodied agents (such as autonomous vehicles and robotic systems) would use Cosmos Reason to equip their machines with spatiotemporal reasoning and a fundamental understanding of physics for navigation, manipulation, and decision-making tasks.

Reference: Technical Paper

Release Date:

  • Build.NVIDIA.com: 08/11/2025
  • Hugging Face: 08/01/2025

Model Architecture:

Architecture Type: Multi-modal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer LLM.

Network Architecture: Qwen2.5-VL-7B-Instruct.

Cosmos-Reason1-7B is post-trained from Qwen2.5-VL-7B-Instruct and follows the same model architecture.
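Because the model follows the Qwen2.5-VL-7B-Instruct architecture, it should also load with the standard Hugging Face vision-language classes. The snippet below is a minimal sketch under that assumption (the class name comes from recent transformers releases with Qwen2.5-VL support; the officially documented runtime is vLLM, shown in the Inference section).

import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

MODEL_PATH = "nvidia/Cosmos-Reason1-7B"

# Assumes a transformers version that ships the Qwen2.5-VL model classes.
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision noted under Key Inference Notes
    device_map="auto",           # requires accelerate; shards weights across available GPUs
)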

Number of model parameters:

Cosmos-Reason1-7B:

  • Vision Transformer (ViT): 675.76M (675,759,104)
  • Language Model (LLM): 7.07B (7,070,619,136)
  • Other components (output projection layer): 545.00M (544,997,376)

Computational Load:

  • Cumulative Compute: 3.2603016e+21 FLOPs
  • Estimated Energy for Model Training: 16,658,432 kWh
  • Estimated Emissions for Model Training: 5,380.674 tCO2e

Input:

Input Types: Text, Video
Input Formats: Text: String; Video: mp4
Input Parameters: Text: One-Dimensional (1D); Video: Three-Dimensional (3D)
Other Input Properties:

  • Use FPS=4 for input video to match the training setup.
  • Append "Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>." to the system prompt to encourage a long chain-of-thought reasoning response, as shown in the snippet below.

Input Context Length (ISL): 128K
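As a minimal illustration (the full vLLM example in the Inference section uses the same string), the recommended system prompt can be kept as a constant:

# Recommended system prompt to elicit long chain-of-thought responses.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer the question in the following format: "
    "<think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."
)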

Output:

Output Type: Text
Output Format: String
Output Parameters: One Dimensional (1D)
Other Output Properties:

  • We recommend setting the maximum output length to 4096 tokens or more to avoid truncating long chain-of-thought responses.

  • The model recognizes timestamps rendered at the bottom of each frame for accurate temporal localization.

  • Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engines:

  • vLLM

Supported Hardware:

  • NVIDIA Blackwell
  • NVIDIA Hopper

Training, Testing, and Evaluation Datasets:

05/17/2025

Please see our technical paper for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation data is released under Cosmos-Reason1-Benchmark. The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, AgiBot, RoboFail), ego-centric human demonstration (HoloAssist), and autonomous vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA.

All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.

08/01/2025

We enhance the model's capabilities with augmented training data. PLM-Video-Human and Nexar are used to enable dense temporal captioning. Describe Anything is added to enhance set-of-mark (SoM) prompting. We also enrich data for intelligent transportation systems (ITS) and warehouse applications.

Lastly, the Visual Critics dataset contains a collection of AI-generated videos from Cosmos-Predict2 and Wan2.1 with human annotations describing the physical correctness of the AI-generated videos.

Training Dataset

  • Data Modalities: Video, Text
  • Text Training Data Size: Not specified (included in multimodal inputs)
  • Video Training Data Size: Less than 10,000 Hours (~2M video-text pairs in SFT + RL datasets)

Data Collection Method by dataset:

  • RoboVQA: Hybrid: Automatic/Sensors
  • BridgeDataV2: Automatic/Sensors
  • AgiBot: Automatic/Sensors
  • RoboFail: Automatic/Sensors
  • HoloAssist: Human
  • AV: Automatic/Sensors
  • PLM-Video-Human: Human
  • Nexar: Automatic/Sensors
  • Describe Anything: Human
  • ITS / Warehouse: Human, Automatic
  • Visual Critics: Automatic

Labeling Method by dataset:

  • RoboVQA: Hybrid: Human, Automated
  • BridgeDataV2: Hybrid: Human, Automated
  • AgiBot: Hybrid: Human, Automated
  • RoboFail: Hybrid: Human, Automated
  • HoloAssist: Hybrid: Human, Automated
  • AV: Hybrid: Human, Automated
  • PLM-Video-Human: Human, Automated
  • Nexar: Human
  • Describe Anything: Human, Automated
  • ITS / Warehouse: Human, Automated
  • Visual Critics: Human, Automated

Evaluation Dataset

Data Collection Method by dataset:

  • RoboVQA: Hybrid: Automatic/Sensors
  • BridgeDataV2: Automatic/Sensors
  • AgiBot: Automatic/Sensors
  • RoboFail: Automatic/Sensors
  • HoloAssist: Human
  • AV: Automatic/Sensors

Labeling Method by dataset:

  • RoboVQA: Hybrid: Human, Automated
  • BridgeDataV2: Hybrid: Human, Automated
  • AgiBot: Hybrid: Human, Automated
  • RoboFail: Hybrid: Human, Automated
  • HoloAssist: Hybrid: Human, Automated
  • AV: Hybrid: Human, Automated

Evaluation Benchmark Results:

We report the model's accuracy on the embodied reasoning benchmark introduced in Cosmos-Reason1. The results differ from those presented in Table 9 of the technical paper due to additional training aimed at supporting a broader range of physical AI tasks beyond the benchmark.

Evaluation Results

Dataset     RoboVQA   AV     BridgeDataV2   AgiBot   HoloAssist   RoboFail   Average
Accuracy    87.3      70.8   63.7           48.9     62.7         57.2       65.1

Dataset Format

Modality: Video (mp4) and Text

Dataset Quantification

05/17/2025

We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of the video and text pairs is described in the table below. The AV data is currently unavailable and will be uploaded soon!

Dataset Type     RoboVQA   AV      BridgeDataV2   AgiBot   HoloAssist   RoboFail   Total Storage Size
SFT Data         1.14m     24.7k   258k           38.9k    273k         N/A        300.6GB
RL Data          252       200     240            200      200          N/A        2.6GB
Benchmark Data   110       100     100            100      100          100        1.5GB

We release text annotations for all embodied reasoning datasets, and videos for the RoboVQA and AV datasets. For the other datasets, users may download the source videos from the original data sources and locate the corresponding videos via the video names. The held-out RoboFail benchmark is released for measuring generalization capability.

08/01/2025

Dataset Type   PLM-Video-Human   Nexar   Describe Anything   ITS / Warehouse   Visual Critics   Total Storage Size
SFT Data       39k               240k    178k                24k               24k              2.6TB

Inference

Test Hardware: H100

Note: We suggest using fps=4 for the input video and max_tokens=4096 to avoid truncated responses.

from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info

# MODEL_PATH can also point to a local folder containing the downloaded safetensors weights
MODEL_PATH = "nvidia/Cosmos-Reason1-7B"

llm = LLM(
    model=MODEL_PATH,
    limit_mm_per_prompt={"image": 10, "video": 10},
)

Sampling Parameters Configuration

sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.05,
    max_tokens=4096,
)

Video Message Setup

video_messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."},
    {"role": "user", "content": [
            {"type": "text", "text": (
                    "Is it safe to turn right?"
                )
            },
            {
                "type": "video", 
                "video": "file:///path/to/your/video.mp4",
                "fps": 4,
            }
        ]
    },
]

Message Processing and Generation

# Here we use video messages as a demonstration
messages = video_messages

processor = AutoProcessor.from_pretrained(MODEL_PATH)
prompt = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)

mm_data = {}
if image_inputs is not None:
    mm_data["image"] = image_inputs
if video_inputs is not None:
    mm_data["video"] = video_inputs

llm_inputs = {
    "prompt": prompt,
    "multi_modal_data": mm_data,

    # FPS will be returned in video_kwargs
    "mm_processor_kwargs": video_kwargs,
}

outputs = llm.generate([llm_inputs], sampling_params=sampling_params)
generated_text = outputs[0].outputs[0].text

print(generated_text)
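The generated text follows the <think>/<answer> format requested by the system prompt. The helper below is a minimal sketch of one way to split the reasoning from the final answer; it is not part of the official example, and it falls back to the raw text when the tags are missing (for example, if the response was truncated by max_tokens).

import re

def split_reasoning_and_answer(text):
    # Pull out the <think> and <answer> sections from a generated response.
    think = re.search(r"<think>\s*(.*?)\s*</think>", text, re.DOTALL)
    answer = re.search(r"<answer>\s*(.*?)\s*</answer>", text, re.DOTALL)
    reasoning = think.group(1) if think else None
    final_answer = answer.group(1) if answer else text.strip()
    return reasoning, final_answer

reasoning, answer = split_reasoning_and_answer(generated_text)
print(answer)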

Key Inference Notes

  • FPS Requirement: Use FPS=4 for input video to match the training setup.
  • System Prompt: Append "Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>." to the system prompt to encourage long chain-of-thought reasoning responses.
  • Output Tokens: Recommend using 4096 or more output max tokens to avoid truncation of long chain-of-thought responses.
  • Hardware: Model is designed and optimized to run on NVIDIA GPU-accelerated systems.
  • Precision: Tested with BF16 precision for inference.
  • Operating System: Tested on Linux (other operating systems not tested).

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here