
moonshotai

kimi-k2.5

1T multimodal MoE for high‑capacity video and image understanding with efficient inference.

Mixture-of-Experts | Multimodal | Reasoning | Image-to-Text

Kimi-K2.5

Description

Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, supporting both instant and thinking modes, as well as conversational and agentic paradigms.

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration:

This model is not owned or developed by NVIDIA. This model has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA Kimi-K2.5 Model Card.

License and Terms of Use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Modified MIT License.

Deployment Geography:

Global

Use Case:

Designed for developers and enterprises building multi-modal AI agents for scenario-specific automation, visual analysis applications, advanced web development with autonomous image search and layout iteration, coding assistance, and tool-augmented agentic workflows.

Release Date:

Build.NVIDIA.com: 01/26/2026 via link
Hugging Face: 01/26/2026 via link

Reference(s):

  • Moonshot AI Official Website
  • Kimi K2.5 HuggingFace Model Card

Model Architecture:

Architecture Type: Transformer
Network Architecture: Mixture-of-Experts (MoE)
Total Parameters: 1T
Activated Parameters: 32B
Number of Layers: 61 (including 1 Dense layer)
Attention Hidden Dimension: 7168
MoE Hidden Dimension (per Expert): 2048
Number of Attention Heads: 64
Number of Experts: 384
Selected Experts per Token: 8
Number of Shared Experts: 1
Vocabulary Size: 160K
Attention Mechanism: MLA (Multi-head Latent Attention)
Activation Function: SwiGLU
Vision Encoder: MoonViT
Vision Encoder Parameters: 400M
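
For convenience, the architecture values above can be collected into a single configuration object. The sketch below is purely illustrative: the field names are hypothetical and do not correspond to the keys in the released checkpoint's configuration files.

from dataclasses import dataclass

@dataclass(frozen=True)
class KimiK25ArchSummary:
    """Illustrative summary of the values listed above; field names are
    hypothetical, not the checkpoint's actual config keys."""
    total_params: str = "1T"
    activated_params: str = "32B"
    num_layers: int = 61                 # including 1 dense layer
    attention_hidden_dim: int = 7168
    moe_hidden_dim_per_expert: int = 2048
    num_attention_heads: int = 64
    num_experts: int = 384
    experts_per_token: int = 8           # routed experts selected per token
    num_shared_experts: int = 1
    vocab_size: str = "160K"
    attention_mechanism: str = "MLA"     # Multi-head Latent Attention
    activation: str = "SwiGLU"
    vision_encoder: str = "MoonViT"      # ~400M parameters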

Input:

Input Types: Image, Video, Text
Input Formats: Red, Green, Blue (RGB), String
Input Parameters: Two-Dimensional (2D), One-Dimensional (1D)
Other Input Properties: Supports image, video, PDF, and text inputs. Video input is experimental. Visual features are compressed via spatial-temporal pooling before projection into the LLM.
Input Context Length: 256K tokens

Key Capabilities

  • Native Multimodality: Pre-trained on vision-language tokens, excels in visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual inputs
  • Coding with Vision: Generates code from visual specifications (UI designs, video workflows) and autonomously orchestrates tools for visual data processing
  • Agent Swarm: Transitions from single-agent scaling to a self-directed, coordinated swarm-like execution scheme; decomposes complex tasks into parallel sub-tasks executed by dynamically instantiated, domain-specific agents
  • Multi-modal Agents: Building general agents tailored for unique, scenario-specific automation
  • Advanced Web Development: Using image search tools to autonomously find assets and refine dynamic layouts
  • Visual Analysis: High-level comprehension and reasoning for image and video data
  • Complex Tool Use: Agentic search and tool-augmented workflows

Output:

Output Types: Text
Output Format: String
Output Parameters: One-Dimensional (1D)
Other Output Properties: Generates text responses based on multi-modal inputs including reasoning, analysis, and code generation. Supports both Thinking mode (with reasoning traces) and Instant mode.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engines:

  • vLLM
  • SGLang
  • KTransformers

Supported Hardware:

  • NVIDIA Hopper: H100, H200

Preferred Operating Systems: Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

Kimi K2.5 v1.0

Training, Testing, and Evaluation Datasets:

Training Dataset

Data Modality: Image, Text, Video
Training Data Collection: Approximately 15 trillion mixed visual and text tokens
Training Labeling: Undisclosed
Training Properties: Continual pretraining on Kimi-K2-Base

Testing Dataset

Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed

Evaluation Dataset

Evaluation Data Collection: Automated
Evaluation Labeling: Human
Evaluation Properties: Evaluated using Kimi Vendor Verifier on standard multi-modal benchmarks. Results reported with Thinking mode enabled, temperature=1.0, top-p=0.95, context length 256K tokens.

Evaluation Benchmark Scores
| Benchmark | Kimi K2.5 (Thinking) | GPT-5.2 (xhigh) | Claude 4.5 Opus (Extended Thinking) | Gemini 3 Pro (High Thinking Level) | DeepSeek V3.2 (Thinking) | Qwen3-VL-235B-A22B-Thinking |
| --- | --- | --- | --- | --- | --- | --- |
| Reasoning & Knowledge | | | | | | |
| HLE-Full | 30.1 | 34.5 | 30.8 | 37.5 | 25.1† | - |
| HLE-Full (w/ tools) | 50.2 | 45.5 | 43.2 | 45.8 | 40.8† | - |
| AIME 2025 | 96.1 | 100 | 92.8 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 99.4 | 92.9* | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 86.3 | 78.5* | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 92.4 | 87.0 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 86.7* | 89.3* | 90.1 | 85.0 | - |
| Vision & Video | | | | | | |
| MMMU-Pro | 78.5 | 79.5* | 74.0 | 81.0 | - | 69.3 |
| CharXiv (RQ) | 77.5 | 82.1 | 67.2* | 81.4 | - | 66.1 |
| MathVision | 84.2 | 83.0 | 77.1* | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 82.8* | 80.2* | 89.8* | - | 85.8 |
| ZeroBench | 9 | 9* | 3* | 8* | - | 4* |
| ZeroBench (w/ tools) | 11 | 7* | 9* | 12* | - | 3* |
| OCRBench | 92.3 | 80.7* | 86.5* | 90.3* | - | 87.5 |
| OmniDocBench 1.5 | 88.8 | 85.7 | 87.7* | 88.5 | - | 82.0* |
| InfoVQA (val) | 92.6 | 84* | 76.9* | 57.2* | - | 89.5 |
| SimpleVQA | 71.2 | 55.8* | 69.7* | 69.7* | - | 56.8* |
| WorldVQA | 46.3 | 28.0 | 36.8 | 47.4 | - | 23.5 |
| VideoMMMU | 86.6 | 85.9 | 84.4* | 87.6 | - | 80.0 |
| MMVU | 80.4 | 80.8* | 77.3 | 77.5 | - | 71.1 |
| MotionBench | 70.4 | 64.8 | 60.3 | 70.3 | - | - |
| VideoMME | 87.4 | 82.1 | - | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 76.5 | 67.2 | 77.7* | - | 65.6* |
| LVBench | 75.9 | - | - | 73.5* | - | 63.6 |
| Coding | | | | | | |
| SWE-Bench Verified | 76.8 | 80.0 | 80.9 | 76.2 | 73.1 | - |
| SWE-Bench Pro | 50.7 | 55.6 | 55.4* | - | - | - |
| SWE-Bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 54.0 | 59.3 | 54.2 | 46.4 | - |
| PaperBench | 63.5 | 63.7* | 72.9* | - | 47.1 | - |
| CyberGym | 41.3 | - | 50.6 | 39.9* | 17.3* | - |
| SciCode | 48.7 | 52.1 | 49.5 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | - | 54.6* | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | - | 82.2* | 87.4* | 83.3 | - |
| Long Context | | | | | | |
| Longbench v2 | 61.0 | 54.5* | 64.4* | 68.2* | 59.8* | - |
| AA-LCR | 70.0 | 72.3* | 71.3* | 65.3* | 64.3* | - |
| Agentic Search | | | | | | |
| BrowseComp | 60.6 | 65.8 | 37.0 | 37.8 | 51.4 | - |
| BrowseComp (w/ ctx manage) | 74.9 | 57.8 | 59.2 | 67.6 | - | - |
| BrowseComp (Agent Swarm) | 78.4 | - | - | - | - | - |
| WideSearch (iter-f1) | 72.7 | - | 76.2* | 57.0 | 32.5* | - |
| WideSearch (iter-f1, Agent Swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 71.3* | 76.1* | 63.2* | 60.9* | - |
| FinSearchComp (T2&T3) | 67.8 | - | 66.2* | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 45.0 | 47.7* | 45.5* | 49.5* | - |

Inference

Acceleration Engine: vLLM
Test Hardware: H200

Inference Modes

  • Thinking Mode: Includes reasoning traces via reasoning_content in the response. Recommended temperature=1.0.
  • Instant Mode: Direct responses without reasoning traces. Recommended temperature=0.6.

Quantization

The model uses native INT4 weight-only quantization (group size 32, compressed-tensors format), optimized for the Hopper architecture.
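
As a rough illustration of serving the quantized checkpoint, the snippet below uses vLLM's offline API. It is a minimal sketch, not an official recipe: it assumes vLLM auto-detects the compressed-tensors INT4 configuration from the checkpoint, and the tensor-parallel size and context length shown are placeholder values for a multi-GPU Hopper node.

from vllm import LLM, SamplingParams

# Minimal sketch (assumptions: the INT4 compressed-tensors config is picked up
# automatically from the checkpoint; tensor_parallel_size matches your H100/H200 node).
llm = LLM(
    model="moonshotai/Kimi-K2.5",
    tensor_parallel_size=8,      # placeholder; size to your hardware
    max_model_len=262144,        # 256K-token context window
    trust_remote_code=True,
)

# Thinking-mode sampling settings recommended in this card.
sampling = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=2048)

outputs = llm.chat(
    [{"role": "user", "content": "Summarize the trade-offs of INT4 weight-only quantization."}],
    sampling_params=sampling,
)
print(outputs[0].outputs[0].text)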

Model Usage

The demos below show how to call our official API.

For third-party APIs deployed with vLLM or SGLang, please note the following:

  • Chat with video content is an experimental feature and is currently supported only in our official API.

  • The recommended temperature is 1.0 for Thinking mode and 0.6 for Instant mode.

  • The recommended top_p is 0.95.

  • To use Instant mode, you need to pass {'chat_template_kwargs': {"thinking": False}} in extra_body, as shown in the sketch below.
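
Putting these recommendations together, here is a short sketch of both modes against an OpenAI-compatible endpoint served by vLLM or SGLang. The base URL, API key, and model name are placeholders; only the sampling values and the chat_template_kwargs toggle come from the notes above.

import openai

# Placeholders: point these at your own vLLM/SGLang deployment.
client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "moonshotai/Kimi-K2.5"

# Thinking mode: temperature 1.0, top_p 0.95 (recommended above).
thinking = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Explain MoE routing in one paragraph."}],
    temperature=1.0,
    top_p=0.95,
)

# Instant mode: temperature 0.6, thinking disabled via chat_template_kwargs.
instant = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Explain MoE routing in one paragraph."}],
    temperature=0.6,
    top_p=0.95,
    extra_body={"chat_template_kwargs": {"thinking": False}},
)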

Chat Completion

This is a simple chat completion script that shows how to call the K2.5 API in Thinking and Instant modes.

import openai
def simple_chat(client: openai.OpenAI, model_name: str):
    messages = [
        {'role': 'system', 'content': 'You are Kimi, an AI assistant created by Moonshot AI.'},
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'which one is bigger, 9.11 or 9.9? think carefully.'}
            ],
        },
    ]
    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=4096
    )
    print('===== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('===== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # To use instant mode, pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for official API
        # extra_body= {'chat_template_kwargs': {"thinking": False}}  # this is for vLLM/SGLang
    )
    print('===== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')

Chat Completion with Visual Content

K2.5 supports image and video input.

The following example demonstrates how to call the K2.5 API with image input:

import openai
import base64
import requests

def chat_with_image(client: openai.OpenAI, model_name: str):
    url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/kimi-logo.png'
    image_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'Describe this image in detail.'},
                {
                    'type': 'image_url',
                    'image_url': {'url': f'data:image/png;base64,{image_base64}'},
                },
            ],
        }
    ]

    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=8192
    )
    print('===== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('===== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported by passing {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for official API
        # extra_body= {'chat_template_kwargs': {"thinking": False}}  # this is for vLLM/SGLang
    )
    print('===== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')

    return response.choices[0].message.content

Interleaved Thinking and Multi-Step Tool Call

K2.5 shares the same interleaved-thinking and multi-step tool-call design as K2 Thinking. For a usage example, please refer to the K2 Thinking documentation.
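
The referenced pattern is a standard multi-step tool-call loop over an OpenAI-compatible API. The sketch below is only an illustration of that loop; the client, tool definition, and stubbed result are hypothetical, and the K2 Thinking documentation remains the authoritative reference for how reasoning content should be carried across turns.

import json
import openai

# Hypothetical example: endpoint, model name, and tool are placeholders.
client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "moonshotai/Kimi-K2.5"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return json.dumps({"city": city, "weather": "sunny"})  # stubbed tool result

messages = [{"role": "user", "content": "What's the weather in Beijing today?"}]

while True:
    response = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
    msg = response.choices[0].message
    # Append the assistant turn (including any reasoning_content the server returns)
    # so the model sees its own interleaved thinking on the next step.
    messages.append(msg.model_dump(exclude_none=True))
    if not msg.tool_calls:
        break
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)  # dispatch to the matching local tool
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})

print(messages[-1]["content"])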

Known Limitations

  • Model is trained and optimized for the Hopper architecture; Blackwell support is a separate NVIDIA development effort
  • Native INT4 quantization
  • Video input is experimental

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have the proper rights and permissions for all input image content; if an image includes people, personal health information, or intellectual property, the generated output will not blur or maintain the proportions of the image subjects included.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.