Follow-on version of Kimi-K2-Instruct with longer context window and enhanced reasoning capabilities

Kimi-K2-Instruct-0905 is the latest, most capable version of Kimi K2, a state-of-the-art Mixture-of-Experts (MoE) language model with 1 trillion total parameters and 32 billion active parameters. It delivers enhanced agentic coding intelligence, improved frontend coding experience, and supports extended context lengths of 256k tokens, enabling long-horizon tasks, tool calling, and chat completion.
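As a sketch of how a model like this is typically invoked through an OpenAI-compatible chat completions endpoint, the snippet below assembles a request body. The model identifier and parameter values are illustrative assumptions, not confirmed by this card:

```python
# Minimal sketch of an OpenAI-style chat completion payload for this model.
# The model identifier below is an assumed value for illustration.

def build_chat_request(messages, model="moonshotai/kimi-k2-instruct-0905",
                       max_tokens=1024, temperature=0.6):
    """Assemble a chat completion request body (dict) ready to POST as JSON."""
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_request(
    [{"role": "user", "content": "Write a Python function that reverses a string."}]
)
```

In practice this dict would be serialized and sent to the serving endpoint; only the payload shape is shown here.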
This model is ready for commercial use.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to the Non-NVIDIA Kimi-K2-Instruct-0905 Model Card.
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Modified MIT License.
Deployment Geography: Global
Release Dates:
Build.NVIDIA.com: 09/22/2025 via link
Hugging Face: 09/05/2025 via link
Architecture Type: Mixture-of-Experts (MoE) language model
Network Architecture: Transformer-based with MLA attention mechanism
Total Parameters: 1T (1 trillion)
Active Parameters: 32B (32 billion)
Vocabulary Size: 160K
Base Model: Kimi K2
Input Types: Text
Input Formats: Natural language prompts, conversational messages, tool calling requests
Input Parameters: [One-Dimensional (1D)]
Other Input Properties: Max Input Tokens: 256K; supports tool calling, chat completion, and extended context processing
Input Context Length (ISL): 256K tokens
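Since tool calling requests are listed among the input formats, the sketch below shows the widely used OpenAI-style tools schema. The tool name, its parameters, and the model identifier are purely hypothetical examples, not part of this card:

```python
import json

# Illustrative tool definition in the common OpenAI-style schema.
# The tool name ("get_weather") and its parameters are hypothetical.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

request_body = {
    "model": "moonshotai/kimi-k2-instruct-0905",  # assumed identifier
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

encoded = json.dumps(request_body)
```

With a schema like this, the model can respond with a structured tool call instead of plain text, which the caller then executes and feeds back into the conversation.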
Output Type: Text
Output Format: Natural language responses, structured tool calls, code generation
Output Parameters: [One-Dimensional (1D)]
Other Output Properties: Max Output Tokens: configurable; supports tool calling, code generation, and conversational responses
Output Context Length (OSL): Configurable based on remaining context
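The note that output length is "configurable based on remaining context" comes down to simple arithmetic over the shared 256K-token window. A minimal sketch, where the specific token counts are illustrative:

```python
CONTEXT_WINDOW = 256 * 1024  # 256K tokens, shared between input and output

def max_output_budget(input_tokens, context_window=CONTEXT_WINDOW):
    """Tokens left for generation after the prompt is accounted for."""
    if input_tokens >= context_window:
        raise ValueError("Prompt already fills or exceeds the context window")
    return context_window - input_tokens

# A 200,000-token prompt leaves 62,144 tokens of output budget.
budget = max_output_budget(200_000)
```

Serving stacks typically cap the requested max output tokens at this remaining budget rather than rejecting the request outright.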
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Runtime Engines: SGLang
Supported Hardware:
Operating Systems: Linux
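Since SGLang is listed as the runtime engine, the following is a hedged sketch of a typical SGLang server launch. The tensor-parallel degree, port, and model path are deployment-dependent assumptions, not values specified by this card:

```shell
# Illustrative SGLang launch; --tp degree and --port are assumptions
# that depend on the deployment hardware and environment.
python -m sglang.launch_server \
  --model-path moonshotai/Kimi-K2-Instruct-0905 \
  --tp 8 \
  --trust-remote-code \
  --port 30000
```

Once running, the server exposes an OpenAI-compatible HTTP API for chat completion and tool calling requests.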
Kimi-K2-Instruct-0905
Data Modality: Text
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Pre-trained on large-scale text corpora with a mixture-of-experts architecture, enhanced for agentic coding intelligence and tool calling capabilities
Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Regular testing on coding benchmarks and agentic intelligence tasks
Evaluation Benchmark Score: SWE-Bench verified: 69.2 ± 0.63, SWE-Bench Multilingual: 55.9 ± 0.72, Multi-SWE-Bench: 33.5 ± 0.28, Terminal-Bench: 44.5 ± 2.03, SWE-Dev: 66.6 ± 0.72
Evaluation Data Collection: Undisclosed
Evaluation Labeling: Undisclosed
Evaluation Properties: Evaluated on coding benchmarks with mean ± std over five independent runs
Acceleration Engine: SGLang
Test Hardware: H100
Key features include:
- Enhanced agentic coding intelligence
- Improved frontend coding experience
- Extended context length of 256K tokens
- Tool calling and chat completion support
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here