
200B open-source reasoning engine with sparse MoE powering frontier agentic AI.
Step 3.5 Flash is a sparse Mixture-of-Experts (MoE) large language model developed by StepFun, engineered to deliver frontier reasoning and agentic capabilities with exceptional efficiency. Built on 196.81B total parameters with only ~11B active per token, it achieves the reasoning depth of top-tier models while maintaining real-time responsiveness at 100-300 tok/s (peaking at 350 tok/s on coding tasks).
This model is ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see the link to the Non-NVIDIA Step 3.5 Flash Model Card.
GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Apache License, Version 2.0.
Global
Use Case: Developers and enterprises seeking a high-performance open-weight LLM for coding assistants, deep research agents, GUI automation, and complex multi-step reasoning tasks. The model is optimized for DGX Spark deployment with fast inference speeds and is particularly strong at tool-calling and agentic applications.
Key Features:
Build.NVIDIA.com: 02/2026 via link
Hugging Face: 02/2026 via link
References:
Architecture Type: Transformer
Network Architecture: Mixture-of-Experts
Total Parameters: 196.81B (196B Backbone + 0.81B MTP Head)
Active Parameters: ~11B per token
Vocabulary Size: 128,896
Layers: 45
Hidden Size: 4,096
Experts: 288 routed experts + 1 shared expert (always active), Top-8 selection per token
Attention: 3:1 SWA ratio (three sliding-window layers per full-attention layer), window size 512
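The expert-routing scheme above (288 routed experts with top-8 selection per token, plus one always-active shared expert) can be sketched as follows. This is a minimal illustration of top-k MoE gating, not the published implementation; the renormalization of the top-8 weights and the unit weight on the shared expert are assumptions.

```python
import math
import random

def route_token(router_logits, top_k=8, shared_expert=288):
    """Select the top-k routed experts for one token and renormalize
    their softmax weights; the shared expert is always appended.
    (Weighting details are assumptions for illustration.)"""
    # Numerically stable softmax over the routed experts' logits.
    m = max(router_logits)
    exps = [math.exp(x - m) for x in router_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-k selection by routing probability.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    weights = {i: probs[i] / norm for i in top}
    # The shared expert is always active regardless of the router.
    weights[shared_expert] = 1.0
    return weights

random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(288)]  # one logit per routed expert
w = route_token(logits)  # 8 routed experts + 1 shared expert active
```

Because only 9 of 289 experts fire per token, the active parameter count (~11B) stays a small fraction of the 196.81B total.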
Input Types: Text
Input Formats: String
Input Parameters: One-Dimensional (1D)
Other Input Properties: Supports multi-turn conversations and tool-calling formats.
Input Context Length (ISL): 256,000 tokens
Output Types: Text
Output Format: String
Output Parameters: One-Dimensional (1D)
Other Output Properties: Generates coherent responses for coding, reasoning, and general text generation tasks.
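A multi-turn, tool-calling request for this model would typically follow the OpenAI-compatible chat schema that servers such as vLLM and SGLang expose. The sketch below builds such a payload; the repository id `stepfun-ai/Step-3.5-Flash` and the `get_weather` tool are hypothetical placeholders, not part of this model card.

```python
import json

# OpenAI-compatible chat request with one illustrative tool definition.
# Model id and tool schema are assumptions for demonstration only.
payload = {
    "model": "stepfun-ai/Step-3.5-Flash",  # hypothetical repo id
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the weather in Berlin?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative tool, not a real API
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)  # serialized request body for a chat endpoint
```

The model may then respond with a `tool_calls` message, whose result is appended as a `tool`-role turn in the next request.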
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Supported inference frameworks include vLLM, SGLang, llama.cpp, and Hugging Face Transformers.
Runtime Engines:
Supported Hardware:
Preferred Operating Systems: Linux
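On a multi-GPU Linux system, serving via vLLM might look like the launch fragment below. This is a hedged sketch: the repository id is a hypothetical placeholder, and the tensor-parallel degree (matching the 4x H100 test hardware) and context length are assumptions to adapt to your deployment.

```shell
# Hypothetical vLLM launch for Step 3.5 Flash on 4x H100.
# Repo id and flag values are assumptions, not from this model card.
vllm serve stepfun-ai/Step-3.5-Flash \
    --tensor-parallel-size 4 \
    --max-model-len 256000
```

The resulting server exposes an OpenAI-compatible endpoint on port 8000 by default.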
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Step 3.5 Flash v1.0
Data Modality: Text
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Undisclosed
Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed
Evaluation Benchmark Score: Step 3.5 Flash achieves frontier-level performance across Agency, Reasoning, and Coding benchmarks. For more information, see the Detailed Benchmark Comparison Table below.
Evaluation Data Collection: Automated
Evaluation Labeling: Hybrid: Automated, Human
Evaluation Properties: Evaluated on industry-standard benchmarks for coding (SWE-bench Verified, LiveCodeBench-V6, Terminal-Bench 2.0), agentic capabilities (τ²-Bench, BrowseComp, GAIA, xbench-DeepSearch), and mathematical reasoning (AIME 2025, HMMT 2025, IMOAnswerBench).
| Benchmark | Step 3.5 Flash | DeepSeek V3.2 | Kimi K2 Thinking / K2.5 | GLM-4.7 | MiniMax M2.1 | MiMo-V2 Flash |
|---|---|---|---|---|---|---|
| # Activated Params | 11B | 37B | 32B | 32B | 10B | 15B |
| # Total Params (MoE) | 196B | 671B | 1T | 355B | 230B | 309B |
| Est. decoding cost (@ 128K context, Hopper GPU**) | 1.0x (100 tok/s, MTP-3, EP8) | 6.0x (33 tok/s, MTP-1, EP32) | 18.9x (33 tok/s, no MTP, EP32) | 18.9x (100 tok/s, MTP-3, EP8) | 3.9x (100 tok/s, MTP-3, EP8) | 1.2x (100 tok/s, MTP-3, EP8) |
| Agency | | | | | | |
| τ²-Bench | 88.2 | 80.3 | 74.3* / — | 87.4 | 80.2* | 80.3 |
| BrowseComp | 50.7 | 51.4 | 41.5* / 60.6 | 52.0 | 47.4 | 45.4 |
| BrowseComp (w/ Context Manager) | 69.0 | 67.6 | 60.2 / 74.9 | 67.5 | 62.0 | 58.3 |
| BrowseComp-ZH | 66.9 | 65.0 | 62.3 / 62.3* | 66.6 | 47.8* | 51.2* |
| BrowseComp-ZH (w/ Context Manager) | 73.7 | — | — / — | — | — | — |
| GAIA (no file) | 84.5 | 75.1* | 75.6* / 75.9* | 61.9* | 64.3* | 78.2* |
| xbench-DeepSearch (2025.05) | 83.7 | 78.0* | 76.0* / 76.7* | 72.0* | 68.7* | 69.3* |
| xbench-DeepSearch (2025.10) | 56.3 | 55.7* | — / 40+ | 52.3* | 43.0* | 44.0* |
| ResearchRubrics | 65.3 | 55.8* | 56.2* / 59.5* | 62.0* | 60.2* | 54.3* |
| Reasoning | | | | | | |
| AIME 2025 | 97.3 | 93.1 | 94.5 / 96.1 | 95.7 | 83.0 | 94.1 (95.1*) |
| HMMT 2025 (Feb.) | 98.4 | 92.5 | 89.4 / 95.4 | 97.1 | 71.0* | 84.4 (95.4*) |
| HMMT 2025 (Nov.) | 94.0 | 90.2 | 89.2* / — | 93.5 | 74.3* | 91.0* |
| IMOAnswerBench | 85.4 | 78.3 | 78.6 / 81.8 | 82.0 | 60.4* | 80.9* |
| Coding | | | | | | |
| LiveCodeBench-V6 | 86.4 | 83.3 | 83.1 / 85.0 | 84.9 | — | 80.6 (81.6*) |
| SWE-bench Verified | 74.4 | 73.1 | 71.3 / 76.8 | 73.8 | 74.0 | 73.4 |
| Terminal-Bench 2.0 | 51.0 | 46.4 | 35.7* / 50.8 | 41.0 | 47.9 | 38.5 |
Notes:
Acceleration Engine: vLLM
Test Hardware: H100x4
Recommended Inference Settings:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report model quality, risk, security vulnerabilities, or NVIDIA AI Concerns here.