
122B MoE LLM (10B active) for coding, reasoning, multimodal chat. Agent-ready.
Qwen3.5-122B-A10B is a multimodal vision-language Mixture-of-Experts model designed for native multimodal agent applications, supporting text, image, and video inputs. It integrates multimodal learning, architectural efficiency, and reinforcement learning at scale to improve performance across reasoning, coding, agents, and visual understanding.
This model is ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA Qwen3.5-122B-A10B Model Card.
GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Apache License, Version 2.0.
Global
Use Case: Developers and enterprises can use Qwen3.5-122B-A10B for multimodal reasoning, coding and tool use, agentic workflows, and visual understanding tasks over images and video.
Build.NVIDIA.com: 03/06/2026 via [link]([TO BE PROVIDED BY DEVELOPER])
Huggingface: 02/24/2026 via link
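For illustration, a minimal sketch of the request shape for the multimodal chat use case above, using the OpenAI-style message format common to hosted inference endpoints. The message-part convention (`text` / `image_url` content parts) is widely used, but whether a given endpoint accepts it for this model, and the placeholder image URL, are assumptions.

```python
def build_multimodal_messages(prompt: str, image_url: str) -> list:
    """Build an OpenAI-style chat message list pairing text with an image.

    Uses the common ``image_url`` content-part convention; support on any
    particular endpoint serving this model is an assumption.
    """
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]


messages = build_multimodal_messages(
    "Describe the chart in this image.",
    "https://example.com/chart.png",  # placeholder URL
)
print(messages[0]["content"][0]["type"])  # -> text
```

The same list can be passed as the `messages` field of a chat-completions request body once an endpoint URL and model identifier are known.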
References:
Architecture Type: Transformer
Network Architecture: Qwen (Mixture-of-Experts)
Total Parameters: 122B
Active Parameters: 10B
Vocabulary Size: 248,320
Input Types: Text, Image, Video
Input Formats: Text: String; Image: Red, Green, Blue (RGB); Video: mp4, mov, webm
Input Parameters: One Dimensional (1D), Two Dimensional (2D), Three Dimensional (3D)
Other Input Properties: Natively supports up to 262,144 tokens of context and is extensible to 1,010,000 tokens with YaRN scaling.
Input Context Length (ISL): 262,144
Output Types: Text
Output Format: String
Output Parameters: One Dimensional (1D)
Other Output Properties: Generates text responses for multimodal chat and agent workflows.
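As a rough sketch of how the YaRN extension noted above maps onto a Hugging Face-style `rope_scaling` configuration entry. The key names follow the convention used by recent Qwen releases; the exact schema and scaling factor for this model are assumptions derived from the two context lengths in this card.

```python
# Context windows taken from the model card above.
NATIVE_CTX = 262_144      # native context length
EXTENDED_CTX = 1_010_000  # YaRN-extended context length

# YaRN scales RoPE positions by roughly target / native.
factor = round(EXTENDED_CTX / NATIVE_CTX, 2)

# Hypothetical config.json patch in the Hugging Face rope_scaling style;
# key names are an assumption based on prior Qwen releases.
rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": NATIVE_CTX,
}
print(rope_scaling)
```

Static YaRN scaling of this kind applies the same factor to all requests, so it is typically enabled only when long-context inputs are actually expected.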
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Runtime Engines:
Supported Hardware:
Preferred/Supported Operating Systems: Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Qwen3.5-122B-A10B
Data Modality: Undisclosed
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Undisclosed
Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed
Evaluation Data Collection: Undisclosed
Evaluation Labeling: Undisclosed
Evaluation Properties: Undisclosed
Acceleration Engine: vLLM
Test Hardware: H100
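A hedged sketch of serving the model with vLLM, the acceleration engine listed above. The Hugging Face model ID, the tensor-parallel degree, and the port are placeholders that depend on the actual release and GPU configuration, not confirmed values.

```shell
# Launch an OpenAI-compatible server with vLLM (model ID is a placeholder).
vllm serve Qwen/Qwen3.5-122B-A10B \
  --tensor-parallel-size 8 \
  --max-model-len 262144

# Once the server is up, query it via the OpenAI-compatible endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen3.5-122B-A10B",
       "messages": [{"role": "user", "content": "Hello"}]}'
```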
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please make sure you have proper rights and permissions for all input image and video content; if an image or video includes people, personal health information, or intellectual property, the model will not blur, redact, or otherwise alter the subjects included.
Please report model quality, risk, security vulnerabilities, or NVIDIA AI Concerns here.