Copyright © 2026 NVIDIA Corporation

Qwen (10 results)
qwen3.5-122b-a10b (Qwen)
122B MoE LLM (10B active) for coding, reasoning, multimodal chat. Agent-ready.
Model · tool calling · 1.13M · 5d
qwen3.5-397b-a17b (Qwen)
Next-gen Qwen 3.5 VLM (400B MoE) brings advanced vision, chat, RAG, and agentic capabilities.
Model · MoE · 6.9M · 3w
qwen3-next-80b-a3b-instruct (Qwen)
Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long-context AI.
Model · chat · 11.63M · 5mo
qwen3-next-80b-a3b-thinking (Qwen)
80B-parameter model with hybrid reasoning, a MoE architecture, and support for 119 languages.
Model · Reasoning · 4.04M · 6mo
qwen3-coder-480b-a35b-instruct (Qwen)
Excels in agentic coding and browser use, supporting a 256K context and delivering top results.
Model · agentic coding · 3.92M · 6mo
qwq-32b (Qwen)
Powerful reasoning model that achieves significantly enhanced performance on downstream tasks, especially hard problems.
Model · coding · 4.13M · 8mo
qwen2.5-7b-instruct (Qwen)
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
Model · Chinese Language Generation · 1.43M · 9mo
qwen2.5-coder-32b-instruct (Qwen)
Advanced LLM for code generation, reasoning, and fixing across popular programming languages.
Model · code completion · 5.59M · 8mo
qwen2.5-coder-7b-instruct (Qwen)
Powerful mid-size code model with a 32K context length, excelling at coding in multiple languages.
Model · code completion · 566K · 9mo
qwen2-7b-instruct (Qwen)
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
Model · Chinese Language Generation · 665K · 9mo
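The models above are served through an OpenAI-compatible chat-completions API. The sketch below builds a request for one of the listed models; the endpoint URL, the `qwen/<model>` id format, and the `NVIDIA_API_KEY` variable name are assumptions here — check the model's page on build.nvidia.com for the exact values.

```python
# Minimal sketch: constructing a chat-completion request for a listed model.
# API_URL and the model-id format are assumptions, not confirmed by this page.
import json
import os

API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"  # assumed endpoint


def build_request(model: str, prompt: str, api_key: str) -> tuple[dict, dict]:
    """Return (headers, payload) for an OpenAI-style chat-completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        "temperature": 0.2,
    }
    return headers, payload


if __name__ == "__main__":
    headers, payload = build_request(
        "qwen/qwen2.5-coder-32b-instruct",  # assumed publisher/model id format
        "Write a Python function that reverses a string.",
        os.environ.get("NVIDIA_API_KEY", "demo-key"),
    )
    print(json.dumps(payload, indent=2))
    # To actually send the request (requires a valid key):
    # import urllib.request
    # req = urllib.request.Request(API_URL, data=json.dumps(payload).encode(),
    #                              headers=headers, method="POST")
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```

Separating request construction from the network call keeps the payload shape easy to inspect and test without an API key.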