
Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability improvements for ultra-long-context AI.

80B-parameter AI model with hybrid reasoning, an MoE architecture, and support for 119 languages.

Excels at agentic coding and browser use, supports a 256K context, and delivers top results.

Advanced MoE model excelling at reasoning, multilingual tasks, and instruction following.

Distilled version of Qwen 2.5 32B using reasoning data generated by DeepSeek R1 for enhanced performance.

Distilled version of Qwen 2.5 14B using reasoning data generated by DeepSeek R1 for enhanced performance.

Distilled version of Qwen 2.5 7B using reasoning data generated by DeepSeek R1 for enhanced performance.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.

Advanced LLM for code generation, code reasoning, and code fixing across popular programming languages.

Powerful mid-size code model with a 32K context length, excelling at coding across multiple languages.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.