
Open Mixture-of-Experts LLM (230B total parameters, 10B active) for reasoning, coding, and tool-use/agent workflows.

DeepSeek-V3.1: a hybrid-inference LLM with Think/Non-Think modes, stronger agent capabilities, 128K context, and strict function calling.
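"Strict" function calling generally means the model's tool-call arguments are constrained to a declared JSON schema rather than free-form text. A minimal sketch of such a request payload, assuming an OpenAI-compatible chat-completions API (the model identifier and the `get_weather` tool here are illustrative, not part of this catalog):

```python
import json

# Illustrative tool schema: with "strict" enabled, the model's emitted
# arguments must validate against this JSON schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": True,  # request schema-strict arguments
    },
}

# Request body as it would be sent to an OpenAI-compatible
# /chat/completions endpoint.
payload = {
    "model": "deepseek-chat",  # assumed model identifier
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [weather_tool],
}

print(json.dumps(payload, indent=2))
```

On the serving side, strictness is typically enforced with constrained decoding, so the returned `tool_calls[].function.arguments` string is guaranteed to parse against the declared schema.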

Qwen3-Next Instruct combines hybrid attention, a sparse Mixture-of-Experts design, and training-stability improvements for ultra-long-context workloads.

A follow-on version of Kimi-K2-Instruct with a longer context window and enhanced reasoning capabilities.

ByteDance's open-source LLM with long-context support, strong reasoning, and agentic intelligence.

Excels at agentic coding and browser use, supports a 256K context window, and delivers top results.

A high-efficiency LLM with a hybrid Transformer-Mamba design, excelling at reasoning and agentic tasks.

State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.

Built for agentic workflows, this model excels at coding, instruction following, and function calling.

State-of-the-art open model for reasoning, code, math, and tool calling; suitable for edge agents.

A leading model for reasoning and agentic AI accuracy on PC and edge devices.

Natural, expressive voices in multiple languages, built for voice agents and brand ambassadors.

Latency-optimized language model excelling in code, math, general knowledge, and instruction-following.