
A state-of-the-art general-purpose MoE VLM ideal for chat, agentic, and instruction-based use cases.

DeepSeek-V3.1: a hybrid-inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling.

Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long-context AI.

Follow-on version of Kimi-K2-Instruct with a longer context window and enhanced reasoning capabilities.

ByteDance's open-source LLM with long context, reasoning, and agentic intelligence.

Excels at agentic coding and browser use, supports 256K context, and delivers top results.

High-efficiency LLM with a hybrid Transformer-Mamba design, excelling at reasoning and agentic tasks.

State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.

Built for agentic workflows, this model excels at coding, instruction following, and function calling.

Leading model for reasoning and agentic AI accuracy on PC and edge devices.