
Smaller Mixture of Experts (MoE) text-only LLM for efficient AI reasoning and math

Mixture of Experts (MoE) reasoning LLM (text-only) designed to fit on a single 80 GB GPU.

High efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Advanced reasoning MoE model excelling at reasoning, multilingual tasks, and instruction following.

High efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Lightweight reasoning model for applications in latency-bound, memory- and compute-constrained environments.

High performance reasoning model optimized for efficiency and edge deployment

Superior inference efficiency with highest accuracy for scientific and complex math reasoning, coding, tool calling, and instruction following.

Distilled version of Llama 3.1 8B using reasoning data generated by DeepSeek R1 for enhanced performance.

State-of-the-art open model for reasoning, code, math, and tool calling, suitable for edge agents.

Latency-optimized language model excelling in code, math, general knowledge, and instruction-following.

Leading reasoning and agentic AI accuracy model for PC and edge.

Advanced LLM for reasoning, math, general knowledge, and function calling

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Instruction-tuned LLM achieving SoTA performance on reasoning, math, and general knowledge capabilities.

Distilled version of Qwen 2.5 14B using reasoning data generated by DeepSeek R1 for enhanced performance.

Distilled version of Qwen 2.5 32B using reasoning data generated by DeepSeek R1 for enhanced performance.

Distilled version of Qwen 2.5 7B using reasoning data generated by DeepSeek R1 for enhanced performance.

Supports Chinese and English chat, coding, math, instruction following, and quiz solving.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.

State-of-the-art open model trained on open datasets, excelling in reasoning, math, and science.

Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.