Open Mixture-of-Experts LLM (230B total parameters, 10B active) for reasoning, coding, and tool-use/agent workflows.
Latency-optimized language model that excels at code, math, general knowledge, and instruction following.