
1T-parameter multimodal mixture-of-experts (MoE) model for high-capacity video and image understanding with efficient inference.

Open MoE LLM (230B total parameters, 10B active) for reasoning, coding, and tool-use/agent workflows.

Smaller text-only MoE LLM for efficient reasoning and math.

Text-only MoE reasoning LLM designed to fit on a single 80 GB GPU.

State-of-the-art open MoE model with strong reasoning, coding, and agentic capabilities.
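
The models above share the mixture-of-experts idea: a router sends each token to only a few expert feed-forward blocks, so the parameters active per token are a small fraction of the total (e.g., 10B active out of 230B). The sketch below is a minimal, illustrative top-k routed MoE layer in PyTorch; the class name `TopKMoE`, the dimensions, and the expert count are assumptions for illustration, not the configuration of any model listed here.

```python
# Minimal top-k MoE routing sketch (illustrative sizes, not any listed model's config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, n_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is an ordinary FFN; only top_k of them run per token.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                  # x: (n_tokens, d_model)
        scores = self.router(x)                            # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)     # pick k experts per token
        weights = F.softmax(weights, dim=-1)               # normalize the k gate weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                   # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = TopKMoE()
tokens = torch.randn(8, 1024)
print(layer(tokens).shape)                                 # torch.Size([8, 1024])
```

With `top_k=2` of 64 experts in this toy setup, each token touches roughly 1/32 of the expert parameters, which is why an MoE model's total parameter count can be an order of magnitude larger than the parameters it activates per token.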