Excels at agentic coding and browser use, supports 256K context, and delivers top-tier results.
A smaller Mixture of Experts (MoE) text-only LLM for efficient reasoning and math.
A text-only Mixture of Experts (MoE) reasoning LLM designed to fit within a single 80 GB GPU.
Advanced MoE model excelling at reasoning, multilingual tasks, and instruction following.
A general-purpose multimodal, multilingual MoE model with 128 experts and 17B parameters.
A multimodal, multilingual MoE model with 16 experts and 17B parameters.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Advanced LLM based on a Mixture of Experts architecture that delivers compute-efficient content generation.
An MoE LLM that follows instructions, completes requests, and generates creative text.
An MoE LLM that follows instructions, completes requests, and generates creative text.