17 models, sorted by best match.
| Publisher | Model | Description | Tags | Usage | Age |
|---|---|---|---|---|---|
| Z.ai | glm5 | GLM-5 744B MoE enables efficient reasoning for complex systems and long-horizon agentic tasks. | MoE +3 | 8.55M | 3w |
| Qwen | qwen3.5-397b-a17b | Next-gen Qwen 3.5 VLM (400B MoE) brings advanced vision, chat, RAG, and agentic capabilities. | MoE +4 | 6.9M | 3w |
| NVIDIA | nemotron-3-nano-30b-a3b | Open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more. | MoE +4 | 12.92M | 2mo |
| Qwen | qwen3-coder-480b-a35b-instruct | Excels in agentic coding and browser use, supports 256K context, and delivers top results. | agentic coding +4 | 3.92M | 6mo |
| Meta | llama-4-scout-17b-16e-instruct | A multimodal, multilingual 16-expert MoE model with 17B active parameters. | language generation +4 | 179K | 7mo |
| OpenAI | gpt-oss-120b | Mixture-of-Experts (MoE) text-only reasoning LLM designed to fit on a single 80 GB GPU. | text-to-text +3 | 36.48M | 7mo |
| OpenAI | gpt-oss-20b | Smaller Mixture-of-Experts (MoE) text-only LLM for efficient reasoning and math. | text-to-text +3 | 8.19M | 7mo |
| AI21 Labs | jamba-1.5-mini-instruct | Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks. | chat +2 | 536K | 9mo |
| Moonshotai | kimi-k2.5 | 1T-parameter multimodal MoE for high-capacity video and image understanding with efficient inference. | Multimodal +4 | 21.97M | 1mo |
| Meta | llama-4-maverick-17b-128e-instruct | A general-purpose multimodal, multilingual 128-expert MoE model with 17B active parameters. | language generation +4 | 3.28M | 7mo |
| Mistral AI | mixtral-8x22b-instruct-v0.1 | An MoE LLM that follows instructions, completes requests, and generates creative text. | Advanced Reasoning +4 | 4.86M | 7mo |
| Mistral AI | mixtral-8x7b-instruct-v0.1 | An MoE LLM that follows instructions, completes requests, and generates creative text. | Advanced Reasoning +4 | 705K | 7mo |
| Qwen | qwen3-next-80b-a3b-instruct | Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long-context AI. | chat +2 | 11.63M | 5mo |
| Qwen | qwen3-next-80b-a3b-thinking | 80B-parameter model with hybrid reasoning, an MoE architecture, and support for 119 languages. | Reasoning +2 | 4.04M | 6mo |
| Qwen | qwen3.5-122b-a10b | Agent-ready 122B MoE LLM (10B active) for coding, reasoning, and multimodal chat. | tool calling +4 | 1.13M | 5d |
| Stepfun-ai | step-3.5-flash | 200B open-source reasoning engine with sparse MoE powering frontier agentic AI. | Agentic +3 | 7.22M | 1mo |
| Mistral AI | mistral-large-3-675b-instruct-2512 | A state-of-the-art general-purpose MoE VLM ideal for chat, agentic, and instruction-based use cases. | language generation +4 | 6.36M | 3mo |
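Every result in this listing is a hosted model endpoint. As a minimal sketch of how one might be queried, assuming the catalog exposes an OpenAI-compatible API (NVIDIA's API catalog, for example, serves models at https://integrate.api.nvidia.com/v1) and that model IDs take the form publisher/model-id, both of which should be verified against the catalog's own docs:

```python
from openai import OpenAI

# Assumed OpenAI-compatible gateway; verify the base URL against the catalog docs.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="YOUR_API_KEY",  # placeholder; supply a real catalog API key
)

resp = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # publisher/model-id; the namespace prefix is an assumption
    messages=[{"role": "user", "content": "In two sentences, what is a Mixture-of-Experts model?"}],
    temperature=0.2,
    max_tokens=256,
)
print(resp.choices[0].message.content)
```

Any model string from the table above can be substituted; the base URL, key handling, and namespace prefix here are assumptions, not documented behavior.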
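Two of the cards above (nemotron-3-nano-30b-a3b and qwen3.5-122b-a10b) advertise tool calling. Assuming the same OpenAI-compatible interface accepts the standard tools parameter, a hedged sketch with a hypothetical get_weather function looks like this:

```python
from openai import OpenAI

# Same assumed OpenAI-compatible gateway as the previous sketch.
client = OpenAI(base_url="https://integrate.api.nvidia.com/v1", api_key="YOUR_API_KEY")

# Hypothetical tool definition; the schema follows the OpenAI tools format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function, not part of the catalog
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="nvidia/nemotron-3-nano-30b-a3b",  # namespace prefix assumed
    messages=[{"role": "user", "content": "What's the weather in Lisbon right now?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model chose to call a tool
    call = msg.tool_calls[0]
    print(call.function.name, call.function.arguments)  # arguments is a JSON string
else:
    print(msg.content)  # the model answered directly
```

Whether a given endpoint honors the tools parameter depends on the model, so check each card before relying on it; the returned arguments field is a JSON string the caller parses and executes locally.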