10 results · Models (10) · Blueprints (0) · Other (0)
Sort by: Best Match
Z.ai · glm5
  GLM-5 744B MoE enables efficient reasoning for complex systems and long-horizon agentic tasks.
  Tags: Model, MoE, +3 · 8.55M · 3w

NVIDIA · nemotron-3-nano-30b-a3b
  An open, efficient MoE model with a 1M-token context, excelling in coding, reasoning, instruction following, tool calling, and more.
  Tags: Model, MoE, +4 · 12.92M · 2mo

OpenAI · gpt-oss-120b
  A Mixture-of-Experts (MoE) reasoning LLM (text-only) designed to fit within an 80 GB GPU.
  Tags: Model, text-to-text, +3 · 36.48M · 7mo

OpenAI · gpt-oss-20b
  A smaller Mixture-of-Experts (MoE) text-only LLM for efficient AI reasoning and math.
  Tags: Model, text-to-text, +3 · 8.19M · 7mo

Moonshotai · kimi-k2.5
  A 1T-parameter multimodal MoE for high-capacity video and image understanding with efficient inference.
  Tags: Model, Multimodal, +4 · 21.97M · 1mo

Mistral AI · mixtral-8x22b-instruct-v0.1
  An MoE LLM that follows instructions, completes requests, and generates creative text.
  Tags: Model, Advanced Reasoning, +4 · 4.86M · 7mo

Mistral AI · mixtral-8x7b-instruct-v0.1
  An MoE LLM that follows instructions, completes requests, and generates creative text.
  Tags: Model, Advanced Reasoning, +4 · 705K · 7mo

Qwen · qwen3-next-80b-a3b-thinking
  An 80B-parameter AI model with hybrid reasoning, an MoE architecture, and support for 119 languages.
  Tags: Model, Reasoning, +2 · 4.04M · 6mo

Qwen · qwen3.5-122b-a10b
  A 122B MoE LLM (10B active) for coding, reasoning, and multimodal chat; agent-ready.
  Tags: Model, tool calling, +4 · 1.13M · 5d

Stepfun-ai · step-3.5-flash
  A 200B open-source reasoning engine with sparse MoE powering frontier agentic AI.
  Tags: Model, Agentic, +3 · 7.22M · 1mo