11 results (Models: 11, Blueprints: 0, Other: 0), sorted by best match.
| Publisher | Model | Description | Tags | Downloads | Updated |
|---|---|---|---|---|---|
| Z.ai | glm5 | GLM-5 744B MoE enables efficient reasoning for complex systems and long-horizon agentic tasks. | MoE +3 | 7.94M | 3w |
| Stepfun-ai | step-3.5-flash | 200B open-source reasoning engine with sparse MoE powering frontier agentic AI. | Agentic +3 | 7.29M | 1mo |
| Minimaxai | minimax-m2.1 | MiniMax M2.1 excels in multi-language coding, app/web development, office AI, and agent integration. | Agentic +3 | 8.38M | 1mo |
| Qwen | qwen3-next-80b-a3b-instruct | Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long-context AI. | chat +2 | 11.15M | 5mo |
| Moonshotai | kimi-k2-instruct | State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities. | coding +3 | 20.23M | 7mo |
| Qwen | qwen3-coder-480b-a35b-instruct | Excels in agentic coding and browser use, supports 256K context, and delivers top results. | agentic coding +4 | 3.83M | 6mo |
| Qwen | qwen3.5-397b-a17b | Next-gen Qwen 3.5 VLM (400B MoE) brings advanced vision, chat, RAG, and agentic capabilities. | MoE +4 | 6.55M | 3w |
| Mistral AI | mistral-large-3-675b-instruct-2512 | A state-of-the-art general-purpose MoE VLM ideal for chat, agentic, and instruction-based use cases. | language generation +4 | 6.17M | 3mo |
| DeepSeek AI | deepseek-v3.1-terminus | DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling. | tool calling +3 | 13.01M | 5mo |
| Mistral AI | devstral-2-123b-instruct-2512 | State-of-the-art open code model with deep reasoning, 256K context, and unmatched efficiency. | coding +4 | 5.78M | 3mo |
| Moonshotai | kimi-k2-instruct-0905 | Follow-on to Kimi-K2-Instruct with a longer context window and enhanced reasoning capabilities. | long-context +4 | 10.04M | 5mo |